00:00:00.000 Started by upstream project "autotest-nightly" build number 4274
00:00:00.000 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3637
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.001 Started by timer
00:00:00.163 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.164 The recommended git tool is: git
00:00:00.164 using credential 00000000-0000-0000-0000-000000000002
00:00:00.165 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.208 Fetching changes from the remote Git repository
00:00:00.212 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.254 Using shallow fetch with depth 1
00:00:00.254 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.254 > git --version # timeout=10
00:00:00.287 > git --version # 'git version 2.39.2'
00:00:00.287 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.307 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.307 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:08.102 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:08.114 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:08.126 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD)
00:00:08.126 > git config core.sparsecheckout # timeout=10
00:00:08.137 > git read-tree -mu HEAD # timeout=10
00:00:08.153 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5
00:00:08.169 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd"
00:00:08.169 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10
00:00:08.263 [Pipeline] Start of Pipeline
00:00:08.275 [Pipeline] library
00:00:08.276 Loading library shm_lib@master
00:00:08.276 Library shm_lib@master is cached. Copying from home.
00:00:08.288 [Pipeline] node
00:00:08.297 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:08.299 [Pipeline] {
00:00:08.308 [Pipeline] catchError
00:00:08.309 [Pipeline] {
00:00:08.319 [Pipeline] wrap
00:00:08.326 [Pipeline] {
00:00:08.331 [Pipeline] stage
00:00:08.333 [Pipeline] { (Prologue)
00:00:08.519 [Pipeline] sh
00:00:08.794 + logger -p user.info -t JENKINS-CI
00:00:08.811 [Pipeline] echo
00:00:08.813 Node: GP11
00:00:08.819 [Pipeline] sh
00:00:09.112 [Pipeline] setCustomBuildProperty
00:00:09.123 [Pipeline] echo
00:00:09.124 Cleanup processes
00:00:09.129 [Pipeline] sh
00:00:09.408 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:09.408 2740834 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:09.419 [Pipeline] sh
00:00:09.697 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:09.697 ++ awk '{print $1}'
00:00:09.697 ++ grep -v 'sudo pgrep'
00:00:09.697 + sudo kill -9
00:00:09.697 + true
00:00:09.713 [Pipeline] cleanWs
00:00:09.723 [WS-CLEANUP] Deleting project workspace...
00:00:09.723 [WS-CLEANUP] Deferred wipeout is used...
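The "Cleanup processes" step above builds a PID list by piping `pgrep -af` through `grep -v` (to drop the `pgrep` invocation itself) and `awk` (to keep only the PID column) before `kill -9`. A minimal sketch of that filtering, with the match stage separated so it can be exercised on canned input; the function name is illustrative, not from the job:

```shell
#!/usr/bin/env bash
# filter_pids: given "PID command-line" lines on stdin (the format
# pgrep -af emits), drop the pgrep invocation itself and print only
# the PID column -- mirrors the log's grep -v / awk pipeline.
filter_pids() {
    grep -v 'sudo pgrep' | awk '{print $1}'
}

# In the job this feeds kill -9; "|| true" keeps the step green when
# no stale processes exist (the log shows this as "+ true"):
#   sudo kill -9 $(sudo pgrep -af "$WORKSPACE/spdk" | filter_pids) || true
```

The `|| true` matters because `kill -9` with an empty argument list exits nonzero, which would otherwise fail a `set -e` script even though "nothing to clean up" is the normal case.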
00:00:09.729 [WS-CLEANUP] done
00:00:09.733 [Pipeline] setCustomBuildProperty
00:00:09.749 [Pipeline] sh
00:00:10.026 + sudo git config --global --replace-all safe.directory '*'
00:00:10.120 [Pipeline] httpRequest
00:00:11.099 [Pipeline] echo
00:00:11.101 Sorcerer 10.211.164.20 is alive
00:00:11.112 [Pipeline] retry
00:00:11.114 [Pipeline] {
00:00:11.128 [Pipeline] httpRequest
00:00:11.132 HttpMethod: GET
00:00:11.132 URL: http://10.211.164.20/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:11.133 Sending request to url: http://10.211.164.20/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:11.144 Response Code: HTTP/1.1 200 OK
00:00:11.144 Success: Status code 200 is in the accepted range: 200,404
00:00:11.145 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:14.394 [Pipeline] }
00:00:14.411 [Pipeline] // retry
00:00:14.419 [Pipeline] sh
00:00:14.699 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:14.714 [Pipeline] httpRequest
00:00:15.095 [Pipeline] echo
00:00:15.096 Sorcerer 10.211.164.20 is alive
00:00:15.105 [Pipeline] retry
00:00:15.107 [Pipeline] {
00:00:15.119 [Pipeline] httpRequest
00:00:15.123 HttpMethod: GET
00:00:15.123 URL: http://10.211.164.20/packages/spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz
00:00:15.124 Sending request to url: http://10.211.164.20/packages/spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz
00:00:15.148 Response Code: HTTP/1.1 200 OK
00:00:15.149 Success: Status code 200 is in the accepted range: 200,404
00:00:15.149 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz
00:01:26.649 [Pipeline] }
00:01:26.667 [Pipeline] // retry
00:01:26.676 [Pipeline] sh
00:01:26.958 + tar --no-same-owner -xf spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz
00:01:29.500 [Pipeline] sh
00:01:29.783 + git -C spdk log --oneline -n5
00:01:29.783 83e8405e4 nvmf/fc: Qpair disconnect callback: Serialize FC delete connection & close qpair process
00:01:29.783 0eab4c6fb nvmf/fc: Validate the ctrlr pointer inside nvmf_fc_req_bdev_abort()
00:01:29.783 4bcab9fb9 correct kick for CQ full case
00:01:29.783 8531656d3 test/nvmf: Interrupt test for local pcie nvme device
00:01:29.783 318515b44 nvme/perf: interrupt mode support for pcie controller
00:01:29.794 [Pipeline] }
00:01:29.808 [Pipeline] // stage
00:01:29.817 [Pipeline] stage
00:01:29.819 [Pipeline] { (Prepare)
00:01:29.835 [Pipeline] writeFile
00:01:29.853 [Pipeline] sh
00:01:30.133 + logger -p user.info -t JENKINS-CI
00:01:30.144 [Pipeline] sh
00:01:30.467 + logger -p user.info -t JENKINS-CI
00:01:30.477 [Pipeline] sh
00:01:30.753 + cat autorun-spdk.conf
00:01:30.753 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:30.753 SPDK_TEST_NVMF=1
00:01:30.753 SPDK_TEST_NVME_CLI=1
00:01:30.753 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:30.753 SPDK_TEST_NVMF_NICS=e810
00:01:30.753 SPDK_RUN_ASAN=1
00:01:30.753 SPDK_RUN_UBSAN=1
00:01:30.753 NET_TYPE=phy
00:01:30.760 RUN_NIGHTLY=1
00:01:30.765 [Pipeline] readFile
00:01:30.788 [Pipeline] withEnv
00:01:30.790 [Pipeline] {
00:01:30.801 [Pipeline] sh
00:01:31.075 + set -ex
00:01:31.075 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:31.075 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:31.075 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:31.075 ++ SPDK_TEST_NVMF=1
00:01:31.075 ++ SPDK_TEST_NVME_CLI=1
00:01:31.075 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:31.075 ++ SPDK_TEST_NVMF_NICS=e810
00:01:31.075 ++ SPDK_RUN_ASAN=1
00:01:31.075 ++ SPDK_RUN_UBSAN=1
00:01:31.075 ++ NET_TYPE=phy
00:01:31.075 ++ RUN_NIGHTLY=1
00:01:31.075 + case $SPDK_TEST_NVMF_NICS in
00:01:31.075 + DRIVERS=ice
00:01:31.075 + [[ tcp == \r\d\m\a ]]
00:01:31.075 + [[ -n ice ]]
00:01:31.075 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:31.075 rmmod: ERROR: Module mlx4_ib is not currently loaded
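The package fetches above run Jenkins' `httpRequest` step inside a `retry` block, then unpack with `tar --no-same-owner` so extracted files are not chowned back to the archive's original UID/GID (which matters when CI extracts as root). A stand-alone sketch of the same fetch-with-retry-then-extract pattern, using `curl` in place of the Jenkins step (the function name, retry count, and use of curl are assumptions for illustration):

```shell
#!/usr/bin/env bash
# fetch_and_unpack URL OUTFILE [TRIES]
# Retry a download, then extract in the current directory.
# curl stands in for Jenkins' httpRequest step from the log.
fetch_and_unpack() {
    local url=$1 out=$2 tries=${3:-3} i
    for ((i = 1; i <= tries; i++)); do
        # -f: fail on HTTP errors; -sS: quiet but show errors; -L: follow redirects
        curl -fsSL -o "$out" "$url" && break
        [ "$i" -eq "$tries" ] && return 1
        sleep 1
    done
    # --no-same-owner: keep extracted files owned by the invoking user
    # instead of the UID/GID recorded in the archive
    tar --no-same-owner -xf "$out"
}
```

The retry wrapper is what lets a transient cache hiccup (the "Sorcerer ... is alive" probe above) heal without failing the whole build.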
00:01:31.075 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:01:31.075 rmmod: ERROR: Module irdma is not currently loaded
00:01:31.075 rmmod: ERROR: Module i40iw is not currently loaded
00:01:31.075 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:31.075 + true
00:01:31.075 + for D in $DRIVERS
00:01:31.075 + sudo modprobe ice
00:01:31.075 + exit 0
00:01:31.081 [Pipeline] }
00:01:31.091 [Pipeline] // withEnv
00:01:31.094 [Pipeline] }
00:01:31.104 [Pipeline] // stage
00:01:31.109 [Pipeline] catchError
00:01:31.110 [Pipeline] {
00:01:31.119 [Pipeline] timeout
00:01:31.119 Timeout set to expire in 1 hr 0 min
00:01:31.120 [Pipeline] {
00:01:31.131 [Pipeline] stage
00:01:31.132 [Pipeline] { (Tests)
00:01:31.144 [Pipeline] sh
00:01:31.425 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:31.425 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:31.425 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:31.425 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:31.425 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:31.425 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:31.425 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:31.425 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:31.426 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:31.426 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:31.426 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:31.426 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:31.426 + source /etc/os-release
00:01:31.426 ++ NAME='Fedora Linux'
00:01:31.426 ++ VERSION='39 (Cloud Edition)'
00:01:31.426 ++ ID=fedora
00:01:31.426 ++ VERSION_ID=39
00:01:31.426 ++ VERSION_CODENAME=
00:01:31.426 ++ PLATFORM_ID=platform:f39
00:01:31.426 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:31.426 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:31.426 ++ LOGO=fedora-logo-icon
00:01:31.426 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:31.426 ++ HOME_URL=https://fedoraproject.org/
00:01:31.426 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:31.426 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:31.426 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:31.426 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:31.426 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:31.426 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:31.426 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:31.426 ++ SUPPORT_END=2024-11-12
00:01:31.426 ++ VARIANT='Cloud Edition'
00:01:31.426 ++ VARIANT_ID=cloud
00:01:31.426 + uname -a
00:01:31.426 Linux spdk-gp-11 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:31.426 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:32.360 Hugepages
00:01:32.360 node hugesize free / total
00:01:32.360 node0 1048576kB 0 / 0
00:01:32.360 node0 2048kB 0 / 0
00:01:32.360 node1 1048576kB 0 / 0
00:01:32.360 node1 2048kB 0 / 0
00:01:32.360
00:01:32.360 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:32.360 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:01:32.360 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
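The driver setup above maps `SPDK_TEST_NVMF_NICS=e810` to the `ice` kernel module, unloads competing RDMA modules first, and tolerates `rmmod` failures for modules that were never loaded. A sketch of that selection logic, with the pure NIC-to-driver mapping split out so it can be checked without root; the function names are illustrative:

```shell
#!/usr/bin/env bash
# driver_for_nics: map a SPDK_TEST_NVMF_NICS value to its kernel
# module. Only the e810 case appears in this log; other NIC
# families are elided in this sketch.
driver_for_nics() {
    case $1 in
        e810) echo ice ;;
        *)    echo "" ;;
    esac
}

# load_nic_driver: unload conflicting RDMA modules (ignoring
# "not currently loaded" errors, as the log's "+ true" does),
# then modprobe the selected driver. Requires root; not run here.
load_nic_driver() {
    local driver
    driver=$(driver_for_nics "$1")
    [ -n "$driver" ] || return 0
    sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 2>/dev/null || true
    sudo modprobe "$driver"
}
```

Unloading the Mellanox/Chelsio/Intel RDMA modules up front keeps a TCP-transport run from accidentally exercising an RDMA path left over from a previous job on the same node.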
00:01:32.360 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:01:32.618 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:01:32.618 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:01:32.618 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:01:32.618 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:01:32.619 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:01:32.619 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - -
00:01:32.619 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - -
00:01:32.619 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - -
00:01:32.619 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - -
00:01:32.619 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - -
00:01:32.619 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - -
00:01:32.619 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - -
00:01:32.619 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - -
00:01:32.619 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:01:32.619 + rm -f /tmp/spdk-ld-path
00:01:32.619 + source autorun-spdk.conf
00:01:32.619 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:32.619 ++ SPDK_TEST_NVMF=1
00:01:32.619 ++ SPDK_TEST_NVME_CLI=1
00:01:32.619 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:32.619 ++ SPDK_TEST_NVMF_NICS=e810
00:01:32.619 ++ SPDK_RUN_ASAN=1
00:01:32.619 ++ SPDK_RUN_UBSAN=1
00:01:32.619 ++ NET_TYPE=phy
00:01:32.619 ++ RUN_NIGHTLY=1
00:01:32.619 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:32.619 + [[ -n '' ]]
00:01:32.619 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:32.619 + for M in /var/spdk/build-*-manifest.txt
00:01:32.619 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:32.619 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:32.619 + for M in /var/spdk/build-*-manifest.txt
00:01:32.619 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:32.619 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:32.619 + for M in /var/spdk/build-*-manifest.txt
00:01:32.619 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:32.619 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:32.619 ++ uname
00:01:32.619 + [[ Linux == \L\i\n\u\x ]]
00:01:32.619 + sudo dmesg -T
00:01:32.619 + sudo dmesg --clear
00:01:32.619 + dmesg_pid=2742129
00:01:32.619 + [[ Fedora Linux == FreeBSD ]]
00:01:32.619 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:32.619 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:32.619 + sudo dmesg -Tw
00:01:32.619 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:32.619 + [[ -x /usr/src/fio-static/fio ]]
00:01:32.619 + export FIO_BIN=/usr/src/fio-static/fio
00:01:32.619 + FIO_BIN=/usr/src/fio-static/fio
00:01:32.619 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:32.619 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:32.619 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:32.619 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:32.619 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:32.619 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:32.619 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:32.619 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:32.619 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:32.619 02:21:41 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:01:32.619 02:21:41 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:32.619 02:21:41 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:32.619 02:21:41 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
00:01:32.619 02:21:41 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
00:01:32.619 02:21:41 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
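The manifest-copy loop above iterates the glob `/var/spdk/build-*-manifest.txt` and guards each iteration with `[[ -f ... ]]`: in bash, an unmatched glob is left as the literal pattern, so without the guard the loop would try to copy a file named `build-*-manifest.txt`. A parameterized sketch of the same guarded-glob pattern (paths and the function name are illustrative):

```shell
#!/usr/bin/env bash
# copy_manifests GLOB DEST
# Copy every regular file matching GLOB into DEST/. The -f guard
# makes an unmatched glob (left literal by bash) copy nothing.
copy_manifests() {
    local src_glob=$1 dest=$2 m
    # $src_glob is deliberately unquoted so pathname expansion
    # happens here, not at the call site
    for m in $src_glob; do
        [ -f "$m" ] && cp "$m" "$dest"/
    done
    return 0   # an unmatched glob is not an error
}
```

Jobs that prefer an empty loop over a literal pattern can use `shopt -s nullglob` instead; the explicit `-f` guard has the advantage of also skipping directories that happen to match.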
00:01:32.619 02:21:41 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810
00:01:32.619 02:21:41 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_RUN_ASAN=1
00:01:32.619 02:21:41 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1
00:01:32.619 02:21:41 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy
00:01:32.619 02:21:41 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=1
00:01:32.619 02:21:41 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:01:32.619 02:21:41 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:32.877 02:21:41 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:01:32.877 02:21:41 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:01:32.877 02:21:41 -- scripts/common.sh@15 -- $ shopt -s extglob
00:01:32.877 02:21:41 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:32.877 02:21:41 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:32.878 02:21:41 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:32.878 02:21:41 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:32.878 02:21:41 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:32.878 02:21:41 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:32.878 02:21:41 -- paths/export.sh@5 -- $ export PATH
00:01:32.878 02:21:41 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:32.878 02:21:41 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:01:32.878 02:21:41 -- common/autobuild_common.sh@486 -- $ date +%s
00:01:32.878 02:21:41 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1731806501.XXXXXX
00:01:32.878 02:21:41 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1731806501.UWflvM
00:01:32.878 02:21:41 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
00:01:32.878 02:21:41 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
00:01:32.878 02:21:41 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:01:32.878 02:21:41 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:01:32.878 02:21:41 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:01:32.878 02:21:41 -- common/autobuild_common.sh@502 -- $ get_config_params
00:01:32.878 02:21:41 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:01:32.878 02:21:41 -- common/autotest_common.sh@10 -- $ set +x
00:01:32.878 02:21:41 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk'
00:01:32.878 02:21:41 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
00:01:32.878 02:21:41 -- pm/common@17 -- $ local monitor
00:01:32.878 02:21:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:32.878 02:21:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:32.878 02:21:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:32.878 02:21:41 -- pm/common@21 -- $ date +%s
00:01:32.878 02:21:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:32.878 02:21:41 -- pm/common@21 -- $ date +%s
00:01:32.878 02:21:41 -- pm/common@25 -- $ sleep 1
00:01:32.878 02:21:41 -- pm/common@21 -- $ date +%s
00:01:32.878 02:21:41 -- pm/common@21 -- $ date +%s
00:01:32.878 02:21:41 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731806501
00:01:32.878 02:21:41 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731806501
00:01:32.878 02:21:41 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731806501
00:01:32.878 02:21:41 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731806501
00:01:32.878 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731806501_collect-cpu-load.pm.log
00:01:32.878 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731806501_collect-vmstat.pm.log
00:01:32.878 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731806501_collect-cpu-temp.pm.log
00:01:32.878 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731806501_collect-bmc-pm.bmc.pm.log
00:01:33.816 02:21:42 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:01:33.816 02:21:42 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:33.816 02:21:42 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:33.816 02:21:42 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:33.816 02:21:42 -- spdk/autobuild.sh@16 -- $ date -u
00:01:33.816 Sun Nov 17 01:21:42 AM UTC 2024
00:01:33.816 02:21:42 -- spdk/autobuild.sh@17 -- $ git describe --tags
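The resource-monitor startup above stamps a single `date +%s` epoch (1731806501) into both the scratch workspace name (`mktemp -dt spdk_<epoch>.XXXXXX`) and every per-monitor log file, so one build's artifacts group together under one timestamp. A sketch of that naming scheme only; the function name is illustrative and the monitor list is the one visible in the log:

```shell
#!/usr/bin/env bash
# make_monitor_paths EPOCH POWER_DIR
# Emit one log path per monitor, all sharing the same epoch stamp,
# matching the "Redirecting to ..." lines in the log.
make_monitor_paths() {
    local stamp=$1 power_dir=$2 m
    for m in collect-cpu-load collect-vmstat collect-cpu-temp; do
        echo "$power_dir/monitor.autobuild.sh.${stamp}_${m}.pm.log"
    done
}
```

Sharing one stamp across workspace and logs is what lets a later cleanup or archive step glob everything for this build with a single `monitor.autobuild.sh.<epoch>*` pattern.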
00:01:33.816 v25.01-pre-189-g83e8405e4
00:01:33.816 02:21:42 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:01:33.816 02:21:42 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:01:33.816 02:21:42 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:33.816 02:21:42 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:33.816 02:21:42 -- common/autotest_common.sh@10 -- $ set +x
00:01:33.816 ************************************
00:01:33.816 START TEST asan
00:01:33.816 ************************************
00:01:33.816 02:21:42 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:01:33.816 using asan
00:01:33.816
00:01:33.816 real 0m0.000s
00:01:33.816 user 0m0.000s
00:01:33.816 sys 0m0.000s
00:01:33.816 02:21:42 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:33.816 02:21:42 asan -- common/autotest_common.sh@10 -- $ set +x
00:01:33.816 ************************************
00:01:33.816 END TEST asan
00:01:33.816 ************************************
00:01:33.816 02:21:42 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:33.816 02:21:42 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:33.816 02:21:42 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:33.816 02:21:42 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:33.816 02:21:42 -- common/autotest_common.sh@10 -- $ set +x
00:01:33.816 ************************************
00:01:33.816 START TEST ubsan
00:01:33.816 ************************************
00:01:33.816 02:21:42 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:01:33.816 using ubsan
00:01:33.816
00:01:33.816 real 0m0.000s
00:01:33.816 user 0m0.000s
00:01:33.816 sys 0m0.000s
00:01:33.816 02:21:42 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:33.816 02:21:42 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:33.816 ************************************
00:01:33.816 END TEST ubsan
00:01:33.816 ************************************
00:01:33.816 02:21:42 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:33.816 02:21:42 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:33.816 02:21:42 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:33.816 02:21:42 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:33.816 02:21:42 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:33.816 02:21:42 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:33.816 02:21:42 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:33.816 02:21:42 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:33.816 02:21:42 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-shared
00:01:33.816 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:01:33.816 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:34.382 Using 'verbs' RDMA provider
00:01:44.918 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:54.891 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:54.891 Creating mk/config.mk...done.
00:01:54.891 Creating mk/cc.flags.mk...done.
00:01:54.891 Type 'make' to build.
00:01:54.891 02:22:02 -- spdk/autobuild.sh@70 -- $ run_test make make -j48
00:01:54.891 02:22:02 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:54.891 02:22:02 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:54.891 02:22:02 -- common/autotest_common.sh@10 -- $ set +x
00:01:54.891 ************************************
00:01:54.891 START TEST make
00:01:54.891 ************************************
00:01:54.891 02:22:02 make -- common/autotest_common.sh@1129 -- $ make -j48
00:01:54.891 make[1]: Nothing to be done for 'all'.
00:02:04.899 The Meson build system
00:02:04.899 Version: 1.5.0
00:02:04.899 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:02:04.899 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:02:04.899 Build type: native build
00:02:04.899 Program cat found: YES (/usr/bin/cat)
00:02:04.899 Project name: DPDK
00:02:04.899 Project version: 24.03.0
00:02:04.899 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:04.899 C linker for the host machine: cc ld.bfd 2.40-14
00:02:04.899 Host machine cpu family: x86_64
00:02:04.899 Host machine cpu: x86_64
00:02:04.899 Message: ## Building in Developer Mode ##
00:02:04.899 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:04.899 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:02:04.899 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:04.899 Program python3 found: YES (/usr/bin/python3)
00:02:04.899 Program cat found: YES (/usr/bin/cat)
00:02:04.899 Compiler for C supports arguments -march=native: YES
00:02:04.899 Checking for size of "void *" : 8
00:02:04.899 Checking for size of "void *" : 8 (cached)
00:02:04.899 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:02:04.899 Library m found: YES
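The `START TEST` / `END TEST` banners and `real/user/sys` timing above come from SPDK's `run_test` helper in `common/autotest_common.sh`. A minimal stand-in showing the shape of that wrapper (the function name `run_test_sketch`, banner width, and bash `time` keyword are illustrative, not SPDK's exact implementation):

```shell
#!/usr/bin/env bash
# run_test_sketch NAME CMD [ARGS...]
# Bracket a command with START/END banners and time it, mimicking
# the run_test output visible in the log. Timing goes to stderr
# (bash's `time` keyword); the banners and command output to stdout.
run_test_sketch() {
    local name=$1
    shift
    echo '************************************'
    echo "START TEST $name"
    echo '************************************'
    time "$@"
    local rc=$?   # `time` preserves the command's exit status
    echo '************************************'
    echo "END TEST $name"
    echo '************************************'
    return $rc
}
```

Propagating the wrapped command's exit status is the important detail: it lets the harness print its banners unconditionally while still failing the build when the test itself fails.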
00:02:04.899 Library numa found: YES
00:02:04.899 Has header "numaif.h" : YES
00:02:04.899 Library fdt found: NO
00:02:04.899 Library execinfo found: NO
00:02:04.899 Has header "execinfo.h" : YES
00:02:04.899 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:04.899 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:04.899 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:04.899 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:04.899 Run-time dependency openssl found: YES 3.1.1
00:02:04.899 Run-time dependency libpcap found: YES 1.10.4
00:02:04.899 Has header "pcap.h" with dependency libpcap: YES
00:02:04.899 Compiler for C supports arguments -Wcast-qual: YES
00:02:04.899 Compiler for C supports arguments -Wdeprecated: YES
00:02:04.899 Compiler for C supports arguments -Wformat: YES
00:02:04.899 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:04.899 Compiler for C supports arguments -Wformat-security: NO
00:02:04.899 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:04.899 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:04.899 Compiler for C supports arguments -Wnested-externs: YES
00:02:04.899 Compiler for C supports arguments -Wold-style-definition: YES
00:02:04.899 Compiler for C supports arguments -Wpointer-arith: YES
00:02:04.899 Compiler for C supports arguments -Wsign-compare: YES
00:02:04.899 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:04.899 Compiler for C supports arguments -Wundef: YES
00:02:04.899 Compiler for C supports arguments -Wwrite-strings: YES
00:02:04.899 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:04.899 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:04.899 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:04.899 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:04.899 Program objdump found: YES (/usr/bin/objdump)
00:02:04.899 Compiler for C supports arguments -mavx512f: YES
00:02:04.899 Checking if "AVX512 checking" compiles: YES
00:02:04.899 Fetching value of define "__SSE4_2__" : 1
00:02:04.899 Fetching value of define "__AES__" : 1
00:02:04.899 Fetching value of define "__AVX__" : 1
00:02:04.899 Fetching value of define "__AVX2__" : (undefined)
00:02:04.899 Fetching value of define "__AVX512BW__" : (undefined)
00:02:04.899 Fetching value of define "__AVX512CD__" : (undefined)
00:02:04.899 Fetching value of define "__AVX512DQ__" : (undefined)
00:02:04.899 Fetching value of define "__AVX512F__" : (undefined)
00:02:04.899 Fetching value of define "__AVX512VL__" : (undefined)
00:02:04.900 Fetching value of define "__PCLMUL__" : 1
00:02:04.900 Fetching value of define "__RDRND__" : 1
00:02:04.900 Fetching value of define "__RDSEED__" : (undefined)
00:02:04.900 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:02:04.900 Fetching value of define "__znver1__" : (undefined)
00:02:04.900 Fetching value of define "__znver2__" : (undefined)
00:02:04.900 Fetching value of define "__znver3__" : (undefined)
00:02:04.900 Fetching value of define "__znver4__" : (undefined)
00:02:04.900 Library asan found: YES
00:02:04.900 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:04.900 Message: lib/log: Defining dependency "log"
00:02:04.900 Message: lib/kvargs: Defining dependency "kvargs"
00:02:04.900 Message: lib/telemetry: Defining dependency "telemetry"
00:02:04.900 Library rt found: YES
00:02:04.900 Checking for function "getentropy" : NO
00:02:04.900 Message: lib/eal: Defining dependency "eal"
00:02:04.900 Message: lib/ring: Defining dependency "ring"
00:02:04.900 Message: lib/rcu: Defining dependency "rcu"
00:02:04.900 Message: lib/mempool: Defining dependency "mempool"
00:02:04.900 Message: lib/mbuf: Defining dependency "mbuf"
00:02:04.900 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:04.900 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:02:04.900 Compiler for C supports arguments -mpclmul: YES
00:02:04.900 Compiler for C supports arguments -maes: YES
00:02:04.900 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:04.900 Compiler for C supports arguments -mavx512bw: YES
00:02:04.900 Compiler for C supports arguments -mavx512dq: YES
00:02:04.900 Compiler for C supports arguments -mavx512vl: YES
00:02:04.900 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:04.900 Compiler for C supports arguments -mavx2: YES
00:02:04.900 Compiler for C supports arguments -mavx: YES
00:02:04.900 Message: lib/net: Defining dependency "net"
00:02:04.900 Message: lib/meter: Defining dependency "meter"
00:02:04.900 Message: lib/ethdev: Defining dependency "ethdev"
00:02:04.900 Message: lib/pci: Defining dependency "pci"
00:02:04.900 Message: lib/cmdline: Defining dependency "cmdline"
00:02:04.900 Message: lib/hash: Defining dependency "hash"
00:02:04.900 Message: lib/timer: Defining dependency "timer"
00:02:04.900 Message: lib/compressdev: Defining dependency "compressdev"
00:02:04.900 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:04.900 Message: lib/dmadev: Defining dependency "dmadev"
00:02:04.900 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:04.900 Message: lib/power: Defining dependency "power"
00:02:04.900 Message: lib/reorder: Defining dependency "reorder"
00:02:04.900 Message: lib/security: Defining dependency "security"
00:02:04.900 Has header "linux/userfaultfd.h" : YES
00:02:04.900 Has header "linux/vduse.h" : YES
00:02:04.900 Message: lib/vhost: Defining dependency "vhost"
00:02:04.900 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:04.900 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:04.900 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:04.900 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:04.900 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:04.900 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:04.900 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:04.900 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:04.900 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:04.900 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:04.900 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:04.900 Configuring doxy-api-html.conf using configuration 00:02:04.900 Configuring doxy-api-man.conf using configuration 00:02:04.900 Program mandb found: YES (/usr/bin/mandb) 00:02:04.900 Program sphinx-build found: NO 00:02:04.900 Configuring rte_build_config.h using configuration 00:02:04.900 Message: 00:02:04.900 ================= 00:02:04.900 Applications Enabled 00:02:04.900 ================= 00:02:04.900 00:02:04.900 apps: 00:02:04.900 00:02:04.900 00:02:04.900 Message: 00:02:04.900 ================= 00:02:04.900 Libraries Enabled 00:02:04.900 ================= 00:02:04.900 00:02:04.900 libs: 00:02:04.900 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:04.900 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:04.900 cryptodev, dmadev, power, reorder, security, vhost, 00:02:04.900 00:02:04.900 Message: 00:02:04.900 =============== 00:02:04.900 Drivers Enabled 00:02:04.900 =============== 00:02:04.900 00:02:04.900 common: 00:02:04.900 00:02:04.900 bus: 00:02:04.900 pci, vdev, 00:02:04.900 mempool: 00:02:04.900 ring, 00:02:04.900 dma: 00:02:04.900 00:02:04.900 net: 00:02:04.900 00:02:04.900 crypto: 00:02:04.900 00:02:04.900 compress: 00:02:04.900 00:02:04.900 vdpa: 00:02:04.900 00:02:04.900 00:02:04.900 Message: 00:02:04.900 ================= 00:02:04.900 Content Skipped 00:02:04.900 ================= 00:02:04.900 00:02:04.900 apps: 00:02:04.900 dumpcap: explicitly disabled via build config 00:02:04.900 graph: explicitly disabled via build 
config 00:02:04.900 pdump: explicitly disabled via build config 00:02:04.900 proc-info: explicitly disabled via build config 00:02:04.900 test-acl: explicitly disabled via build config 00:02:04.900 test-bbdev: explicitly disabled via build config 00:02:04.900 test-cmdline: explicitly disabled via build config 00:02:04.900 test-compress-perf: explicitly disabled via build config 00:02:04.900 test-crypto-perf: explicitly disabled via build config 00:02:04.900 test-dma-perf: explicitly disabled via build config 00:02:04.900 test-eventdev: explicitly disabled via build config 00:02:04.900 test-fib: explicitly disabled via build config 00:02:04.900 test-flow-perf: explicitly disabled via build config 00:02:04.900 test-gpudev: explicitly disabled via build config 00:02:04.900 test-mldev: explicitly disabled via build config 00:02:04.900 test-pipeline: explicitly disabled via build config 00:02:04.900 test-pmd: explicitly disabled via build config 00:02:04.900 test-regex: explicitly disabled via build config 00:02:04.900 test-sad: explicitly disabled via build config 00:02:04.900 test-security-perf: explicitly disabled via build config 00:02:04.900 00:02:04.900 libs: 00:02:04.900 argparse: explicitly disabled via build config 00:02:04.900 metrics: explicitly disabled via build config 00:02:04.900 acl: explicitly disabled via build config 00:02:04.900 bbdev: explicitly disabled via build config 00:02:04.900 bitratestats: explicitly disabled via build config 00:02:04.900 bpf: explicitly disabled via build config 00:02:04.900 cfgfile: explicitly disabled via build config 00:02:04.900 distributor: explicitly disabled via build config 00:02:04.900 efd: explicitly disabled via build config 00:02:04.900 eventdev: explicitly disabled via build config 00:02:04.900 dispatcher: explicitly disabled via build config 00:02:04.900 gpudev: explicitly disabled via build config 00:02:04.900 gro: explicitly disabled via build config 00:02:04.900 gso: explicitly disabled via build config 
00:02:04.900 ip_frag: explicitly disabled via build config 00:02:04.900 jobstats: explicitly disabled via build config 00:02:04.900 latencystats: explicitly disabled via build config 00:02:04.900 lpm: explicitly disabled via build config 00:02:04.900 member: explicitly disabled via build config 00:02:04.900 pcapng: explicitly disabled via build config 00:02:04.900 rawdev: explicitly disabled via build config 00:02:04.900 regexdev: explicitly disabled via build config 00:02:04.900 mldev: explicitly disabled via build config 00:02:04.900 rib: explicitly disabled via build config 00:02:04.900 sched: explicitly disabled via build config 00:02:04.900 stack: explicitly disabled via build config 00:02:04.900 ipsec: explicitly disabled via build config 00:02:04.900 pdcp: explicitly disabled via build config 00:02:04.900 fib: explicitly disabled via build config 00:02:04.900 port: explicitly disabled via build config 00:02:04.900 pdump: explicitly disabled via build config 00:02:04.900 table: explicitly disabled via build config 00:02:04.900 pipeline: explicitly disabled via build config 00:02:04.900 graph: explicitly disabled via build config 00:02:04.900 node: explicitly disabled via build config 00:02:04.900 00:02:04.900 drivers: 00:02:04.900 common/cpt: not in enabled drivers build config 00:02:04.900 common/dpaax: not in enabled drivers build config 00:02:04.900 common/iavf: not in enabled drivers build config 00:02:04.900 common/idpf: not in enabled drivers build config 00:02:04.900 common/ionic: not in enabled drivers build config 00:02:04.900 common/mvep: not in enabled drivers build config 00:02:04.900 common/octeontx: not in enabled drivers build config 00:02:04.900 bus/auxiliary: not in enabled drivers build config 00:02:04.900 bus/cdx: not in enabled drivers build config 00:02:04.900 bus/dpaa: not in enabled drivers build config 00:02:04.900 bus/fslmc: not in enabled drivers build config 00:02:04.900 bus/ifpga: not in enabled drivers build config 00:02:04.900 
bus/platform: not in enabled drivers build config 00:02:04.900 bus/uacce: not in enabled drivers build config 00:02:04.900 bus/vmbus: not in enabled drivers build config 00:02:04.900 common/cnxk: not in enabled drivers build config 00:02:04.900 common/mlx5: not in enabled drivers build config 00:02:04.900 common/nfp: not in enabled drivers build config 00:02:04.900 common/nitrox: not in enabled drivers build config 00:02:04.900 common/qat: not in enabled drivers build config 00:02:04.900 common/sfc_efx: not in enabled drivers build config 00:02:04.900 mempool/bucket: not in enabled drivers build config 00:02:04.900 mempool/cnxk: not in enabled drivers build config 00:02:04.900 mempool/dpaa: not in enabled drivers build config 00:02:04.900 mempool/dpaa2: not in enabled drivers build config 00:02:04.900 mempool/octeontx: not in enabled drivers build config 00:02:04.900 mempool/stack: not in enabled drivers build config 00:02:04.900 dma/cnxk: not in enabled drivers build config 00:02:04.901 dma/dpaa: not in enabled drivers build config 00:02:04.901 dma/dpaa2: not in enabled drivers build config 00:02:04.901 dma/hisilicon: not in enabled drivers build config 00:02:04.901 dma/idxd: not in enabled drivers build config 00:02:04.901 dma/ioat: not in enabled drivers build config 00:02:04.901 dma/skeleton: not in enabled drivers build config 00:02:04.901 net/af_packet: not in enabled drivers build config 00:02:04.901 net/af_xdp: not in enabled drivers build config 00:02:04.901 net/ark: not in enabled drivers build config 00:02:04.901 net/atlantic: not in enabled drivers build config 00:02:04.901 net/avp: not in enabled drivers build config 00:02:04.901 net/axgbe: not in enabled drivers build config 00:02:04.901 net/bnx2x: not in enabled drivers build config 00:02:04.901 net/bnxt: not in enabled drivers build config 00:02:04.901 net/bonding: not in enabled drivers build config 00:02:04.901 net/cnxk: not in enabled drivers build config 00:02:04.901 net/cpfl: not in enabled 
drivers build config 00:02:04.901 net/cxgbe: not in enabled drivers build config 00:02:04.901 net/dpaa: not in enabled drivers build config 00:02:04.901 net/dpaa2: not in enabled drivers build config 00:02:04.901 net/e1000: not in enabled drivers build config 00:02:04.901 net/ena: not in enabled drivers build config 00:02:04.901 net/enetc: not in enabled drivers build config 00:02:04.901 net/enetfec: not in enabled drivers build config 00:02:04.901 net/enic: not in enabled drivers build config 00:02:04.901 net/failsafe: not in enabled drivers build config 00:02:04.901 net/fm10k: not in enabled drivers build config 00:02:04.901 net/gve: not in enabled drivers build config 00:02:04.901 net/hinic: not in enabled drivers build config 00:02:04.901 net/hns3: not in enabled drivers build config 00:02:04.901 net/i40e: not in enabled drivers build config 00:02:04.901 net/iavf: not in enabled drivers build config 00:02:04.901 net/ice: not in enabled drivers build config 00:02:04.901 net/idpf: not in enabled drivers build config 00:02:04.901 net/igc: not in enabled drivers build config 00:02:04.901 net/ionic: not in enabled drivers build config 00:02:04.901 net/ipn3ke: not in enabled drivers build config 00:02:04.901 net/ixgbe: not in enabled drivers build config 00:02:04.901 net/mana: not in enabled drivers build config 00:02:04.901 net/memif: not in enabled drivers build config 00:02:04.901 net/mlx4: not in enabled drivers build config 00:02:04.901 net/mlx5: not in enabled drivers build config 00:02:04.901 net/mvneta: not in enabled drivers build config 00:02:04.901 net/mvpp2: not in enabled drivers build config 00:02:04.901 net/netvsc: not in enabled drivers build config 00:02:04.901 net/nfb: not in enabled drivers build config 00:02:04.901 net/nfp: not in enabled drivers build config 00:02:04.901 net/ngbe: not in enabled drivers build config 00:02:04.901 net/null: not in enabled drivers build config 00:02:04.901 net/octeontx: not in enabled drivers build config 
00:02:04.901 net/octeon_ep: not in enabled drivers build config 00:02:04.901 net/pcap: not in enabled drivers build config 00:02:04.901 net/pfe: not in enabled drivers build config 00:02:04.901 net/qede: not in enabled drivers build config 00:02:04.901 net/ring: not in enabled drivers build config 00:02:04.901 net/sfc: not in enabled drivers build config 00:02:04.901 net/softnic: not in enabled drivers build config 00:02:04.901 net/tap: not in enabled drivers build config 00:02:04.901 net/thunderx: not in enabled drivers build config 00:02:04.901 net/txgbe: not in enabled drivers build config 00:02:04.901 net/vdev_netvsc: not in enabled drivers build config 00:02:04.901 net/vhost: not in enabled drivers build config 00:02:04.901 net/virtio: not in enabled drivers build config 00:02:04.901 net/vmxnet3: not in enabled drivers build config 00:02:04.901 raw/*: missing internal dependency, "rawdev" 00:02:04.901 crypto/armv8: not in enabled drivers build config 00:02:04.901 crypto/bcmfs: not in enabled drivers build config 00:02:04.901 crypto/caam_jr: not in enabled drivers build config 00:02:04.901 crypto/ccp: not in enabled drivers build config 00:02:04.901 crypto/cnxk: not in enabled drivers build config 00:02:04.901 crypto/dpaa_sec: not in enabled drivers build config 00:02:04.901 crypto/dpaa2_sec: not in enabled drivers build config 00:02:04.901 crypto/ipsec_mb: not in enabled drivers build config 00:02:04.901 crypto/mlx5: not in enabled drivers build config 00:02:04.901 crypto/mvsam: not in enabled drivers build config 00:02:04.901 crypto/nitrox: not in enabled drivers build config 00:02:04.901 crypto/null: not in enabled drivers build config 00:02:04.901 crypto/octeontx: not in enabled drivers build config 00:02:04.901 crypto/openssl: not in enabled drivers build config 00:02:04.901 crypto/scheduler: not in enabled drivers build config 00:02:04.901 crypto/uadk: not in enabled drivers build config 00:02:04.901 crypto/virtio: not in enabled drivers build config 
00:02:04.901 compress/isal: not in enabled drivers build config 00:02:04.901 compress/mlx5: not in enabled drivers build config 00:02:04.901 compress/nitrox: not in enabled drivers build config 00:02:04.901 compress/octeontx: not in enabled drivers build config 00:02:04.901 compress/zlib: not in enabled drivers build config 00:02:04.901 regex/*: missing internal dependency, "regexdev" 00:02:04.901 ml/*: missing internal dependency, "mldev" 00:02:04.901 vdpa/ifc: not in enabled drivers build config 00:02:04.901 vdpa/mlx5: not in enabled drivers build config 00:02:04.901 vdpa/nfp: not in enabled drivers build config 00:02:04.901 vdpa/sfc: not in enabled drivers build config 00:02:04.901 event/*: missing internal dependency, "eventdev" 00:02:04.901 baseband/*: missing internal dependency, "bbdev" 00:02:04.901 gpu/*: missing internal dependency, "gpudev" 00:02:04.901 00:02:04.901 00:02:04.901 Build targets in project: 85 00:02:04.901 00:02:04.901 DPDK 24.03.0 00:02:04.901 00:02:04.901 User defined options 00:02:04.901 buildtype : debug 00:02:04.901 default_library : shared 00:02:04.901 libdir : lib 00:02:04.901 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:04.901 b_sanitize : address 00:02:04.901 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:04.901 c_link_args : 00:02:04.901 cpu_instruction_set: native 00:02:04.901 disable_apps : test-acl,graph,test-dma-perf,test-gpudev,test-crypto-perf,test,test-security-perf,test-mldev,proc-info,test-pmd,test-pipeline,test-eventdev,test-cmdline,test-fib,pdump,test-flow-perf,test-bbdev,test-regex,test-sad,dumpcap,test-compress-perf 00:02:04.901 disable_libs : acl,bitratestats,graph,bbdev,jobstats,ipsec,gso,table,rib,node,mldev,sched,ip_frag,cfgfile,port,pcapng,pdcp,argparse,stack,eventdev,regexdev,distributor,gro,efd,pipeline,bpf,dispatcher,lpm,metrics,latencystats,pdump,gpudev,member,fib,rawdev 00:02:04.901 enable_docs : false 00:02:04.901 
enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:04.901 enable_kmods : false 00:02:04.901 max_lcores : 128 00:02:04.901 tests : false 00:02:04.901 00:02:04.901 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:04.901 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:02:04.901 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:04.901 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:04.901 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:04.901 [4/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:04.901 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:04.901 [6/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:04.901 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:04.901 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:04.901 [9/268] Linking static target lib/librte_kvargs.a 00:02:04.901 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:04.901 [11/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:04.901 [12/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:04.901 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:04.901 [14/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:04.901 [15/268] Linking static target lib/librte_log.a 00:02:04.901 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:05.160 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.420 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:05.420 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 
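The "User defined options" summary above corresponds to a meson setup invocation along these lines. This is a hypothetical reconstruction from the logged option values (the option names are real DPDK/meson build options, but the exact CI command is not shown in the log):

```shell
# Sketch of the configure step implied by the "User defined options" block.
# The long disable_apps/disable_libs lists from the log are omitted here
# for brevity.
meson setup build-tmp \
    -Dbuildtype=debug \
    -Ddefault_library=shared \
    -Dlibdir=lib \
    -Db_sanitize=address \
    -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
    -Dcpu_instruction_set=native \
    -Denable_docs=false \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
    -Denable_kmods=false \
    -Dmax_lcores=128 \
    -Dtests=false
ninja -C build-tmp
```

Note how the configure-time choices explain the large "Content Skipped" section earlier: only the drivers named in enable_drivers are built, and everything in disable_libs/disable_apps is reported as "explicitly disabled via build config".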
00:02:05.420 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:05.420 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:05.420 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:05.420 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:05.420 [24/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:05.420 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:05.420 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:05.420 [27/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:05.420 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:05.420 [29/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:05.420 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:05.420 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:05.420 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:05.420 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:05.420 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:05.420 [35/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:05.420 [36/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:05.420 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:05.420 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:05.420 [39/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:05.420 [40/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:05.420 [41/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:05.420 
[42/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:05.420 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:05.420 [44/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:05.420 [45/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:05.420 [46/268] Linking static target lib/librte_telemetry.a 00:02:05.420 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:05.420 [48/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:05.420 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:05.420 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:05.681 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:05.681 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:05.681 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:05.681 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:05.681 [55/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:05.681 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:05.681 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:05.681 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:05.681 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:05.681 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:05.681 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:05.681 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:05.945 [63/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.945 [64/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:05.945 [65/268] Linking target lib/librte_log.so.24.1 00:02:05.945 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:06.206 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:06.206 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:06.206 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:06.206 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:06.206 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:06.472 [72/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:06.472 [73/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:06.472 [74/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:06.472 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:06.472 [76/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:06.472 [77/268] Linking static target lib/librte_pci.a 00:02:06.472 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:06.472 [79/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:06.472 [80/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:06.472 [81/268] Linking target lib/librte_kvargs.so.24.1 00:02:06.472 [82/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:06.472 [83/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:06.472 [84/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:06.472 [85/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:06.472 [86/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:06.472 [87/268] Linking static target lib/librte_meter.a 00:02:06.472 [88/268] Linking static target 
lib/net/libnet_crc_avx512_lib.a 00:02:06.472 [89/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:06.472 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:06.472 [91/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:06.472 [92/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:06.472 [93/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:06.472 [94/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:06.472 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:06.472 [96/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:06.472 [97/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:06.472 [98/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.472 [99/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:06.472 [100/268] Linking static target lib/librte_ring.a 00:02:06.734 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:06.734 [102/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:06.734 [103/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:06.734 [104/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:06.734 [105/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:06.734 [106/268] Linking target lib/librte_telemetry.so.24.1 00:02:06.734 [107/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:06.734 [108/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:06.734 [109/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:06.734 [110/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:06.734 [111/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:06.734 [112/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:06.734 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:06.994 [114/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:06.994 [115/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.994 [116/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:06.994 [117/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:06.994 [118/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:06.994 [119/268] Linking static target lib/librte_mempool.a 00:02:06.994 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:06.994 [121/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:06.994 [122/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.994 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:06.994 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:06.994 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:06.994 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:06.994 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:06.994 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:06.994 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:07.256 [130/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:07.256 [131/268] Linking static target lib/librte_rcu.a 00:02:07.256 [132/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:07.256 
[133/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.256 [134/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:07.519 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:07.519 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:07.519 [137/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:07.519 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:07.519 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:07.519 [140/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:07.519 [141/268] Linking static target lib/librte_cmdline.a 00:02:07.519 [142/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:07.519 [143/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:07.519 [144/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:07.519 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:07.519 [146/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:07.519 [147/268] Linking static target lib/librte_eal.a 00:02:07.519 [148/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:07.778 [149/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:07.778 [150/268] Linking static target lib/librte_timer.a 00:02:07.778 [151/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:07.778 [152/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:07.778 [153/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:07.778 [154/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:07.778 [155/268] Compiling 
C object lib/librte_power.a.p/power_rte_power.c.o 00:02:07.778 [156/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:07.778 [157/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.036 [158/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:08.037 [159/268] Linking static target lib/librte_dmadev.a 00:02:08.037 [160/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.037 [161/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.037 [162/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:08.037 [163/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:08.295 [164/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:08.295 [165/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:08.295 [166/268] Linking static target lib/librte_net.a 00:02:08.295 [167/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:08.295 [168/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:08.295 [169/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:08.295 [170/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:08.296 [171/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:08.296 [172/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:08.296 [173/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:08.554 [174/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:08.554 [175/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:08.554 [176/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.554 [177/268] Compiling C object 
lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:08.554 [178/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.554 [179/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:08.554 [180/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.554 [181/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:08.554 [182/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:08.554 [183/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:08.554 [184/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:08.554 [185/268] Linking static target lib/librte_power.a 00:02:08.554 [186/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:08.554 [187/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:08.812 [188/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:08.812 [189/268] Linking static target lib/librte_compressdev.a 00:02:08.812 [190/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:08.812 [191/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:08.813 [192/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:08.813 [193/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:08.813 [194/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:08.813 [195/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:08.813 [196/268] Linking static target drivers/librte_bus_vdev.a 00:02:08.813 [197/268] Linking static target lib/librte_hash.a 00:02:08.813 [198/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:08.813 [199/268] Compiling C object 
drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:08.813 [200/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:08.813 [201/268] Linking static target drivers/librte_bus_pci.a 00:02:09.071 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:09.071 [203/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:09.071 [204/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:09.071 [205/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:09.071 [206/268] Linking static target drivers/librte_mempool_ring.a 00:02:09.071 [207/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.071 [208/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:09.071 [209/268] Linking static target lib/librte_reorder.a 00:02:09.071 [210/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.330 [211/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.330 [212/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:09.330 [213/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.330 [214/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.330 [215/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.896 [216/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:09.896 [217/268] Linking static target lib/librte_security.a 00:02:10.463 [218/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.463 [219/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:11.028 [220/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:11.028 [221/268] Linking static target lib/librte_mbuf.a 00:02:11.286 [222/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:11.286 [223/268] Linking static target lib/librte_cryptodev.a 00:02:11.286 [224/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.853 [225/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:12.112 [226/268] Linking static target lib/librte_ethdev.a 00:02:12.370 [227/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.747 [228/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.747 [229/268] Linking target lib/librte_eal.so.24.1 00:02:13.747 [230/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:13.747 [231/268] Linking target lib/librte_pci.so.24.1 00:02:13.747 [232/268] Linking target lib/librte_ring.so.24.1 00:02:13.747 [233/268] Linking target lib/librte_meter.so.24.1 00:02:13.747 [234/268] Linking target lib/librte_timer.so.24.1 00:02:13.747 [235/268] Linking target lib/librte_dmadev.so.24.1 00:02:13.747 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:14.005 [237/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:14.005 [238/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:14.005 [239/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:14.005 [240/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:14.005 [241/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:14.005 [242/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:14.005 [243/268] Linking target 
lib/librte_rcu.so.24.1 00:02:14.005 [244/268] Linking target lib/librte_mempool.so.24.1 00:02:14.264 [245/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:14.264 [246/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:14.264 [247/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:14.264 [248/268] Linking target lib/librte_mbuf.so.24.1 00:02:14.264 [249/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:14.522 [250/268] Linking target lib/librte_reorder.so.24.1 00:02:14.522 [251/268] Linking target lib/librte_compressdev.so.24.1 00:02:14.522 [252/268] Linking target lib/librte_net.so.24.1 00:02:14.522 [253/268] Linking target lib/librte_cryptodev.so.24.1 00:02:14.522 [254/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:14.522 [255/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:14.522 [256/268] Linking target lib/librte_security.so.24.1 00:02:14.522 [257/268] Linking target lib/librte_hash.so.24.1 00:02:14.522 [258/268] Linking target lib/librte_cmdline.so.24.1 00:02:14.780 [259/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:15.347 [260/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:16.283 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.283 [262/268] Linking target lib/librte_ethdev.so.24.1 00:02:16.283 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:16.542 [264/268] Linking target lib/librte_power.so.24.1 00:02:43.081 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:43.081 [266/268] Linking static target lib/librte_vhost.a 00:02:43.081 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.081 
[268/268] Linking target lib/librte_vhost.so.24.1 00:02:43.081 INFO: autodetecting backend as ninja 00:02:43.081 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:02:43.081 CC lib/ut_mock/mock.o 00:02:43.081 CC lib/ut/ut.o 00:02:43.081 CC lib/log/log.o 00:02:43.081 CC lib/log/log_flags.o 00:02:43.081 CC lib/log/log_deprecated.o 00:02:43.081 LIB libspdk_ut_mock.a 00:02:43.081 LIB libspdk_ut.a 00:02:43.081 LIB libspdk_log.a 00:02:43.081 SO libspdk_ut_mock.so.6.0 00:02:43.081 SO libspdk_ut.so.2.0 00:02:43.081 SO libspdk_log.so.7.1 00:02:43.081 SYMLINK libspdk_ut.so 00:02:43.081 SYMLINK libspdk_ut_mock.so 00:02:43.081 SYMLINK libspdk_log.so 00:02:43.081 CC lib/ioat/ioat.o 00:02:43.081 CC lib/dma/dma.o 00:02:43.081 CXX lib/trace_parser/trace.o 00:02:43.081 CC lib/util/base64.o 00:02:43.081 CC lib/util/bit_array.o 00:02:43.081 CC lib/util/cpuset.o 00:02:43.081 CC lib/util/crc16.o 00:02:43.081 CC lib/util/crc32.o 00:02:43.081 CC lib/util/crc32c.o 00:02:43.081 CC lib/util/crc32_ieee.o 00:02:43.081 CC lib/util/crc64.o 00:02:43.081 CC lib/util/dif.o 00:02:43.081 CC lib/util/fd.o 00:02:43.081 CC lib/util/fd_group.o 00:02:43.081 CC lib/util/file.o 00:02:43.081 CC lib/util/hexlify.o 00:02:43.081 CC lib/util/math.o 00:02:43.081 CC lib/util/iov.o 00:02:43.081 CC lib/util/net.o 00:02:43.081 CC lib/util/pipe.o 00:02:43.081 CC lib/util/strerror_tls.o 00:02:43.081 CC lib/util/string.o 00:02:43.081 CC lib/util/uuid.o 00:02:43.081 CC lib/util/zipf.o 00:02:43.081 CC lib/util/xor.o 00:02:43.081 CC lib/util/md5.o 00:02:43.081 CC lib/vfio_user/host/vfio_user_pci.o 00:02:43.081 CC lib/vfio_user/host/vfio_user.o 00:02:43.081 LIB libspdk_dma.a 00:02:43.081 SO libspdk_dma.so.5.0 00:02:43.081 SYMLINK libspdk_dma.so 00:02:43.081 LIB libspdk_ioat.a 00:02:43.081 LIB libspdk_vfio_user.a 00:02:43.081 SO libspdk_ioat.so.7.0 00:02:43.081 SO libspdk_vfio_user.so.5.0 00:02:43.081 SYMLINK libspdk_ioat.so 
00:02:43.081 SYMLINK libspdk_vfio_user.so 00:02:43.081 LIB libspdk_util.a 00:02:43.081 SO libspdk_util.so.10.1 00:02:43.081 SYMLINK libspdk_util.so 00:02:43.081 CC lib/conf/conf.o 00:02:43.081 CC lib/env_dpdk/env.o 00:02:43.081 CC lib/vmd/vmd.o 00:02:43.081 CC lib/rdma_utils/rdma_utils.o 00:02:43.081 CC lib/json/json_parse.o 00:02:43.081 CC lib/env_dpdk/memory.o 00:02:43.081 CC lib/idxd/idxd.o 00:02:43.081 CC lib/env_dpdk/pci.o 00:02:43.081 CC lib/json/json_util.o 00:02:43.081 CC lib/vmd/led.o 00:02:43.081 CC lib/idxd/idxd_user.o 00:02:43.081 CC lib/json/json_write.o 00:02:43.081 CC lib/env_dpdk/init.o 00:02:43.081 CC lib/idxd/idxd_kernel.o 00:02:43.081 CC lib/env_dpdk/threads.o 00:02:43.081 CC lib/env_dpdk/pci_ioat.o 00:02:43.081 CC lib/env_dpdk/pci_virtio.o 00:02:43.081 CC lib/env_dpdk/pci_vmd.o 00:02:43.081 CC lib/env_dpdk/pci_idxd.o 00:02:43.081 CC lib/env_dpdk/pci_event.o 00:02:43.081 LIB libspdk_trace_parser.a 00:02:43.081 CC lib/env_dpdk/sigbus_handler.o 00:02:43.081 CC lib/env_dpdk/pci_dpdk.o 00:02:43.081 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:43.081 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:43.081 SO libspdk_trace_parser.so.6.0 00:02:43.081 SYMLINK libspdk_trace_parser.so 00:02:43.339 LIB libspdk_conf.a 00:02:43.339 SO libspdk_conf.so.6.0 00:02:43.339 LIB libspdk_rdma_utils.a 00:02:43.339 SYMLINK libspdk_conf.so 00:02:43.339 LIB libspdk_json.a 00:02:43.339 SO libspdk_rdma_utils.so.1.0 00:02:43.339 SO libspdk_json.so.6.0 00:02:43.339 SYMLINK libspdk_rdma_utils.so 00:02:43.339 SYMLINK libspdk_json.so 00:02:43.598 CC lib/rdma_provider/common.o 00:02:43.598 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:43.598 CC lib/jsonrpc/jsonrpc_server.o 00:02:43.598 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:43.598 CC lib/jsonrpc/jsonrpc_client.o 00:02:43.598 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:43.855 LIB libspdk_rdma_provider.a 00:02:43.855 LIB libspdk_idxd.a 00:02:43.855 SO libspdk_rdma_provider.so.7.0 00:02:43.855 SO libspdk_idxd.so.12.1 00:02:43.856 LIB 
libspdk_vmd.a 00:02:43.856 LIB libspdk_jsonrpc.a 00:02:43.856 SYMLINK libspdk_rdma_provider.so 00:02:43.856 SO libspdk_vmd.so.6.0 00:02:43.856 SYMLINK libspdk_idxd.so 00:02:43.856 SO libspdk_jsonrpc.so.6.0 00:02:43.856 SYMLINK libspdk_vmd.so 00:02:44.117 SYMLINK libspdk_jsonrpc.so 00:02:44.117 CC lib/rpc/rpc.o 00:02:44.426 LIB libspdk_rpc.a 00:02:44.426 SO libspdk_rpc.so.6.0 00:02:44.426 SYMLINK libspdk_rpc.so 00:02:44.729 CC lib/notify/notify.o 00:02:44.729 CC lib/trace/trace.o 00:02:44.729 CC lib/keyring/keyring.o 00:02:44.729 CC lib/notify/notify_rpc.o 00:02:44.729 CC lib/trace/trace_flags.o 00:02:44.729 CC lib/keyring/keyring_rpc.o 00:02:44.729 CC lib/trace/trace_rpc.o 00:02:44.729 LIB libspdk_notify.a 00:02:44.729 SO libspdk_notify.so.6.0 00:02:44.987 SYMLINK libspdk_notify.so 00:02:44.987 LIB libspdk_keyring.a 00:02:44.987 LIB libspdk_trace.a 00:02:44.987 SO libspdk_keyring.so.2.0 00:02:44.987 SO libspdk_trace.so.11.0 00:02:44.987 SYMLINK libspdk_keyring.so 00:02:44.987 SYMLINK libspdk_trace.so 00:02:45.245 CC lib/thread/thread.o 00:02:45.245 CC lib/thread/iobuf.o 00:02:45.245 CC lib/sock/sock.o 00:02:45.245 CC lib/sock/sock_rpc.o 00:02:45.810 LIB libspdk_sock.a 00:02:45.810 SO libspdk_sock.so.10.0 00:02:45.810 SYMLINK libspdk_sock.so 00:02:46.068 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:46.068 CC lib/nvme/nvme_ctrlr.o 00:02:46.068 CC lib/nvme/nvme_fabric.o 00:02:46.068 CC lib/nvme/nvme_ns_cmd.o 00:02:46.068 CC lib/nvme/nvme_ns.o 00:02:46.068 CC lib/nvme/nvme_pcie_common.o 00:02:46.068 CC lib/nvme/nvme_pcie.o 00:02:46.068 CC lib/nvme/nvme_qpair.o 00:02:46.068 CC lib/nvme/nvme.o 00:02:46.068 CC lib/nvme/nvme_quirks.o 00:02:46.068 CC lib/nvme/nvme_transport.o 00:02:46.068 CC lib/nvme/nvme_discovery.o 00:02:46.068 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:46.068 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:46.068 CC lib/nvme/nvme_tcp.o 00:02:46.068 CC lib/nvme/nvme_opal.o 00:02:46.068 CC lib/nvme/nvme_io_msg.o 00:02:46.068 CC lib/nvme/nvme_poll_group.o 00:02:46.068 CC 
lib/nvme/nvme_zns.o 00:02:46.068 CC lib/nvme/nvme_stubs.o 00:02:46.068 CC lib/nvme/nvme_auth.o 00:02:46.068 CC lib/nvme/nvme_cuse.o 00:02:46.068 CC lib/nvme/nvme_rdma.o 00:02:46.068 LIB libspdk_env_dpdk.a 00:02:46.068 SO libspdk_env_dpdk.so.15.1 00:02:46.327 SYMLINK libspdk_env_dpdk.so 00:02:47.262 LIB libspdk_thread.a 00:02:47.262 SO libspdk_thread.so.11.0 00:02:47.262 SYMLINK libspdk_thread.so 00:02:47.520 CC lib/virtio/virtio.o 00:02:47.520 CC lib/fsdev/fsdev.o 00:02:47.520 CC lib/accel/accel.o 00:02:47.520 CC lib/blob/blobstore.o 00:02:47.520 CC lib/init/json_config.o 00:02:47.520 CC lib/accel/accel_rpc.o 00:02:47.520 CC lib/fsdev/fsdev_io.o 00:02:47.520 CC lib/init/subsystem.o 00:02:47.520 CC lib/blob/request.o 00:02:47.520 CC lib/virtio/virtio_vhost_user.o 00:02:47.520 CC lib/fsdev/fsdev_rpc.o 00:02:47.520 CC lib/accel/accel_sw.o 00:02:47.520 CC lib/init/subsystem_rpc.o 00:02:47.520 CC lib/blob/zeroes.o 00:02:47.520 CC lib/virtio/virtio_vfio_user.o 00:02:47.520 CC lib/init/rpc.o 00:02:47.520 CC lib/virtio/virtio_pci.o 00:02:47.520 CC lib/blob/blob_bs_dev.o 00:02:47.778 LIB libspdk_init.a 00:02:47.778 SO libspdk_init.so.6.0 00:02:48.036 SYMLINK libspdk_init.so 00:02:48.036 LIB libspdk_virtio.a 00:02:48.036 SO libspdk_virtio.so.7.0 00:02:48.036 SYMLINK libspdk_virtio.so 00:02:48.036 CC lib/event/app.o 00:02:48.037 CC lib/event/reactor.o 00:02:48.037 CC lib/event/log_rpc.o 00:02:48.037 CC lib/event/app_rpc.o 00:02:48.037 CC lib/event/scheduler_static.o 00:02:48.294 LIB libspdk_fsdev.a 00:02:48.294 SO libspdk_fsdev.so.2.0 00:02:48.553 SYMLINK libspdk_fsdev.so 00:02:48.553 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:48.553 LIB libspdk_event.a 00:02:48.811 SO libspdk_event.so.14.0 00:02:48.811 SYMLINK libspdk_event.so 00:02:49.069 LIB libspdk_nvme.a 00:02:49.069 LIB libspdk_accel.a 00:02:49.069 SO libspdk_accel.so.16.0 00:02:49.069 SYMLINK libspdk_accel.so 00:02:49.069 SO libspdk_nvme.so.15.0 00:02:49.326 CC lib/bdev/bdev.o 00:02:49.326 CC lib/bdev/bdev_rpc.o 
00:02:49.326 CC lib/bdev/bdev_zone.o 00:02:49.326 CC lib/bdev/part.o 00:02:49.326 CC lib/bdev/scsi_nvme.o 00:02:49.326 SYMLINK libspdk_nvme.so 00:02:49.326 LIB libspdk_fuse_dispatcher.a 00:02:49.584 SO libspdk_fuse_dispatcher.so.1.0 00:02:49.584 SYMLINK libspdk_fuse_dispatcher.so 00:02:52.113 LIB libspdk_blob.a 00:02:52.113 SO libspdk_blob.so.11.0 00:02:52.113 SYMLINK libspdk_blob.so 00:02:52.113 CC lib/lvol/lvol.o 00:02:52.113 CC lib/blobfs/blobfs.o 00:02:52.113 CC lib/blobfs/tree.o 00:02:53.048 LIB libspdk_blobfs.a 00:02:53.048 LIB libspdk_bdev.a 00:02:53.048 SO libspdk_blobfs.so.10.0 00:02:53.048 SO libspdk_bdev.so.17.0 00:02:53.048 SYMLINK libspdk_blobfs.so 00:02:53.311 LIB libspdk_lvol.a 00:02:53.311 SYMLINK libspdk_bdev.so 00:02:53.311 SO libspdk_lvol.so.10.0 00:02:53.311 SYMLINK libspdk_lvol.so 00:02:53.311 CC lib/scsi/dev.o 00:02:53.311 CC lib/scsi/lun.o 00:02:53.311 CC lib/nbd/nbd.o 00:02:53.311 CC lib/nvmf/ctrlr.o 00:02:53.311 CC lib/ublk/ublk.o 00:02:53.311 CC lib/scsi/port.o 00:02:53.311 CC lib/ftl/ftl_core.o 00:02:53.311 CC lib/nvmf/ctrlr_discovery.o 00:02:53.311 CC lib/nbd/nbd_rpc.o 00:02:53.311 CC lib/ftl/ftl_init.o 00:02:53.311 CC lib/scsi/scsi.o 00:02:53.311 CC lib/ublk/ublk_rpc.o 00:02:53.311 CC lib/nvmf/ctrlr_bdev.o 00:02:53.311 CC lib/scsi/scsi_bdev.o 00:02:53.311 CC lib/ftl/ftl_layout.o 00:02:53.311 CC lib/nvmf/subsystem.o 00:02:53.311 CC lib/nvmf/nvmf.o 00:02:53.311 CC lib/ftl/ftl_debug.o 00:02:53.311 CC lib/scsi/scsi_pr.o 00:02:53.311 CC lib/scsi/scsi_rpc.o 00:02:53.311 CC lib/nvmf/nvmf_rpc.o 00:02:53.311 CC lib/ftl/ftl_io.o 00:02:53.311 CC lib/ftl/ftl_sb.o 00:02:53.311 CC lib/nvmf/transport.o 00:02:53.311 CC lib/scsi/task.o 00:02:53.311 CC lib/nvmf/tcp.o 00:02:53.311 CC lib/ftl/ftl_l2p.o 00:02:53.311 CC lib/ftl/ftl_l2p_flat.o 00:02:53.311 CC lib/nvmf/stubs.o 00:02:53.311 CC lib/ftl/ftl_nv_cache.o 00:02:53.311 CC lib/nvmf/mdns_server.o 00:02:53.311 CC lib/ftl/ftl_band.o 00:02:53.311 CC lib/nvmf/rdma.o 00:02:53.311 CC lib/ftl/ftl_band_ops.o 
00:02:53.311 CC lib/nvmf/auth.o 00:02:53.311 CC lib/ftl/ftl_writer.o 00:02:53.312 CC lib/ftl/ftl_rq.o 00:02:53.312 CC lib/ftl/ftl_reloc.o 00:02:53.312 CC lib/ftl/ftl_l2p_cache.o 00:02:53.312 CC lib/ftl/ftl_p2l.o 00:02:53.312 CC lib/ftl/ftl_p2l_log.o 00:02:53.312 CC lib/ftl/mngt/ftl_mngt.o 00:02:53.312 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:53.312 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:53.312 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:53.312 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:53.312 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:53.312 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:53.888 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:53.888 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:53.888 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:53.888 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:53.888 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:53.888 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:53.888 CC lib/ftl/utils/ftl_conf.o 00:02:53.888 CC lib/ftl/utils/ftl_md.o 00:02:53.888 CC lib/ftl/utils/ftl_mempool.o 00:02:53.888 CC lib/ftl/utils/ftl_bitmap.o 00:02:53.888 CC lib/ftl/utils/ftl_property.o 00:02:53.888 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:53.888 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:53.888 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:53.888 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:53.888 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:53.888 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:54.148 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:54.148 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:54.148 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:54.148 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:54.148 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:54.148 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:54.148 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:54.148 CC lib/ftl/base/ftl_base_dev.o 00:02:54.148 CC lib/ftl/base/ftl_base_bdev.o 00:02:54.148 CC lib/ftl/ftl_trace.o 00:02:54.407 LIB libspdk_nbd.a 00:02:54.407 SO libspdk_nbd.so.7.0 00:02:54.407 SYMLINK libspdk_nbd.so 00:02:54.666 LIB libspdk_scsi.a 00:02:54.666 SO libspdk_scsi.so.9.0 
00:02:54.666 SYMLINK libspdk_scsi.so 00:02:54.666 LIB libspdk_ublk.a 00:02:54.924 SO libspdk_ublk.so.3.0 00:02:54.924 CC lib/vhost/vhost.o 00:02:54.924 CC lib/iscsi/conn.o 00:02:54.924 CC lib/iscsi/init_grp.o 00:02:54.924 CC lib/vhost/vhost_rpc.o 00:02:54.924 CC lib/iscsi/iscsi.o 00:02:54.924 CC lib/vhost/vhost_scsi.o 00:02:54.924 CC lib/iscsi/param.o 00:02:54.924 CC lib/vhost/vhost_blk.o 00:02:54.924 CC lib/iscsi/portal_grp.o 00:02:54.924 CC lib/vhost/rte_vhost_user.o 00:02:54.924 CC lib/iscsi/tgt_node.o 00:02:54.924 CC lib/iscsi/iscsi_subsystem.o 00:02:54.924 CC lib/iscsi/iscsi_rpc.o 00:02:54.924 CC lib/iscsi/task.o 00:02:54.924 SYMLINK libspdk_ublk.so 00:02:55.181 LIB libspdk_ftl.a 00:02:55.439 SO libspdk_ftl.so.9.0 00:02:55.698 SYMLINK libspdk_ftl.so 00:02:56.264 LIB libspdk_vhost.a 00:02:56.265 SO libspdk_vhost.so.8.0 00:02:56.265 SYMLINK libspdk_vhost.so 00:02:56.831 LIB libspdk_iscsi.a 00:02:56.831 SO libspdk_iscsi.so.8.0 00:02:56.831 LIB libspdk_nvmf.a 00:02:56.831 SO libspdk_nvmf.so.20.0 00:02:56.831 SYMLINK libspdk_iscsi.so 00:02:57.090 SYMLINK libspdk_nvmf.so 00:02:57.349 CC module/env_dpdk/env_dpdk_rpc.o 00:02:57.349 CC module/keyring/file/keyring.o 00:02:57.349 CC module/keyring/file/keyring_rpc.o 00:02:57.349 CC module/blob/bdev/blob_bdev.o 00:02:57.349 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:57.349 CC module/accel/iaa/accel_iaa.o 00:02:57.349 CC module/sock/posix/posix.o 00:02:57.349 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:57.349 CC module/scheduler/gscheduler/gscheduler.o 00:02:57.349 CC module/accel/iaa/accel_iaa_rpc.o 00:02:57.349 CC module/accel/error/accel_error.o 00:02:57.349 CC module/accel/ioat/accel_ioat.o 00:02:57.349 CC module/accel/error/accel_error_rpc.o 00:02:57.349 CC module/accel/ioat/accel_ioat_rpc.o 00:02:57.349 CC module/keyring/linux/keyring.o 00:02:57.349 CC module/accel/dsa/accel_dsa.o 00:02:57.349 CC module/keyring/linux/keyring_rpc.o 00:02:57.349 CC module/accel/dsa/accel_dsa_rpc.o 00:02:57.349 CC 
module/fsdev/aio/fsdev_aio.o 00:02:57.349 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:57.349 CC module/fsdev/aio/linux_aio_mgr.o 00:02:57.608 LIB libspdk_env_dpdk_rpc.a 00:02:57.608 SO libspdk_env_dpdk_rpc.so.6.0 00:02:57.608 SYMLINK libspdk_env_dpdk_rpc.so 00:02:57.608 LIB libspdk_keyring_file.a 00:02:57.608 LIB libspdk_keyring_linux.a 00:02:57.608 LIB libspdk_scheduler_gscheduler.a 00:02:57.608 LIB libspdk_scheduler_dpdk_governor.a 00:02:57.608 SO libspdk_keyring_file.so.2.0 00:02:57.608 SO libspdk_keyring_linux.so.1.0 00:02:57.608 SO libspdk_scheduler_gscheduler.so.4.0 00:02:57.608 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:57.608 LIB libspdk_accel_ioat.a 00:02:57.608 LIB libspdk_scheduler_dynamic.a 00:02:57.866 SYMLINK libspdk_keyring_file.so 00:02:57.866 SYMLINK libspdk_scheduler_gscheduler.so 00:02:57.866 LIB libspdk_accel_iaa.a 00:02:57.866 SO libspdk_accel_ioat.so.6.0 00:02:57.866 SYMLINK libspdk_keyring_linux.so 00:02:57.866 LIB libspdk_accel_error.a 00:02:57.866 SO libspdk_scheduler_dynamic.so.4.0 00:02:57.866 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:57.866 SO libspdk_accel_iaa.so.3.0 00:02:57.866 SO libspdk_accel_error.so.2.0 00:02:57.866 SYMLINK libspdk_accel_ioat.so 00:02:57.866 SYMLINK libspdk_scheduler_dynamic.so 00:02:57.866 SYMLINK libspdk_accel_iaa.so 00:02:57.866 SYMLINK libspdk_accel_error.so 00:02:57.866 LIB libspdk_blob_bdev.a 00:02:57.866 LIB libspdk_accel_dsa.a 00:02:57.866 SO libspdk_blob_bdev.so.11.0 00:02:57.866 SO libspdk_accel_dsa.so.5.0 00:02:57.866 SYMLINK libspdk_blob_bdev.so 00:02:57.866 SYMLINK libspdk_accel_dsa.so 00:02:58.126 CC module/bdev/passthru/vbdev_passthru.o 00:02:58.126 CC module/bdev/error/vbdev_error.o 00:02:58.126 CC module/bdev/lvol/vbdev_lvol.o 00:02:58.126 CC module/blobfs/bdev/blobfs_bdev.o 00:02:58.126 CC module/bdev/raid/bdev_raid.o 00:02:58.126 CC module/bdev/delay/vbdev_delay.o 00:02:58.126 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:58.126 CC module/bdev/error/vbdev_error_rpc.o 00:02:58.126 CC 
module/bdev/null/bdev_null.o 00:02:58.126 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:58.126 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:58.126 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:58.126 CC module/bdev/gpt/gpt.o 00:02:58.126 CC module/bdev/malloc/bdev_malloc.o 00:02:58.126 CC module/bdev/aio/bdev_aio.o 00:02:58.126 CC module/bdev/null/bdev_null_rpc.o 00:02:58.126 CC module/bdev/raid/bdev_raid_rpc.o 00:02:58.126 CC module/bdev/iscsi/bdev_iscsi.o 00:02:58.126 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:58.126 CC module/bdev/gpt/vbdev_gpt.o 00:02:58.126 CC module/bdev/nvme/bdev_nvme.o 00:02:58.126 CC module/bdev/aio/bdev_aio_rpc.o 00:02:58.126 CC module/bdev/raid/bdev_raid_sb.o 00:02:58.126 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:58.126 CC module/bdev/raid/raid1.o 00:02:58.126 CC module/bdev/raid/raid0.o 00:02:58.126 CC module/bdev/nvme/nvme_rpc.o 00:02:58.126 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:58.126 CC module/bdev/raid/concat.o 00:02:58.126 CC module/bdev/nvme/bdev_mdns_client.o 00:02:58.126 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:58.126 CC module/bdev/nvme/vbdev_opal.o 00:02:58.126 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:58.126 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:58.126 CC module/bdev/ftl/bdev_ftl.o 00:02:58.126 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:58.126 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:58.126 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:58.126 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:58.126 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:58.126 CC module/bdev/split/vbdev_split.o 00:02:58.126 CC module/bdev/split/vbdev_split_rpc.o 00:02:58.694 LIB libspdk_blobfs_bdev.a 00:02:58.694 SO libspdk_blobfs_bdev.so.6.0 00:02:58.694 LIB libspdk_bdev_error.a 00:02:58.694 LIB libspdk_fsdev_aio.a 00:02:58.694 SO libspdk_bdev_error.so.6.0 00:02:58.694 LIB libspdk_bdev_split.a 00:02:58.694 SO libspdk_fsdev_aio.so.1.0 00:02:58.694 SYMLINK libspdk_blobfs_bdev.so 00:02:58.694 SO 
libspdk_bdev_split.so.6.0 00:02:58.694 LIB libspdk_bdev_gpt.a 00:02:58.694 LIB libspdk_bdev_ftl.a 00:02:58.694 LIB libspdk_bdev_iscsi.a 00:02:58.694 SYMLINK libspdk_bdev_error.so 00:02:58.694 SO libspdk_bdev_gpt.so.6.0 00:02:58.694 SYMLINK libspdk_fsdev_aio.so 00:02:58.694 SO libspdk_bdev_ftl.so.6.0 00:02:58.694 LIB libspdk_sock_posix.a 00:02:58.694 SO libspdk_bdev_iscsi.so.6.0 00:02:58.694 SYMLINK libspdk_bdev_split.so 00:02:58.694 SO libspdk_sock_posix.so.6.0 00:02:58.694 LIB libspdk_bdev_null.a 00:02:58.694 SYMLINK libspdk_bdev_gpt.so 00:02:58.694 SYMLINK libspdk_bdev_ftl.so 00:02:58.694 SYMLINK libspdk_bdev_iscsi.so 00:02:58.694 SO libspdk_bdev_null.so.6.0 00:02:58.694 LIB libspdk_bdev_passthru.a 00:02:58.952 LIB libspdk_bdev_aio.a 00:02:58.952 SO libspdk_bdev_passthru.so.6.0 00:02:58.952 SYMLINK libspdk_sock_posix.so 00:02:58.952 LIB libspdk_bdev_zone_block.a 00:02:58.952 SO libspdk_bdev_aio.so.6.0 00:02:58.952 SYMLINK libspdk_bdev_null.so 00:02:58.952 LIB libspdk_bdev_malloc.a 00:02:58.952 SO libspdk_bdev_zone_block.so.6.0 00:02:58.952 SO libspdk_bdev_malloc.so.6.0 00:02:58.952 SYMLINK libspdk_bdev_passthru.so 00:02:58.952 LIB libspdk_bdev_delay.a 00:02:58.952 SYMLINK libspdk_bdev_aio.so 00:02:58.952 SO libspdk_bdev_delay.so.6.0 00:02:58.952 SYMLINK libspdk_bdev_zone_block.so 00:02:58.952 SYMLINK libspdk_bdev_malloc.so 00:02:58.952 SYMLINK libspdk_bdev_delay.so 00:02:59.211 LIB libspdk_bdev_lvol.a 00:02:59.211 LIB libspdk_bdev_virtio.a 00:02:59.211 SO libspdk_bdev_lvol.so.6.0 00:02:59.211 SO libspdk_bdev_virtio.so.6.0 00:02:59.211 SYMLINK libspdk_bdev_lvol.so 00:02:59.211 SYMLINK libspdk_bdev_virtio.so 00:02:59.776 LIB libspdk_bdev_raid.a 00:02:59.776 SO libspdk_bdev_raid.so.6.0 00:02:59.776 SYMLINK libspdk_bdev_raid.so 00:03:02.306 LIB libspdk_bdev_nvme.a 00:03:02.306 SO libspdk_bdev_nvme.so.7.1 00:03:02.306 SYMLINK libspdk_bdev_nvme.so 00:03:02.565 CC module/event/subsystems/iobuf/iobuf.o 00:03:02.565 CC module/event/subsystems/sock/sock.o 00:03:02.565 CC 
module/event/subsystems/iobuf/iobuf_rpc.o 00:03:02.565 CC module/event/subsystems/vmd/vmd.o 00:03:02.565 CC module/event/subsystems/keyring/keyring.o 00:03:02.565 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:02.565 CC module/event/subsystems/scheduler/scheduler.o 00:03:02.565 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:02.565 CC module/event/subsystems/fsdev/fsdev.o 00:03:02.565 LIB libspdk_event_keyring.a 00:03:02.565 LIB libspdk_event_vhost_blk.a 00:03:02.565 LIB libspdk_event_fsdev.a 00:03:02.565 LIB libspdk_event_scheduler.a 00:03:02.565 LIB libspdk_event_sock.a 00:03:02.565 LIB libspdk_event_vmd.a 00:03:02.565 SO libspdk_event_keyring.so.1.0 00:03:02.565 SO libspdk_event_vhost_blk.so.3.0 00:03:02.565 SO libspdk_event_fsdev.so.1.0 00:03:02.565 SO libspdk_event_scheduler.so.4.0 00:03:02.565 LIB libspdk_event_iobuf.a 00:03:02.565 SO libspdk_event_sock.so.5.0 00:03:02.565 SO libspdk_event_vmd.so.6.0 00:03:02.565 SO libspdk_event_iobuf.so.3.0 00:03:02.565 SYMLINK libspdk_event_keyring.so 00:03:02.823 SYMLINK libspdk_event_fsdev.so 00:03:02.823 SYMLINK libspdk_event_vhost_blk.so 00:03:02.823 SYMLINK libspdk_event_scheduler.so 00:03:02.823 SYMLINK libspdk_event_sock.so 00:03:02.823 SYMLINK libspdk_event_vmd.so 00:03:02.823 SYMLINK libspdk_event_iobuf.so 00:03:02.824 CC module/event/subsystems/accel/accel.o 00:03:03.082 LIB libspdk_event_accel.a 00:03:03.082 SO libspdk_event_accel.so.6.0 00:03:03.082 SYMLINK libspdk_event_accel.so 00:03:03.340 CC module/event/subsystems/bdev/bdev.o 00:03:03.598 LIB libspdk_event_bdev.a 00:03:03.598 SO libspdk_event_bdev.so.6.0 00:03:03.598 SYMLINK libspdk_event_bdev.so 00:03:03.863 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:03.863 CC module/event/subsystems/ublk/ublk.o 00:03:03.863 CC module/event/subsystems/nbd/nbd.o 00:03:03.863 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:03.863 CC module/event/subsystems/scsi/scsi.o 00:03:03.863 LIB libspdk_event_nbd.a 00:03:03.863 LIB libspdk_event_ublk.a 00:03:03.863 SO 
libspdk_event_ublk.so.3.0 00:03:03.863 SO libspdk_event_nbd.so.6.0 00:03:03.863 LIB libspdk_event_scsi.a 00:03:03.863 SO libspdk_event_scsi.so.6.0 00:03:03.863 SYMLINK libspdk_event_ublk.so 00:03:03.863 SYMLINK libspdk_event_nbd.so 00:03:03.863 SYMLINK libspdk_event_scsi.so 00:03:04.127 LIB libspdk_event_nvmf.a 00:03:04.127 SO libspdk_event_nvmf.so.6.0 00:03:04.127 SYMLINK libspdk_event_nvmf.so 00:03:04.127 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:04.127 CC module/event/subsystems/iscsi/iscsi.o 00:03:04.385 LIB libspdk_event_vhost_scsi.a 00:03:04.385 SO libspdk_event_vhost_scsi.so.3.0 00:03:04.385 LIB libspdk_event_iscsi.a 00:03:04.385 SYMLINK libspdk_event_vhost_scsi.so 00:03:04.385 SO libspdk_event_iscsi.so.6.0 00:03:04.385 SYMLINK libspdk_event_iscsi.so 00:03:04.644 SO libspdk.so.6.0 00:03:04.644 SYMLINK libspdk.so 00:03:04.644 CC app/trace_record/trace_record.o 00:03:04.644 CXX app/trace/trace.o 00:03:04.644 CC app/spdk_nvme_identify/identify.o 00:03:04.644 CC app/spdk_lspci/spdk_lspci.o 00:03:04.644 CC app/spdk_top/spdk_top.o 00:03:04.644 CC app/spdk_nvme_perf/perf.o 00:03:04.644 CC app/spdk_nvme_discover/discovery_aer.o 00:03:04.644 CC test/rpc_client/rpc_client_test.o 00:03:04.644 TEST_HEADER include/spdk/accel.h 00:03:04.644 TEST_HEADER include/spdk/accel_module.h 00:03:04.644 TEST_HEADER include/spdk/assert.h 00:03:04.644 TEST_HEADER include/spdk/base64.h 00:03:04.644 TEST_HEADER include/spdk/barrier.h 00:03:04.644 TEST_HEADER include/spdk/bdev.h 00:03:04.644 TEST_HEADER include/spdk/bdev_module.h 00:03:04.644 TEST_HEADER include/spdk/bdev_zone.h 00:03:04.644 TEST_HEADER include/spdk/bit_array.h 00:03:04.644 TEST_HEADER include/spdk/bit_pool.h 00:03:04.644 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:04.644 TEST_HEADER include/spdk/blob_bdev.h 00:03:04.644 TEST_HEADER include/spdk/blobfs.h 00:03:04.644 TEST_HEADER include/spdk/blob.h 00:03:04.644 TEST_HEADER include/spdk/conf.h 00:03:04.644 TEST_HEADER include/spdk/config.h 00:03:04.644 
TEST_HEADER include/spdk/cpuset.h 00:03:04.644 TEST_HEADER include/spdk/crc16.h 00:03:04.644 TEST_HEADER include/spdk/crc32.h 00:03:04.644 TEST_HEADER include/spdk/crc64.h 00:03:04.644 TEST_HEADER include/spdk/dif.h 00:03:04.644 TEST_HEADER include/spdk/endian.h 00:03:04.644 TEST_HEADER include/spdk/dma.h 00:03:04.644 TEST_HEADER include/spdk/env_dpdk.h 00:03:04.644 TEST_HEADER include/spdk/env.h 00:03:04.644 TEST_HEADER include/spdk/event.h 00:03:04.644 TEST_HEADER include/spdk/fd_group.h 00:03:04.644 TEST_HEADER include/spdk/fd.h 00:03:04.644 TEST_HEADER include/spdk/file.h 00:03:04.644 TEST_HEADER include/spdk/fsdev.h 00:03:04.644 TEST_HEADER include/spdk/fsdev_module.h 00:03:04.644 TEST_HEADER include/spdk/ftl.h 00:03:04.644 TEST_HEADER include/spdk/gpt_spec.h 00:03:04.644 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:04.644 TEST_HEADER include/spdk/hexlify.h 00:03:04.644 TEST_HEADER include/spdk/histogram_data.h 00:03:04.644 TEST_HEADER include/spdk/idxd.h 00:03:04.644 TEST_HEADER include/spdk/idxd_spec.h 00:03:04.644 TEST_HEADER include/spdk/init.h 00:03:04.644 TEST_HEADER include/spdk/ioat.h 00:03:04.644 TEST_HEADER include/spdk/ioat_spec.h 00:03:04.644 TEST_HEADER include/spdk/iscsi_spec.h 00:03:04.644 TEST_HEADER include/spdk/json.h 00:03:04.644 TEST_HEADER include/spdk/jsonrpc.h 00:03:04.644 TEST_HEADER include/spdk/keyring.h 00:03:04.644 TEST_HEADER include/spdk/keyring_module.h 00:03:04.644 TEST_HEADER include/spdk/likely.h 00:03:04.644 TEST_HEADER include/spdk/log.h 00:03:04.644 TEST_HEADER include/spdk/lvol.h 00:03:04.644 TEST_HEADER include/spdk/md5.h 00:03:04.644 TEST_HEADER include/spdk/memory.h 00:03:04.644 TEST_HEADER include/spdk/mmio.h 00:03:04.644 TEST_HEADER include/spdk/nbd.h 00:03:04.644 TEST_HEADER include/spdk/net.h 00:03:04.644 TEST_HEADER include/spdk/notify.h 00:03:04.644 TEST_HEADER include/spdk/nvme.h 00:03:04.644 TEST_HEADER include/spdk/nvme_intel.h 00:03:04.644 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:04.644 TEST_HEADER 
include/spdk/nvme_ocssd_spec.h
00:03:04.644 TEST_HEADER include/spdk/nvme_spec.h
00:03:04.644 TEST_HEADER include/spdk/nvme_zns.h
00:03:04.644 TEST_HEADER include/spdk/nvmf_cmd.h
00:03:04.644 TEST_HEADER include/spdk/nvmf_fc_spec.h
00:03:04.644 TEST_HEADER include/spdk/nvmf.h
00:03:04.644 TEST_HEADER include/spdk/nvmf_spec.h
00:03:04.644 TEST_HEADER include/spdk/nvmf_transport.h
00:03:04.644 TEST_HEADER include/spdk/opal.h
00:03:04.644 TEST_HEADER include/spdk/opal_spec.h
00:03:04.644 CC examples/interrupt_tgt/interrupt_tgt.o
00:03:04.644 TEST_HEADER include/spdk/pci_ids.h
00:03:04.644 TEST_HEADER include/spdk/pipe.h
00:03:04.644 TEST_HEADER include/spdk/reduce.h
00:03:04.644 TEST_HEADER include/spdk/queue.h
00:03:04.644 TEST_HEADER include/spdk/rpc.h
00:03:04.644 TEST_HEADER include/spdk/scheduler.h
00:03:04.644 TEST_HEADER include/spdk/scsi.h
00:03:04.644 TEST_HEADER include/spdk/scsi_spec.h
00:03:04.644 CC app/spdk_dd/spdk_dd.o
00:03:04.644 TEST_HEADER include/spdk/sock.h
00:03:04.911 TEST_HEADER include/spdk/stdinc.h
00:03:04.911 TEST_HEADER include/spdk/thread.h
00:03:04.911 TEST_HEADER include/spdk/string.h
00:03:04.911 TEST_HEADER include/spdk/trace.h
00:03:04.912 TEST_HEADER include/spdk/trace_parser.h
00:03:04.912 TEST_HEADER include/spdk/tree.h
00:03:04.912 TEST_HEADER include/spdk/ublk.h
00:03:04.912 TEST_HEADER include/spdk/util.h
00:03:04.912 TEST_HEADER include/spdk/uuid.h
00:03:04.912 TEST_HEADER include/spdk/version.h
00:03:04.912 TEST_HEADER include/spdk/vfio_user_pci.h
00:03:04.912 TEST_HEADER include/spdk/vfio_user_spec.h
00:03:04.912 TEST_HEADER include/spdk/vhost.h
00:03:04.912 TEST_HEADER include/spdk/vmd.h
00:03:04.912 TEST_HEADER include/spdk/xor.h
00:03:04.912 TEST_HEADER include/spdk/zipf.h
00:03:04.912 CXX test/cpp_headers/accel.o
00:03:04.912 CXX test/cpp_headers/accel_module.o
00:03:04.912 CXX test/cpp_headers/assert.o
00:03:04.912 CXX test/cpp_headers/barrier.o
00:03:04.912 CXX test/cpp_headers/base64.o
00:03:04.912 CXX test/cpp_headers/bdev.o
00:03:04.912 CXX test/cpp_headers/bdev_module.o
00:03:04.912 CXX test/cpp_headers/bdev_zone.o
00:03:04.912 CXX test/cpp_headers/bit_array.o
00:03:04.912 CXX test/cpp_headers/bit_pool.o
00:03:04.912 CXX test/cpp_headers/blob_bdev.o
00:03:04.912 CXX test/cpp_headers/blobfs_bdev.o
00:03:04.912 CXX test/cpp_headers/blobfs.o
00:03:04.912 CC app/iscsi_tgt/iscsi_tgt.o
00:03:04.912 CXX test/cpp_headers/blob.o
00:03:04.912 CXX test/cpp_headers/conf.o
00:03:04.912 CXX test/cpp_headers/config.o
00:03:04.912 CXX test/cpp_headers/cpuset.o
00:03:04.912 CXX test/cpp_headers/crc16.o
00:03:04.912 CC app/nvmf_tgt/nvmf_main.o
00:03:04.912 CC app/spdk_tgt/spdk_tgt.o
00:03:04.912 CXX test/cpp_headers/crc32.o
00:03:04.912 CC examples/ioat/perf/perf.o
00:03:04.912 CC examples/util/zipf/zipf.o
00:03:04.912 CC examples/ioat/verify/verify.o
00:03:04.912 CC test/app/histogram_perf/histogram_perf.o
00:03:04.912 CC test/thread/poller_perf/poller_perf.o
00:03:04.912 CC test/app/stub/stub.o
00:03:04.912 CC test/app/jsoncat/jsoncat.o
00:03:04.912 CC app/fio/nvme/fio_plugin.o
00:03:04.912 CC test/env/pci/pci_ut.o
00:03:04.912 CC test/env/vtophys/vtophys.o
00:03:04.912 CC test/env/memory/memory_ut.o
00:03:04.912 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o
00:03:04.912 CC test/dma/test_dma/test_dma.o
00:03:04.912 CC app/fio/bdev/fio_plugin.o
00:03:04.912 CC test/app/bdev_svc/bdev_svc.o
00:03:04.912 LINK spdk_lspci
00:03:05.173 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o
00:03:05.173 CC test/env/mem_callbacks/mem_callbacks.o
00:03:05.173 LINK rpc_client_test
00:03:05.173 LINK interrupt_tgt
00:03:05.173 LINK histogram_perf
00:03:05.173 LINK spdk_nvme_discover
00:03:05.173 LINK zipf
00:03:05.173 LINK jsoncat
00:03:05.173 LINK poller_perf
00:03:05.173 LINK nvmf_tgt
00:03:05.173 LINK vtophys
00:03:05.173 CXX test/cpp_headers/crc64.o
00:03:05.173 CXX test/cpp_headers/dif.o
00:03:05.173 CXX test/cpp_headers/dma.o
00:03:05.173 CXX test/cpp_headers/endian.o
00:03:05.173 LINK env_dpdk_post_init
00:03:05.173 CXX test/cpp_headers/env_dpdk.o
00:03:05.173 CXX test/cpp_headers/env.o
00:03:05.173 CXX test/cpp_headers/event.o
00:03:05.173 CXX test/cpp_headers/fd_group.o
00:03:05.173 LINK iscsi_tgt
00:03:05.173 CXX test/cpp_headers/fd.o
00:03:05.436 CXX test/cpp_headers/file.o
00:03:05.436 CXX test/cpp_headers/fsdev.o
00:03:05.436 LINK stub
00:03:05.436 CXX test/cpp_headers/fsdev_module.o
00:03:05.436 CXX test/cpp_headers/ftl.o
00:03:05.436 LINK spdk_trace_record
00:03:05.436 CXX test/cpp_headers/fuse_dispatcher.o
00:03:05.436 CXX test/cpp_headers/gpt_spec.o
00:03:05.437 CXX test/cpp_headers/hexlify.o
00:03:05.437 LINK spdk_tgt
00:03:05.437 CXX test/cpp_headers/histogram_data.o
00:03:05.437 LINK bdev_svc
00:03:05.437 LINK ioat_perf
00:03:05.437 LINK verify
00:03:05.437 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o
00:03:05.437 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o
00:03:05.437 CXX test/cpp_headers/idxd.o
00:03:05.437 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o
00:03:05.437 CXX test/cpp_headers/idxd_spec.o
00:03:05.437 CXX test/cpp_headers/init.o
00:03:05.699 CXX test/cpp_headers/ioat.o
00:03:05.699 CXX test/cpp_headers/ioat_spec.o
00:03:05.699 CXX test/cpp_headers/iscsi_spec.o
00:03:05.699 LINK spdk_dd
00:03:05.699 CXX test/cpp_headers/json.o
00:03:05.699 CXX test/cpp_headers/jsonrpc.o
00:03:05.699 LINK spdk_trace
00:03:05.699 CXX test/cpp_headers/keyring.o
00:03:05.699 CXX test/cpp_headers/keyring_module.o
00:03:05.699 CXX test/cpp_headers/likely.o
00:03:05.699 CXX test/cpp_headers/log.o
00:03:05.699 CXX test/cpp_headers/lvol.o
00:03:05.699 CXX test/cpp_headers/md5.o
00:03:05.699 CXX test/cpp_headers/memory.o
00:03:05.699 CXX test/cpp_headers/mmio.o
00:03:05.699 CXX test/cpp_headers/nbd.o
00:03:05.699 CXX test/cpp_headers/net.o
00:03:05.699 CXX test/cpp_headers/notify.o
00:03:05.699 CXX test/cpp_headers/nvme.o
00:03:05.699 CXX test/cpp_headers/nvme_intel.o
00:03:05.699 CXX test/cpp_headers/nvme_ocssd.o
00:03:05.699 CXX test/cpp_headers/nvme_spec.o
00:03:05.699 CXX test/cpp_headers/nvme_ocssd_spec.o
00:03:05.699 CXX test/cpp_headers/nvme_zns.o
00:03:05.699 CXX test/cpp_headers/nvmf_cmd.o
00:03:05.960 LINK pci_ut
00:03:05.960 CC test/event/event_perf/event_perf.o
00:03:05.960 CC test/event/reactor_perf/reactor_perf.o
00:03:05.960 CC test/event/reactor/reactor.o
00:03:05.960 CXX test/cpp_headers/nvmf_fc_spec.o
00:03:05.960 CXX test/cpp_headers/nvmf.o
00:03:05.960 CXX test/cpp_headers/nvmf_spec.o
00:03:05.960 CC examples/sock/hello_world/hello_sock.o
00:03:05.960 CC examples/thread/thread/thread_ex.o
00:03:05.960 CC test/event/app_repeat/app_repeat.o
00:03:05.960 CXX test/cpp_headers/nvmf_transport.o
00:03:05.960 CC examples/vmd/lsvmd/lsvmd.o
00:03:05.960 CC examples/idxd/perf/perf.o
00:03:05.960 CC test/event/scheduler/scheduler.o
00:03:05.960 CXX test/cpp_headers/opal.o
00:03:05.960 CXX test/cpp_headers/opal_spec.o
00:03:05.960 CXX test/cpp_headers/pci_ids.o
00:03:06.222 LINK nvme_fuzz
00:03:06.222 CXX test/cpp_headers/pipe.o
00:03:06.222 CC examples/vmd/led/led.o
00:03:06.222 LINK test_dma
00:03:06.222 CXX test/cpp_headers/queue.o
00:03:06.222 CXX test/cpp_headers/reduce.o
00:03:06.222 LINK spdk_bdev
00:03:06.222 CXX test/cpp_headers/rpc.o
00:03:06.222 CXX test/cpp_headers/scheduler.o
00:03:06.222 CXX test/cpp_headers/scsi.o
00:03:06.222 CXX test/cpp_headers/scsi_spec.o
00:03:06.222 CXX test/cpp_headers/sock.o
00:03:06.222 CXX test/cpp_headers/stdinc.o
00:03:06.222 CXX test/cpp_headers/string.o
00:03:06.222 LINK event_perf
00:03:06.222 CXX test/cpp_headers/thread.o
00:03:06.222 LINK reactor_perf
00:03:06.222 CXX test/cpp_headers/trace.o
00:03:06.222 LINK reactor
00:03:06.222 CXX test/cpp_headers/trace_parser.o
00:03:06.222 CXX test/cpp_headers/tree.o
00:03:06.222 LINK spdk_nvme
00:03:06.222 CXX test/cpp_headers/ublk.o
00:03:06.222 CXX test/cpp_headers/util.o
00:03:06.222 CXX test/cpp_headers/uuid.o
00:03:06.222 LINK lsvmd
00:03:06.222 CXX test/cpp_headers/version.o
00:03:06.222 CXX test/cpp_headers/vfio_user_pci.o
00:03:06.222 LINK app_repeat
00:03:06.222 LINK mem_callbacks
00:03:06.223 CXX test/cpp_headers/vfio_user_spec.o
00:03:06.223 CC app/vhost/vhost.o
00:03:06.481 CXX test/cpp_headers/vhost.o
00:03:06.481 CXX test/cpp_headers/vmd.o
00:03:06.481 CXX test/cpp_headers/xor.o
00:03:06.481 CXX test/cpp_headers/zipf.o
00:03:06.481 LINK led
00:03:06.481 LINK thread
00:03:06.481 LINK vhost_fuzz
00:03:06.481 LINK hello_sock
00:03:06.481 LINK scheduler
00:03:06.740 LINK vhost
00:03:06.740 CC test/nvme/overhead/overhead.o
00:03:06.740 CC test/nvme/reserve/reserve.o
00:03:06.740 CC test/nvme/sgl/sgl.o
00:03:06.740 CC test/nvme/aer/aer.o
00:03:06.740 CC test/nvme/startup/startup.o
00:03:06.740 CC test/nvme/e2edp/nvme_dp.o
00:03:06.740 CC test/nvme/reset/reset.o
00:03:06.740 CC test/nvme/fdp/fdp.o
00:03:06.740 CC test/nvme/err_injection/err_injection.o
00:03:06.740 CC test/nvme/doorbell_aers/doorbell_aers.o
00:03:06.740 CC test/nvme/connect_stress/connect_stress.o
00:03:06.740 CC test/nvme/cuse/cuse.o
00:03:06.740 CC test/nvme/boot_partition/boot_partition.o
00:03:06.740 CC test/nvme/simple_copy/simple_copy.o
00:03:06.740 CC test/nvme/compliance/nvme_compliance.o
00:03:06.740 LINK spdk_nvme_identify
00:03:06.740 CC test/nvme/fused_ordering/fused_ordering.o
00:03:06.740 LINK spdk_nvme_perf
00:03:06.740 LINK idxd_perf
00:03:06.740 CC test/blobfs/mkfs/mkfs.o
00:03:06.740 CC test/accel/dif/dif.o
00:03:06.740 CC test/lvol/esnap/esnap.o
00:03:06.740 LINK spdk_top
00:03:06.999 LINK boot_partition
00:03:06.999 CC examples/nvme/cmb_copy/cmb_copy.o
00:03:06.999 CC examples/nvme/reconnect/reconnect.o
00:03:06.999 CC examples/nvme/hotplug/hotplug.o
00:03:06.999 CC examples/nvme/hello_world/hello_world.o
00:03:06.999 CC examples/nvme/arbitration/arbitration.o
00:03:06.999 CC examples/nvme/pmr_persistence/pmr_persistence.o
00:03:06.999 CC examples/nvme/nvme_manage/nvme_manage.o
00:03:06.999 CC examples/nvme/abort/abort.o
00:03:06.999 LINK err_injection
00:03:06.999 CC examples/accel/perf/accel_perf.o
00:03:06.999 LINK fused_ordering
00:03:06.999 CC examples/blob/cli/blobcli.o
00:03:06.999 CC examples/fsdev/hello_world/hello_fsdev.o
00:03:06.999 LINK startup
00:03:06.999 CC examples/blob/hello_world/hello_blob.o
00:03:06.999 LINK doorbell_aers
00:03:06.999 LINK mkfs
00:03:07.258 LINK connect_stress
00:03:07.258 LINK overhead
00:03:07.258 LINK reserve
00:03:07.258 LINK aer
00:03:07.258 LINK reset
00:03:07.258 LINK sgl
00:03:07.258 LINK simple_copy
00:03:07.258 LINK nvme_dp
00:03:07.258 LINK hello_world
00:03:07.258 LINK fdp
00:03:07.258 LINK hotplug
00:03:07.258 LINK memory_ut
00:03:07.258 LINK pmr_persistence
00:03:07.258 LINK cmb_copy
00:03:07.516 LINK nvme_compliance
00:03:07.516 LINK hello_blob
00:03:07.516 LINK hello_fsdev
00:03:07.516 LINK reconnect
00:03:07.516 LINK arbitration
00:03:07.516 LINK abort
00:03:07.774 LINK nvme_manage
00:03:07.774 LINK blobcli
00:03:07.774 LINK accel_perf
00:03:08.032 LINK dif
00:03:08.290 CC examples/bdev/hello_world/hello_bdev.o
00:03:08.290 CC examples/bdev/bdevperf/bdevperf.o
00:03:08.290 CC test/bdev/bdevio/bdevio.o
00:03:08.548 LINK iscsi_fuzz
00:03:08.548 LINK hello_bdev
00:03:08.548 LINK cuse
00:03:08.806 LINK bdevio
00:03:09.372 LINK bdevperf
00:03:09.630 CC examples/nvmf/nvmf/nvmf.o
00:03:09.888 LINK nvmf
00:03:14.074 LINK esnap
00:03:14.074
00:03:14.074 real 1m19.776s
00:03:14.074 user 13m7.959s
00:03:14.074 sys 2m34.838s
00:03:14.075 02:23:22 make -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:03:14.075 02:23:22 make -- common/autotest_common.sh@10 -- $ set +x
00:03:14.075 ************************************
00:03:14.075 END TEST make
00:03:14.075 ************************************
00:03:14.075 02:23:22 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources
00:03:14.075 02:23:22 -- pm/common@29 -- $ signal_monitor_resources TERM
00:03:14.075 02:23:22 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:03:14.075 02:23:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:14.075 02:23:22 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:03:14.075 02:23:22 -- pm/common@44 -- $ pid=2742171
00:03:14.075 02:23:22 -- pm/common@50 -- $ kill -TERM 2742171
00:03:14.075 02:23:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:14.075 02:23:22 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:03:14.075 02:23:22 -- pm/common@44 -- $ pid=2742172
00:03:14.075 02:23:22 -- pm/common@50 -- $ kill -TERM 2742172
00:03:14.075 02:23:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:14.075 02:23:22 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:03:14.075 02:23:22 -- pm/common@44 -- $ pid=2742174
00:03:14.075 02:23:22 -- pm/common@50 -- $ kill -TERM 2742174
00:03:14.075 02:23:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:14.075 02:23:22 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:03:14.075 02:23:22 -- pm/common@44 -- $ pid=2742203
00:03:14.075 02:23:22 -- pm/common@50 -- $ sudo -E kill -TERM 2742203
00:03:14.075 02:23:22 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 ))
00:03:14.075 02:23:22 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:03:14.340 02:23:22 -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:03:14.340 02:23:22 -- common/autotest_common.sh@1693 -- # lcov --version
00:03:14.340 02:23:22 -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:03:14.340 02:23:22 -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:03:14.340 02:23:22 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:03:14.340 02:23:22 -- scripts/common.sh@333 -- # local ver1 ver1_l
00:03:14.340 02:23:22 -- scripts/common.sh@334 -- # local ver2 ver2_l
00:03:14.340 02:23:22 -- scripts/common.sh@336 -- # IFS=.-:
00:03:14.340 02:23:22 -- scripts/common.sh@336 -- # read -ra ver1
00:03:14.340 02:23:22 -- scripts/common.sh@337 -- # IFS=.-:
00:03:14.340 02:23:22 -- scripts/common.sh@337 -- # read -ra ver2
00:03:14.340 02:23:22 -- scripts/common.sh@338 -- # local 'op=<'
00:03:14.340 02:23:22 -- scripts/common.sh@340 -- # ver1_l=2
00:03:14.340 02:23:22 -- scripts/common.sh@341 -- # ver2_l=1
00:03:14.340 02:23:22 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:03:14.340 02:23:22 -- scripts/common.sh@344 -- # case "$op" in
00:03:14.340 02:23:22 -- scripts/common.sh@345 -- # : 1
00:03:14.340 02:23:22 -- scripts/common.sh@364 -- # (( v = 0 ))
00:03:14.340 02:23:22 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:14.340 02:23:22 -- scripts/common.sh@365 -- # decimal 1
00:03:14.340 02:23:22 -- scripts/common.sh@353 -- # local d=1
00:03:14.340 02:23:22 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:03:14.340 02:23:22 -- scripts/common.sh@355 -- # echo 1
00:03:14.340 02:23:22 -- scripts/common.sh@365 -- # ver1[v]=1
00:03:14.340 02:23:22 -- scripts/common.sh@366 -- # decimal 2
00:03:14.340 02:23:22 -- scripts/common.sh@353 -- # local d=2
00:03:14.340 02:23:22 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:03:14.340 02:23:22 -- scripts/common.sh@355 -- # echo 2
00:03:14.340 02:23:22 -- scripts/common.sh@366 -- # ver2[v]=2
00:03:14.340 02:23:22 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:03:14.340 02:23:22 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:03:14.340 02:23:22 -- scripts/common.sh@368 -- # return 0
00:03:14.340 02:23:22 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:03:14.340 02:23:22 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:03:14.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:14.340 --rc genhtml_branch_coverage=1
00:03:14.340 --rc genhtml_function_coverage=1
00:03:14.340 --rc genhtml_legend=1
00:03:14.340 --rc geninfo_all_blocks=1
00:03:14.340 --rc geninfo_unexecuted_blocks=1
00:03:14.340
00:03:14.340 '
00:03:14.340 02:23:22 -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:03:14.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:14.340 --rc genhtml_branch_coverage=1
00:03:14.340 --rc genhtml_function_coverage=1
00:03:14.340 --rc genhtml_legend=1
00:03:14.340 --rc geninfo_all_blocks=1
00:03:14.340 --rc geninfo_unexecuted_blocks=1
00:03:14.340
00:03:14.340 '
00:03:14.340 02:23:22 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:03:14.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:14.340 --rc genhtml_branch_coverage=1
00:03:14.340 --rc genhtml_function_coverage=1
00:03:14.340 --rc genhtml_legend=1
00:03:14.340 --rc geninfo_all_blocks=1
00:03:14.340 --rc geninfo_unexecuted_blocks=1
00:03:14.340
00:03:14.340 '
00:03:14.340 02:23:22 -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:03:14.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:14.340 --rc genhtml_branch_coverage=1
00:03:14.340 --rc genhtml_function_coverage=1
00:03:14.340 --rc genhtml_legend=1
00:03:14.340 --rc geninfo_all_blocks=1
00:03:14.340 --rc geninfo_unexecuted_blocks=1
00:03:14.340
00:03:14.340 '
00:03:14.340 02:23:22 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:03:14.340 02:23:22 -- nvmf/common.sh@7 -- # uname -s
00:03:14.340 02:23:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:03:14.340 02:23:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:03:14.340 02:23:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:03:14.340 02:23:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:03:14.340 02:23:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:03:14.340 02:23:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:03:14.340 02:23:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:03:14.340 02:23:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:03:14.340 02:23:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:03:14.340 02:23:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:03:14.340 02:23:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:03:14.340 02:23:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:03:14.340 02:23:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:03:14.340 02:23:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:03:14.340 02:23:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:03:14.340 02:23:22 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:03:14.340 02:23:22 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:03:14.340 02:23:22 -- scripts/common.sh@15 -- # shopt -s extglob
00:03:14.340 02:23:22 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:03:14.340 02:23:22 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:03:14.340 02:23:22 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:03:14.340 02:23:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:14.341 02:23:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:14.341 02:23:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:14.341 02:23:22 -- paths/export.sh@5 -- # export PATH
00:03:14.341 02:23:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:14.341 02:23:22 -- nvmf/common.sh@51 -- # : 0
00:03:14.341 02:23:22 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:03:14.341 02:23:22 -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:03:14.341 02:23:22 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:03:14.341 02:23:22 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:03:14.341 02:23:22 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:03:14.341 02:23:22 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:03:14.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:03:14.341 02:23:22 -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:03:14.341 02:23:22 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:03:14.341 02:23:22 -- nvmf/common.sh@55 -- # have_pci_nics=0
00:03:14.341 02:23:22 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']'
00:03:14.341 02:23:22 -- spdk/autotest.sh@32 -- # uname -s
00:03:14.341 02:23:22 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']'
00:03:14.341 02:23:22 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h'
00:03:14.341 02:23:22 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps
00:03:14.341 02:23:22 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t'
00:03:14.341 02:23:22 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps
00:03:14.341 02:23:22 -- spdk/autotest.sh@44 -- # modprobe nbd
00:03:14.341 02:23:22 -- spdk/autotest.sh@46 -- # type -P udevadm
00:03:14.341 02:23:22 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm
00:03:14.341 02:23:22 -- spdk/autotest.sh@48 -- # udevadm_pid=2802200
00:03:14.341 02:23:22 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property
00:03:14.341 02:23:22 -- spdk/autotest.sh@53 -- # start_monitor_resources
00:03:14.341 02:23:22 -- pm/common@17 -- # local monitor
00:03:14.341 02:23:22 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:03:14.341 02:23:22 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:03:14.341 02:23:22 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:03:14.341 02:23:22 -- pm/common@21 -- # date +%s
00:03:14.341 02:23:22 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:03:14.341 02:23:22 -- pm/common@21 -- # date +%s
00:03:14.341 02:23:22 -- pm/common@25 -- # sleep 1
00:03:14.341 02:23:22 -- pm/common@21 -- # date +%s
00:03:14.341 02:23:22 -- pm/common@21 -- # date +%s
00:03:14.341 02:23:22 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731806602
00:03:14.341 02:23:22 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731806602
00:03:14.341 02:23:22 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731806602
00:03:14.341 02:23:22 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731806602
00:03:14.341 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731806602_collect-vmstat.pm.log
00:03:14.341 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731806602_collect-cpu-temp.pm.log
00:03:14.341 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731806602_collect-cpu-load.pm.log
00:03:14.341 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731806602_collect-bmc-pm.bmc.pm.log
00:03:15.364 02:23:23 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT
00:03:15.364 02:23:23 -- spdk/autotest.sh@57 -- # timing_enter autotest
00:03:15.365 02:23:23 -- common/autotest_common.sh@726 -- # xtrace_disable
00:03:15.365 02:23:23 -- common/autotest_common.sh@10 -- # set +x
00:03:15.365 02:23:23 -- spdk/autotest.sh@59 -- # create_test_list
00:03:15.365 02:23:23 -- common/autotest_common.sh@752 -- # xtrace_disable
00:03:15.365 02:23:23 -- common/autotest_common.sh@10 -- # set +x
00:03:15.365 02:23:23 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh
00:03:15.365 02:23:23 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:03:15.365 02:23:23 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:03:15.365 02:23:23 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:03:15.365 02:23:23 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:03:15.365 02:23:23 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod
00:03:15.365 02:23:23 -- common/autotest_common.sh@1457 -- # uname
00:03:15.365 02:23:23 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']'
00:03:15.365 02:23:23 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf
00:03:15.365 02:23:23 -- common/autotest_common.sh@1477 -- # uname
00:03:15.365 02:23:23 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]]
00:03:15.365 02:23:23 -- spdk/autotest.sh@68 -- # [[ y == y ]]
00:03:15.365 02:23:23 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version
00:03:15.365 lcov: LCOV version 1.15
00:03:15.365 02:23:23 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info
00:03:47.429 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found
00:03:47.429 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno
00:03:51.612 02:23:59 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup
00:03:51.612 02:23:59 -- common/autotest_common.sh@726 -- # xtrace_disable
00:03:51.612 02:23:59 -- common/autotest_common.sh@10 -- # set +x
00:03:51.612 02:23:59 -- spdk/autotest.sh@78 -- # rm -f
00:03:51.612 02:23:59 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:52.986 0000:88:00.0 (8086 0a54): Already using the nvme driver
00:03:52.986 0000:00:04.7 (8086 0e27): Already using the ioatdma driver
00:03:52.986 0000:00:04.6 (8086 0e26): Already using the ioatdma driver
00:03:52.986 0000:00:04.5 (8086 0e25): Already using the ioatdma driver
00:03:52.986 0000:00:04.4 (8086 0e24): Already using the ioatdma driver
00:03:52.986 0000:00:04.3 (8086 0e23): Already using the ioatdma driver
00:03:52.986 0000:00:04.2 (8086 0e22): Already using the ioatdma driver
00:03:52.986 0000:00:04.1 (8086 0e21): Already using the ioatdma driver
00:03:52.986 0000:00:04.0 (8086 0e20): Already using the ioatdma driver
00:03:52.986 0000:80:04.7 (8086 0e27): Already using the ioatdma driver
00:03:52.986 0000:80:04.6 (8086 0e26): Already using the ioatdma driver
00:03:52.986 0000:80:04.5 (8086 0e25): Already using the ioatdma driver
00:03:52.986 0000:80:04.4 (8086 0e24): Already using the ioatdma driver
00:03:52.986 0000:80:04.3 (8086 0e23): Already using the ioatdma driver
00:03:52.986 0000:80:04.2 (8086 0e22): Already using the ioatdma driver
00:03:52.986 0000:80:04.1 (8086 0e21): Already using the ioatdma driver
00:03:52.986 0000:80:04.0 (8086 0e20): Already using the ioatdma driver
00:03:52.987 02:24:01 -- spdk/autotest.sh@83 -- # get_zoned_devs
00:03:52.987 02:24:01 -- common/autotest_common.sh@1657 -- # zoned_devs=()
00:03:52.987 02:24:01 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs
00:03:52.987 02:24:01 -- common/autotest_common.sh@1658 -- # local nvme bdf
00:03:52.987 02:24:01 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme*
00:03:52.987 02:24:01 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1
00:03:52.987 02:24:01 -- common/autotest_common.sh@1650 -- # local device=nvme0n1
00:03:52.987 02:24:01 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:03:52.987 02:24:01 -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:03:52.987 02:24:01 -- spdk/autotest.sh@85 -- # (( 0 > 0 ))
00:03:52.987 02:24:01 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:03:52.987 02:24:01 -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:03:52.987 02:24:01 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1
00:03:52.987 02:24:01 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt
00:03:52.987 02:24:01 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:03:52.987 No valid GPT data, bailing
00:03:52.987 02:24:01 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:03:52.987 02:24:01 -- scripts/common.sh@394 -- # pt=
00:03:52.987 02:24:01 -- scripts/common.sh@395 -- # return 1
00:03:52.987 02:24:01 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:03:52.987 1+0 records in
00:03:52.987 1+0 records out
00:03:52.987 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00415235 s, 253 MB/s
00:03:52.987 02:24:01 -- spdk/autotest.sh@105 -- # sync
00:03:52.987 02:24:01 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes
00:03:52.987 02:24:01 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:03:52.987 02:24:01 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:03:55.518 02:24:03 -- spdk/autotest.sh@111 -- # uname -s
00:03:55.518 02:24:03 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]]
00:03:55.518 02:24:03 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]]
00:03:55.518 02:24:03 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:03:56.455 Hugepages
00:03:56.455 node hugesize free / total
00:03:56.455 node0 1048576kB 0 / 0
00:03:56.455 node0 2048kB 0 / 0
00:03:56.455 node1 1048576kB 0 / 0
00:03:56.455 node1 2048kB 0 / 0
00:03:56.455
00:03:56.455 Type BDF Vendor Device NUMA Driver Device Block devices
00:03:56.455 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:03:56.455 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:03:56.455 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:03:56.455 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:03:56.455 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:03:56.455 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:03:56.455 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:03:56.455 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:03:56.455 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - -
00:03:56.455 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - -
00:03:56.455 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - -
00:03:56.455 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - -
00:03:56.455 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - -
00:03:56.455 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - -
00:03:56.455 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - -
00:03:56.455 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - -
00:03:56.455 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:03:56.714 02:24:04 -- spdk/autotest.sh@117 -- # uname -s
00:03:56.714 02:24:04 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]]
00:03:56.714 02:24:04 -- spdk/autotest.sh@119 -- # nvme_namespace_revert
00:03:56.714 02:24:04 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:57.650 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:03:57.908 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:03:57.908 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:03:57.908 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:03:57.908 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:03:57.908 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:03:57.908 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:03:57.908 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:03:57.908 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:03:57.908 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:03:57.908 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:03:57.908 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:03:57.908 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:03:57.908 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
00:03:57.908 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:03:57.908 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:03:58.841 0000:88:00.0 (8086 0a54): nvme -> vfio-pci
00:03:59.099 02:24:07 -- common/autotest_common.sh@1517 -- # sleep 1
00:04:00.032 02:24:08 -- common/autotest_common.sh@1518 -- # bdfs=()
00:04:00.032 02:24:08 -- common/autotest_common.sh@1518 -- # local bdfs
00:04:00.032 02:24:08 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs))
00:04:00.032 02:24:08 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs
00:04:00.032 02:24:08 -- common/autotest_common.sh@1498 -- # bdfs=()
00:04:00.032 02:24:08 -- common/autotest_common.sh@1498 -- # local bdfs
00:04:00.032 02:24:08 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:04:00.032 02:24:08 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:04:00.033 02:24:08 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:04:00.033 02:24:08 -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:04:00.033 02:24:08 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0
00:04:00.033 02:24:08 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:04:00.964 Waiting for block devices as requested
00:04:01.222 0000:88:00.0 (8086 0a54): vfio-pci -> nvme
00:04:01.222 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma
00:04:01.222 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma
00:04:01.481 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma
00:04:01.481 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma
00:04:01.481 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma
00:04:01.740 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma
00:04:01.740 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma
00:04:01.740 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma
00:04:01.740 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma
00:04:01.997 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma
00:04:01.997 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma
00:04:01.997 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma
00:04:01.997 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma
00:04:02.256 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma
00:04:02.256 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma
00:04:02.256 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma
00:04:02.514 02:24:10 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}"
00:04:02.514 02:24:10 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0
00:04:02.514 02:24:10 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0
00:04:02.514 02:24:10 -- common/autotest_common.sh@1487 -- # grep 0000:88:00.0/nvme/nvme
00:04:02.514 02:24:10 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0
00:04:02.514 02:24:10 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]]
00:04:02.514 02:24:10 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0
00:04:02.514 02:24:10 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0
00:04:02.514 02:24:10 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0
00:04:02.514 02:24:10 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]]
00:04:02.514 02:24:10 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0
00:04:02.514 02:24:10 -- common/autotest_common.sh@1531 -- # grep oacs
00:04:02.514 02:24:10 -- common/autotest_common.sh@1531 -- # cut -d: -f2
00:04:02.514 02:24:10 -- common/autotest_common.sh@1531 -- # oacs=' 0xf'
00:04:02.514 02:24:10 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8
00:04:02.514 02:24:10 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]]
00:04:02.514 02:24:10 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0
00:04:02.514 02:24:10 -- common/autotest_common.sh@1540 -- # grep unvmcap
00:04:02.514 02:24:10 -- common/autotest_common.sh@1540 -- # cut -d: -f2
00:04:02.514 02:24:10 -- common/autotest_common.sh@1540 -- # unvmcap=' 0'
00:04:02.514 02:24:10 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]]
00:04:02.514 02:24:10 -- common/autotest_common.sh@1543 -- # continue
00:04:02.514 02:24:10 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup
00:04:02.514 02:24:10 -- common/autotest_common.sh@732 -- # xtrace_disable
00:04:02.514 02:24:10 -- common/autotest_common.sh@10 -- # set +x
00:04:02.514 02:24:10 -- spdk/autotest.sh@125 -- # timing_enter afterboot
00:04:02.514 02:24:10 -- common/autotest_common.sh@726 -- # xtrace_disable
00:04:02.514 02:24:10 -- common/autotest_common.sh@10 -- # set +x
00:04:02.514 02:24:10 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:03.891 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:04:03.891 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:04:03.891 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:04:03.891 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:04:03.891 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:04:03.891 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:04:03.891 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:04:03.891 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:04:03.891 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:04:03.891 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:04:03.891 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:04:03.891 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:04:03.891 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:04:03.891 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
00:04:03.891 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:04:03.891 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:04:04.827 0000:88:00.0 (8086 0a54): nvme -> vfio-pci
00:04:05.086 02:24:13 -- spdk/autotest.sh@127 -- # timing_exit afterboot
00:04:05.086 02:24:13 -- common/autotest_common.sh@732 -- # xtrace_disable
00:04:05.086 02:24:13 -- common/autotest_common.sh@10 -- # set +x
00:04:05.086 02:24:13 -- spdk/autotest.sh@131 -- # opal_revert_cleanup
00:04:05.086 02:24:13 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs
00:04:05.086 02:24:13 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54
00:04:05.086 02:24:13 -- common/autotest_common.sh@1563 -- # bdfs=()
00:04:05.086 02:24:13 -- common/autotest_common.sh@1563 -- # _bdfs=()
00:04:05.086 02:24:13 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs
00:04:05.086 02:24:13 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs))
00:04:05.086 02:24:13 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs
00:04:05.086 02:24:13 -- common/autotest_common.sh@1498 -- # bdfs=()
00:04:05.086 02:24:13 -- common/autotest_common.sh@1498 -- # local bdfs
00:04:05.086 02:24:13 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:04:05.086 02:24:13 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:04:05.086 02:24:13 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:04:05.086 02:24:13 -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:04:05.086 02:24:13 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0
00:04:05.086 02:24:13 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}"
00:04:05.086 02:24:13 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:88:00.0/device
00:04:05.086 02:24:13 -- common/autotest_common.sh@1566 -- # device=0x0a54
00:04:05.086 02:24:13 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]]
00:04:05.086 02:24:13 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf)
00:04:05.086 02:24:13 -- common/autotest_common.sh@1572 -- # (( 1 > 0 ))
00:04:05.086 02:24:13 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:88:00.0
00:04:05.086 02:24:13 -- common/autotest_common.sh@1579 -- # [[ -z 0000:88:00.0 ]]
00:04:05.086 02:24:13 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=2813228
00:04:05.086 02:24:13 -- common/autotest_common.sh@1583 --
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:05.086 02:24:13 -- common/autotest_common.sh@1585 -- # waitforlisten 2813228 00:04:05.086 02:24:13 -- common/autotest_common.sh@835 -- # '[' -z 2813228 ']' 00:04:05.086 02:24:13 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:05.087 02:24:13 -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:05.087 02:24:13 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:05.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:05.087 02:24:13 -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:05.087 02:24:13 -- common/autotest_common.sh@10 -- # set +x 00:04:05.087 [2024-11-17 02:24:13.539440] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:04:05.087 [2024-11-17 02:24:13.539590] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2813228 ] 00:04:05.345 [2024-11-17 02:24:13.678144] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:05.604 [2024-11-17 02:24:13.817544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:06.540 02:24:14 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:06.540 02:24:14 -- common/autotest_common.sh@868 -- # return 0 00:04:06.540 02:24:14 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:04:06.540 02:24:14 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:04:06.540 02:24:14 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:04:09.822 nvme0n1 00:04:09.822 02:24:17 -- common/autotest_common.sh@1591 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:09.822 [2024-11-17 02:24:18.173544] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:04:09.822 [2024-11-17 02:24:18.173619] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:04:09.822 request: 00:04:09.822 { 00:04:09.822 "nvme_ctrlr_name": "nvme0", 00:04:09.822 "password": "test", 00:04:09.822 "method": "bdev_nvme_opal_revert", 00:04:09.822 "req_id": 1 00:04:09.822 } 00:04:09.822 Got JSON-RPC error response 00:04:09.822 response: 00:04:09.822 { 00:04:09.822 "code": -32603, 00:04:09.822 "message": "Internal error" 00:04:09.822 } 00:04:09.822 02:24:18 -- common/autotest_common.sh@1591 -- # true 00:04:09.822 02:24:18 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:04:09.822 02:24:18 -- common/autotest_common.sh@1595 -- # killprocess 2813228 00:04:09.822 02:24:18 -- common/autotest_common.sh@954 -- # '[' -z 2813228 ']' 00:04:09.822 02:24:18 -- common/autotest_common.sh@958 -- # kill -0 2813228 00:04:09.822 02:24:18 -- common/autotest_common.sh@959 -- # uname 00:04:09.822 02:24:18 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:09.822 02:24:18 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2813228 00:04:09.822 02:24:18 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:09.822 02:24:18 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:09.822 02:24:18 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2813228' 00:04:09.822 killing process with pid 2813228 00:04:09.822 02:24:18 -- common/autotest_common.sh@973 -- # kill 2813228 00:04:09.822 02:24:18 -- common/autotest_common.sh@978 -- # wait 2813228 00:04:14.010 02:24:21 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:14.010 02:24:21 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:14.010 02:24:21 -- spdk/autotest.sh@142 -- # 
[[ 0 -eq 1 ]] 00:04:14.010 02:24:21 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:14.010 02:24:21 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:14.010 02:24:21 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:14.010 02:24:21 -- common/autotest_common.sh@10 -- # set +x 00:04:14.010 02:24:21 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:14.010 02:24:21 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:14.010 02:24:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:14.010 02:24:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:14.010 02:24:21 -- common/autotest_common.sh@10 -- # set +x 00:04:14.010 ************************************ 00:04:14.010 START TEST env 00:04:14.010 ************************************ 00:04:14.010 02:24:21 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:14.010 * Looking for test storage... 
00:04:14.010 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:14.010 02:24:21 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:14.010 02:24:21 env -- common/autotest_common.sh@1693 -- # lcov --version 00:04:14.010 02:24:21 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:14.010 02:24:22 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:14.010 02:24:22 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:14.010 02:24:22 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:14.010 02:24:22 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:14.010 02:24:22 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:14.010 02:24:22 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:14.010 02:24:22 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:14.010 02:24:22 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:14.010 02:24:22 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:14.010 02:24:22 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:14.010 02:24:22 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:14.010 02:24:22 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:14.010 02:24:22 env -- scripts/common.sh@344 -- # case "$op" in 00:04:14.010 02:24:22 env -- scripts/common.sh@345 -- # : 1 00:04:14.010 02:24:22 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:14.010 02:24:22 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:14.010 02:24:22 env -- scripts/common.sh@365 -- # decimal 1 00:04:14.010 02:24:22 env -- scripts/common.sh@353 -- # local d=1 00:04:14.010 02:24:22 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:14.010 02:24:22 env -- scripts/common.sh@355 -- # echo 1 00:04:14.010 02:24:22 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:14.010 02:24:22 env -- scripts/common.sh@366 -- # decimal 2 00:04:14.010 02:24:22 env -- scripts/common.sh@353 -- # local d=2 00:04:14.010 02:24:22 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:14.010 02:24:22 env -- scripts/common.sh@355 -- # echo 2 00:04:14.010 02:24:22 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:14.010 02:24:22 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:14.010 02:24:22 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:14.010 02:24:22 env -- scripts/common.sh@368 -- # return 0 00:04:14.010 02:24:22 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:14.010 02:24:22 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:14.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.010 --rc genhtml_branch_coverage=1 00:04:14.010 --rc genhtml_function_coverage=1 00:04:14.010 --rc genhtml_legend=1 00:04:14.010 --rc geninfo_all_blocks=1 00:04:14.010 --rc geninfo_unexecuted_blocks=1 00:04:14.010 00:04:14.010 ' 00:04:14.010 02:24:22 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:14.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.010 --rc genhtml_branch_coverage=1 00:04:14.010 --rc genhtml_function_coverage=1 00:04:14.010 --rc genhtml_legend=1 00:04:14.010 --rc geninfo_all_blocks=1 00:04:14.010 --rc geninfo_unexecuted_blocks=1 00:04:14.010 00:04:14.010 ' 00:04:14.010 02:24:22 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:14.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:14.010 --rc genhtml_branch_coverage=1 00:04:14.010 --rc genhtml_function_coverage=1 00:04:14.010 --rc genhtml_legend=1 00:04:14.010 --rc geninfo_all_blocks=1 00:04:14.010 --rc geninfo_unexecuted_blocks=1 00:04:14.010 00:04:14.010 ' 00:04:14.010 02:24:22 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:14.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.010 --rc genhtml_branch_coverage=1 00:04:14.010 --rc genhtml_function_coverage=1 00:04:14.010 --rc genhtml_legend=1 00:04:14.010 --rc geninfo_all_blocks=1 00:04:14.010 --rc geninfo_unexecuted_blocks=1 00:04:14.010 00:04:14.010 ' 00:04:14.010 02:24:22 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:14.010 02:24:22 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:14.010 02:24:22 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:14.010 02:24:22 env -- common/autotest_common.sh@10 -- # set +x 00:04:14.010 ************************************ 00:04:14.010 START TEST env_memory 00:04:14.010 ************************************ 00:04:14.010 02:24:22 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:14.010 00:04:14.010 00:04:14.010 CUnit - A unit testing framework for C - Version 2.1-3 00:04:14.010 http://cunit.sourceforge.net/ 00:04:14.010 00:04:14.010 00:04:14.010 Suite: memory 00:04:14.010 Test: alloc and free memory map ...[2024-11-17 02:24:22.136818] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:14.010 passed 00:04:14.011 Test: mem map translation ...[2024-11-17 02:24:22.176996] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:14.011 [2024-11-17 
02:24:22.177037] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:14.011 [2024-11-17 02:24:22.177128] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:14.011 [2024-11-17 02:24:22.177160] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:14.011 passed 00:04:14.011 Test: mem map registration ...[2024-11-17 02:24:22.245786] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:14.011 [2024-11-17 02:24:22.245843] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:14.011 passed 00:04:14.011 Test: mem map adjacent registrations ...passed 00:04:14.011 00:04:14.011 Run Summary: Type Total Ran Passed Failed Inactive 00:04:14.011 suites 1 1 n/a 0 0 00:04:14.011 tests 4 4 4 0 0 00:04:14.011 asserts 152 152 152 0 n/a 00:04:14.011 00:04:14.011 Elapsed time = 0.234 seconds 00:04:14.011 00:04:14.011 real 0m0.255s 00:04:14.011 user 0m0.234s 00:04:14.011 sys 0m0.020s 00:04:14.011 02:24:22 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:14.011 02:24:22 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:14.011 ************************************ 00:04:14.011 END TEST env_memory 00:04:14.011 ************************************ 00:04:14.011 02:24:22 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:14.011 02:24:22 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:04:14.011 02:24:22 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:14.011 02:24:22 env -- common/autotest_common.sh@10 -- # set +x 00:04:14.011 ************************************ 00:04:14.011 START TEST env_vtophys 00:04:14.011 ************************************ 00:04:14.011 02:24:22 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:14.011 EAL: lib.eal log level changed from notice to debug 00:04:14.011 EAL: Detected lcore 0 as core 0 on socket 0 00:04:14.011 EAL: Detected lcore 1 as core 1 on socket 0 00:04:14.011 EAL: Detected lcore 2 as core 2 on socket 0 00:04:14.011 EAL: Detected lcore 3 as core 3 on socket 0 00:04:14.011 EAL: Detected lcore 4 as core 4 on socket 0 00:04:14.011 EAL: Detected lcore 5 as core 5 on socket 0 00:04:14.011 EAL: Detected lcore 6 as core 8 on socket 0 00:04:14.011 EAL: Detected lcore 7 as core 9 on socket 0 00:04:14.011 EAL: Detected lcore 8 as core 10 on socket 0 00:04:14.011 EAL: Detected lcore 9 as core 11 on socket 0 00:04:14.011 EAL: Detected lcore 10 as core 12 on socket 0 00:04:14.011 EAL: Detected lcore 11 as core 13 on socket 0 00:04:14.011 EAL: Detected lcore 12 as core 0 on socket 1 00:04:14.011 EAL: Detected lcore 13 as core 1 on socket 1 00:04:14.011 EAL: Detected lcore 14 as core 2 on socket 1 00:04:14.011 EAL: Detected lcore 15 as core 3 on socket 1 00:04:14.011 EAL: Detected lcore 16 as core 4 on socket 1 00:04:14.011 EAL: Detected lcore 17 as core 5 on socket 1 00:04:14.011 EAL: Detected lcore 18 as core 8 on socket 1 00:04:14.011 EAL: Detected lcore 19 as core 9 on socket 1 00:04:14.011 EAL: Detected lcore 20 as core 10 on socket 1 00:04:14.011 EAL: Detected lcore 21 as core 11 on socket 1 00:04:14.011 EAL: Detected lcore 22 as core 12 on socket 1 00:04:14.011 EAL: Detected lcore 23 as core 13 on socket 1 00:04:14.011 EAL: Detected lcore 24 as core 0 on socket 0 00:04:14.011 EAL: Detected lcore 25 as core 
1 on socket 0 00:04:14.011 EAL: Detected lcore 26 as core 2 on socket 0 00:04:14.011 EAL: Detected lcore 27 as core 3 on socket 0 00:04:14.011 EAL: Detected lcore 28 as core 4 on socket 0 00:04:14.011 EAL: Detected lcore 29 as core 5 on socket 0 00:04:14.011 EAL: Detected lcore 30 as core 8 on socket 0 00:04:14.011 EAL: Detected lcore 31 as core 9 on socket 0 00:04:14.011 EAL: Detected lcore 32 as core 10 on socket 0 00:04:14.011 EAL: Detected lcore 33 as core 11 on socket 0 00:04:14.011 EAL: Detected lcore 34 as core 12 on socket 0 00:04:14.011 EAL: Detected lcore 35 as core 13 on socket 0 00:04:14.011 EAL: Detected lcore 36 as core 0 on socket 1 00:04:14.011 EAL: Detected lcore 37 as core 1 on socket 1 00:04:14.011 EAL: Detected lcore 38 as core 2 on socket 1 00:04:14.011 EAL: Detected lcore 39 as core 3 on socket 1 00:04:14.011 EAL: Detected lcore 40 as core 4 on socket 1 00:04:14.011 EAL: Detected lcore 41 as core 5 on socket 1 00:04:14.011 EAL: Detected lcore 42 as core 8 on socket 1 00:04:14.011 EAL: Detected lcore 43 as core 9 on socket 1 00:04:14.011 EAL: Detected lcore 44 as core 10 on socket 1 00:04:14.011 EAL: Detected lcore 45 as core 11 on socket 1 00:04:14.011 EAL: Detected lcore 46 as core 12 on socket 1 00:04:14.011 EAL: Detected lcore 47 as core 13 on socket 1 00:04:14.011 EAL: Maximum logical cores by configuration: 128 00:04:14.011 EAL: Detected CPU lcores: 48 00:04:14.011 EAL: Detected NUMA nodes: 2 00:04:14.011 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:14.011 EAL: Detected shared linkage of DPDK 00:04:14.011 EAL: No shared files mode enabled, IPC will be disabled 00:04:14.270 EAL: Bus pci wants IOVA as 'DC' 00:04:14.270 EAL: Buses did not request a specific IOVA mode. 00:04:14.271 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:14.271 EAL: Selected IOVA mode 'VA' 00:04:14.271 EAL: Probing VFIO support... 
00:04:14.271 EAL: IOMMU type 1 (Type 1) is supported 00:04:14.271 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:14.271 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:14.271 EAL: VFIO support initialized 00:04:14.271 EAL: Ask a virtual area of 0x2e000 bytes 00:04:14.271 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:14.271 EAL: Setting up physically contiguous memory... 00:04:14.271 EAL: Setting maximum number of open files to 524288 00:04:14.271 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:14.271 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:14.271 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:14.271 EAL: Ask a virtual area of 0x61000 bytes 00:04:14.271 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:14.271 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:14.271 EAL: Ask a virtual area of 0x400000000 bytes 00:04:14.271 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:14.271 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:14.271 EAL: Ask a virtual area of 0x61000 bytes 00:04:14.271 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:14.271 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:14.271 EAL: Ask a virtual area of 0x400000000 bytes 00:04:14.271 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:14.271 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:14.271 EAL: Ask a virtual area of 0x61000 bytes 00:04:14.271 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:14.271 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:14.271 EAL: Ask a virtual area of 0x400000000 bytes 00:04:14.271 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:14.271 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:14.271 EAL: Ask a virtual area of 0x61000 bytes 00:04:14.271 EAL: 
Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:14.271 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:14.271 EAL: Ask a virtual area of 0x400000000 bytes 00:04:14.271 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:14.271 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:14.271 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:14.271 EAL: Ask a virtual area of 0x61000 bytes 00:04:14.271 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:14.271 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:14.271 EAL: Ask a virtual area of 0x400000000 bytes 00:04:14.271 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:14.271 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:14.271 EAL: Ask a virtual area of 0x61000 bytes 00:04:14.271 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:14.271 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:14.271 EAL: Ask a virtual area of 0x400000000 bytes 00:04:14.271 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:14.271 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:14.271 EAL: Ask a virtual area of 0x61000 bytes 00:04:14.271 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:14.271 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:14.271 EAL: Ask a virtual area of 0x400000000 bytes 00:04:14.271 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:14.271 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:14.271 EAL: Ask a virtual area of 0x61000 bytes 00:04:14.271 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:14.271 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:14.271 EAL: Ask a virtual area of 0x400000000 bytes 00:04:14.271 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 
00:04:14.271 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:14.271 EAL: Hugepages will be freed exactly as allocated. 00:04:14.271 EAL: No shared files mode enabled, IPC is disabled 00:04:14.271 EAL: No shared files mode enabled, IPC is disabled 00:04:14.271 EAL: TSC frequency is ~2700000 KHz 00:04:14.271 EAL: Main lcore 0 is ready (tid=7f998a86fa40;cpuset=[0]) 00:04:14.271 EAL: Trying to obtain current memory policy. 00:04:14.271 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:14.271 EAL: Restoring previous memory policy: 0 00:04:14.271 EAL: request: mp_malloc_sync 00:04:14.271 EAL: No shared files mode enabled, IPC is disabled 00:04:14.271 EAL: Heap on socket 0 was expanded by 2MB 00:04:14.271 EAL: No shared files mode enabled, IPC is disabled 00:04:14.271 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:14.271 EAL: Mem event callback 'spdk:(nil)' registered 00:04:14.271 00:04:14.271 00:04:14.271 CUnit - A unit testing framework for C - Version 2.1-3 00:04:14.271 http://cunit.sourceforge.net/ 00:04:14.271 00:04:14.271 00:04:14.271 Suite: components_suite 00:04:14.529 Test: vtophys_malloc_test ...passed 00:04:14.529 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:14.529 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:14.529 EAL: Restoring previous memory policy: 4 00:04:14.529 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.529 EAL: request: mp_malloc_sync 00:04:14.529 EAL: No shared files mode enabled, IPC is disabled 00:04:14.529 EAL: Heap on socket 0 was expanded by 4MB 00:04:14.529 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.529 EAL: request: mp_malloc_sync 00:04:14.529 EAL: No shared files mode enabled, IPC is disabled 00:04:14.529 EAL: Heap on socket 0 was shrunk by 4MB 00:04:14.529 EAL: Trying to obtain current memory policy. 
00:04:14.529 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:14.529 EAL: Restoring previous memory policy: 4 00:04:14.529 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.529 EAL: request: mp_malloc_sync 00:04:14.529 EAL: No shared files mode enabled, IPC is disabled 00:04:14.530 EAL: Heap on socket 0 was expanded by 6MB 00:04:14.788 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.788 EAL: request: mp_malloc_sync 00:04:14.788 EAL: No shared files mode enabled, IPC is disabled 00:04:14.788 EAL: Heap on socket 0 was shrunk by 6MB 00:04:14.788 EAL: Trying to obtain current memory policy. 00:04:14.788 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:14.788 EAL: Restoring previous memory policy: 4 00:04:14.788 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.788 EAL: request: mp_malloc_sync 00:04:14.788 EAL: No shared files mode enabled, IPC is disabled 00:04:14.788 EAL: Heap on socket 0 was expanded by 10MB 00:04:14.788 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.788 EAL: request: mp_malloc_sync 00:04:14.788 EAL: No shared files mode enabled, IPC is disabled 00:04:14.788 EAL: Heap on socket 0 was shrunk by 10MB 00:04:14.788 EAL: Trying to obtain current memory policy. 00:04:14.788 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:14.788 EAL: Restoring previous memory policy: 4 00:04:14.788 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.788 EAL: request: mp_malloc_sync 00:04:14.788 EAL: No shared files mode enabled, IPC is disabled 00:04:14.788 EAL: Heap on socket 0 was expanded by 18MB 00:04:14.788 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.788 EAL: request: mp_malloc_sync 00:04:14.788 EAL: No shared files mode enabled, IPC is disabled 00:04:14.788 EAL: Heap on socket 0 was shrunk by 18MB 00:04:14.788 EAL: Trying to obtain current memory policy. 
00:04:14.788 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:14.788 EAL: Restoring previous memory policy: 4 00:04:14.788 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.788 EAL: request: mp_malloc_sync 00:04:14.788 EAL: No shared files mode enabled, IPC is disabled 00:04:14.788 EAL: Heap on socket 0 was expanded by 34MB 00:04:14.788 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.788 EAL: request: mp_malloc_sync 00:04:14.788 EAL: No shared files mode enabled, IPC is disabled 00:04:14.788 EAL: Heap on socket 0 was shrunk by 34MB 00:04:14.788 EAL: Trying to obtain current memory policy. 00:04:14.788 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:14.788 EAL: Restoring previous memory policy: 4 00:04:14.788 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.788 EAL: request: mp_malloc_sync 00:04:14.788 EAL: No shared files mode enabled, IPC is disabled 00:04:14.788 EAL: Heap on socket 0 was expanded by 66MB 00:04:15.047 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.047 EAL: request: mp_malloc_sync 00:04:15.047 EAL: No shared files mode enabled, IPC is disabled 00:04:15.047 EAL: Heap on socket 0 was shrunk by 66MB 00:04:15.047 EAL: Trying to obtain current memory policy. 00:04:15.047 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:15.047 EAL: Restoring previous memory policy: 4 00:04:15.047 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.047 EAL: request: mp_malloc_sync 00:04:15.047 EAL: No shared files mode enabled, IPC is disabled 00:04:15.047 EAL: Heap on socket 0 was expanded by 130MB 00:04:15.305 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.305 EAL: request: mp_malloc_sync 00:04:15.305 EAL: No shared files mode enabled, IPC is disabled 00:04:15.305 EAL: Heap on socket 0 was shrunk by 130MB 00:04:15.563 EAL: Trying to obtain current memory policy. 
00:04:15.563 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:15.563 EAL: Restoring previous memory policy: 4 00:04:15.563 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.563 EAL: request: mp_malloc_sync 00:04:15.563 EAL: No shared files mode enabled, IPC is disabled 00:04:15.563 EAL: Heap on socket 0 was expanded by 258MB 00:04:16.129 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.129 EAL: request: mp_malloc_sync 00:04:16.129 EAL: No shared files mode enabled, IPC is disabled 00:04:16.129 EAL: Heap on socket 0 was shrunk by 258MB 00:04:16.695 EAL: Trying to obtain current memory policy. 00:04:16.695 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.695 EAL: Restoring previous memory policy: 4 00:04:16.695 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.695 EAL: request: mp_malloc_sync 00:04:16.695 EAL: No shared files mode enabled, IPC is disabled 00:04:16.695 EAL: Heap on socket 0 was expanded by 514MB 00:04:17.630 EAL: Calling mem event callback 'spdk:(nil)' 00:04:17.888 EAL: request: mp_malloc_sync 00:04:17.888 EAL: No shared files mode enabled, IPC is disabled 00:04:17.888 EAL: Heap on socket 0 was shrunk by 514MB 00:04:18.820 EAL: Trying to obtain current memory policy. 
00:04:18.821 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:18.821 EAL: Restoring previous memory policy: 4 00:04:18.821 EAL: Calling mem event callback 'spdk:(nil)' 00:04:18.821 EAL: request: mp_malloc_sync 00:04:18.821 EAL: No shared files mode enabled, IPC is disabled 00:04:18.821 EAL: Heap on socket 0 was expanded by 1026MB 00:04:20.779 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.036 EAL: request: mp_malloc_sync 00:04:21.036 EAL: No shared files mode enabled, IPC is disabled 00:04:21.036 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:22.937 passed 00:04:22.937 00:04:22.937 Run Summary: Type Total Ran Passed Failed Inactive 00:04:22.937 suites 1 1 n/a 0 0 00:04:22.937 tests 2 2 2 0 0 00:04:22.937 asserts 497 497 497 0 n/a 00:04:22.937 00:04:22.937 Elapsed time = 8.255 seconds 00:04:22.937 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.937 EAL: request: mp_malloc_sync 00:04:22.937 EAL: No shared files mode enabled, IPC is disabled 00:04:22.937 EAL: Heap on socket 0 was shrunk by 2MB 00:04:22.937 EAL: No shared files mode enabled, IPC is disabled 00:04:22.937 EAL: No shared files mode enabled, IPC is disabled 00:04:22.938 EAL: No shared files mode enabled, IPC is disabled 00:04:22.938 00:04:22.938 real 0m8.534s 00:04:22.938 user 0m7.401s 00:04:22.938 sys 0m1.070s 00:04:22.938 02:24:30 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:22.938 02:24:30 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:22.938 ************************************ 00:04:22.938 END TEST env_vtophys 00:04:22.938 ************************************ 00:04:22.938 02:24:30 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:22.938 02:24:30 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:22.938 02:24:30 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:22.938 02:24:30 env -- common/autotest_common.sh@10 -- # set +x 00:04:22.938 
************************************ 00:04:22.938 START TEST env_pci 00:04:22.938 ************************************ 00:04:22.938 02:24:30 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:22.938 00:04:22.938 00:04:22.938 CUnit - A unit testing framework for C - Version 2.1-3 00:04:22.938 http://cunit.sourceforge.net/ 00:04:22.938 00:04:22.938 00:04:22.938 Suite: pci 00:04:22.938 Test: pci_hook ...[2024-11-17 02:24:31.005674] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2815330 has claimed it 00:04:22.938 EAL: Cannot find device (10000:00:01.0) 00:04:22.938 EAL: Failed to attach device on primary process 00:04:22.938 passed 00:04:22.938 00:04:22.938 Run Summary: Type Total Ran Passed Failed Inactive 00:04:22.938 suites 1 1 n/a 0 0 00:04:22.938 tests 1 1 1 0 0 00:04:22.938 asserts 25 25 25 0 n/a 00:04:22.938 00:04:22.938 Elapsed time = 0.043 seconds 00:04:22.938 00:04:22.938 real 0m0.097s 00:04:22.938 user 0m0.032s 00:04:22.938 sys 0m0.064s 00:04:22.938 02:24:31 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:22.938 02:24:31 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:22.938 ************************************ 00:04:22.938 END TEST env_pci 00:04:22.938 ************************************ 00:04:22.938 02:24:31 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:22.938 02:24:31 env -- env/env.sh@15 -- # uname 00:04:22.938 02:24:31 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:22.938 02:24:31 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:22.938 02:24:31 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:22.938 02:24:31 env -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:22.938 02:24:31 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:22.938 02:24:31 env -- common/autotest_common.sh@10 -- # set +x 00:04:22.938 ************************************ 00:04:22.938 START TEST env_dpdk_post_init 00:04:22.938 ************************************ 00:04:22.938 02:24:31 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:22.938 EAL: Detected CPU lcores: 48 00:04:22.938 EAL: Detected NUMA nodes: 2 00:04:22.938 EAL: Detected shared linkage of DPDK 00:04:22.938 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:22.938 EAL: Selected IOVA mode 'VA' 00:04:22.938 EAL: VFIO support initialized 00:04:22.938 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:22.938 EAL: Using IOMMU type 1 (Type 1) 00:04:22.938 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:04:23.197 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:04:23.197 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:04:23.197 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:04:23.197 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:04:23.197 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:04:23.197 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:04:23.197 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:04:23.197 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:04:23.197 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:04:23.197 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:04:23.197 EAL: Probe PCI driver: 
spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:04:23.197 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:04:23.197 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:04:23.197 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:04:23.197 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:04:24.133 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:04:27.414 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:04:27.414 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:04:27.414 Starting DPDK initialization... 00:04:27.414 Starting SPDK post initialization... 00:04:27.414 SPDK NVMe probe 00:04:27.414 Attaching to 0000:88:00.0 00:04:27.414 Attached to 0000:88:00.0 00:04:27.414 Cleaning up... 00:04:27.414 00:04:27.414 real 0m4.590s 00:04:27.414 user 0m3.114s 00:04:27.414 sys 0m0.532s 00:04:27.414 02:24:35 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:27.414 02:24:35 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:27.414 ************************************ 00:04:27.414 END TEST env_dpdk_post_init 00:04:27.414 ************************************ 00:04:27.414 02:24:35 env -- env/env.sh@26 -- # uname 00:04:27.414 02:24:35 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:27.414 02:24:35 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:27.414 02:24:35 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:27.414 02:24:35 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:27.414 02:24:35 env -- common/autotest_common.sh@10 -- # set +x 00:04:27.414 ************************************ 00:04:27.414 START TEST env_mem_callbacks 00:04:27.414 ************************************ 00:04:27.414 02:24:35 
env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:27.414 EAL: Detected CPU lcores: 48 00:04:27.414 EAL: Detected NUMA nodes: 2 00:04:27.414 EAL: Detected shared linkage of DPDK 00:04:27.414 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:27.672 EAL: Selected IOVA mode 'VA' 00:04:27.672 EAL: VFIO support initialized 00:04:27.672 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:27.672 00:04:27.672 00:04:27.672 CUnit - A unit testing framework for C - Version 2.1-3 00:04:27.672 http://cunit.sourceforge.net/ 00:04:27.672 00:04:27.672 00:04:27.672 Suite: memory 00:04:27.672 Test: test ... 00:04:27.672 register 0x200000200000 2097152 00:04:27.672 malloc 3145728 00:04:27.672 register 0x200000400000 4194304 00:04:27.672 buf 0x2000004fffc0 len 3145728 PASSED 00:04:27.672 malloc 64 00:04:27.672 buf 0x2000004ffec0 len 64 PASSED 00:04:27.672 malloc 4194304 00:04:27.672 register 0x200000800000 6291456 00:04:27.672 buf 0x2000009fffc0 len 4194304 PASSED 00:04:27.672 free 0x2000004fffc0 3145728 00:04:27.672 free 0x2000004ffec0 64 00:04:27.672 unregister 0x200000400000 4194304 PASSED 00:04:27.672 free 0x2000009fffc0 4194304 00:04:27.672 unregister 0x200000800000 6291456 PASSED 00:04:27.672 malloc 8388608 00:04:27.672 register 0x200000400000 10485760 00:04:27.672 buf 0x2000005fffc0 len 8388608 PASSED 00:04:27.672 free 0x2000005fffc0 8388608 00:04:27.672 unregister 0x200000400000 10485760 PASSED 00:04:27.672 passed 00:04:27.672 00:04:27.672 Run Summary: Type Total Ran Passed Failed Inactive 00:04:27.672 suites 1 1 n/a 0 0 00:04:27.672 tests 1 1 1 0 0 00:04:27.672 asserts 15 15 15 0 n/a 00:04:27.672 00:04:27.672 Elapsed time = 0.060 seconds 00:04:27.672 00:04:27.672 real 0m0.195s 00:04:27.672 user 0m0.108s 00:04:27.672 sys 0m0.086s 00:04:27.672 02:24:35 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:27.672 02:24:35 
env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:27.672 ************************************ 00:04:27.672 END TEST env_mem_callbacks 00:04:27.672 ************************************ 00:04:27.672 00:04:27.672 real 0m14.068s 00:04:27.672 user 0m11.093s 00:04:27.672 sys 0m1.989s 00:04:27.672 02:24:35 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:27.672 02:24:35 env -- common/autotest_common.sh@10 -- # set +x 00:04:27.672 ************************************ 00:04:27.672 END TEST env 00:04:27.672 ************************************ 00:04:27.672 02:24:36 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:27.672 02:24:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:27.672 02:24:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:27.672 02:24:36 -- common/autotest_common.sh@10 -- # set +x 00:04:27.672 ************************************ 00:04:27.672 START TEST rpc 00:04:27.672 ************************************ 00:04:27.672 02:24:36 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:27.672 * Looking for test storage... 
00:04:27.672 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:27.672 02:24:36 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:27.672 02:24:36 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:27.672 02:24:36 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:27.931 02:24:36 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:27.931 02:24:36 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:27.931 02:24:36 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:27.931 02:24:36 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:27.931 02:24:36 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:27.931 02:24:36 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:27.931 02:24:36 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:27.931 02:24:36 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:27.931 02:24:36 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:27.931 02:24:36 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:27.931 02:24:36 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:27.931 02:24:36 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:27.931 02:24:36 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:27.931 02:24:36 rpc -- scripts/common.sh@345 -- # : 1 00:04:27.931 02:24:36 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:27.931 02:24:36 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:27.931 02:24:36 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:27.931 02:24:36 rpc -- scripts/common.sh@353 -- # local d=1 00:04:27.931 02:24:36 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:27.931 02:24:36 rpc -- scripts/common.sh@355 -- # echo 1 00:04:27.931 02:24:36 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:27.931 02:24:36 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:27.931 02:24:36 rpc -- scripts/common.sh@353 -- # local d=2 00:04:27.931 02:24:36 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:27.931 02:24:36 rpc -- scripts/common.sh@355 -- # echo 2 00:04:27.931 02:24:36 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:27.931 02:24:36 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:27.931 02:24:36 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:27.931 02:24:36 rpc -- scripts/common.sh@368 -- # return 0 00:04:27.931 02:24:36 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:27.931 02:24:36 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:27.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.931 --rc genhtml_branch_coverage=1 00:04:27.931 --rc genhtml_function_coverage=1 00:04:27.931 --rc genhtml_legend=1 00:04:27.931 --rc geninfo_all_blocks=1 00:04:27.931 --rc geninfo_unexecuted_blocks=1 00:04:27.931 00:04:27.931 ' 00:04:27.931 02:24:36 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:27.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.931 --rc genhtml_branch_coverage=1 00:04:27.931 --rc genhtml_function_coverage=1 00:04:27.931 --rc genhtml_legend=1 00:04:27.931 --rc geninfo_all_blocks=1 00:04:27.931 --rc geninfo_unexecuted_blocks=1 00:04:27.931 00:04:27.931 ' 00:04:27.931 02:24:36 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:27.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:27.931 --rc genhtml_branch_coverage=1 00:04:27.931 --rc genhtml_function_coverage=1 00:04:27.931 --rc genhtml_legend=1 00:04:27.931 --rc geninfo_all_blocks=1 00:04:27.931 --rc geninfo_unexecuted_blocks=1 00:04:27.931 00:04:27.931 ' 00:04:27.931 02:24:36 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:27.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.931 --rc genhtml_branch_coverage=1 00:04:27.931 --rc genhtml_function_coverage=1 00:04:27.931 --rc genhtml_legend=1 00:04:27.931 --rc geninfo_all_blocks=1 00:04:27.931 --rc geninfo_unexecuted_blocks=1 00:04:27.931 00:04:27.931 ' 00:04:27.931 02:24:36 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2816123 00:04:27.931 02:24:36 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:27.931 02:24:36 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:27.931 02:24:36 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2816123 00:04:27.931 02:24:36 rpc -- common/autotest_common.sh@835 -- # '[' -z 2816123 ']' 00:04:27.931 02:24:36 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:27.931 02:24:36 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:27.931 02:24:36 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:27.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:27.931 02:24:36 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:27.931 02:24:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.931 [2024-11-17 02:24:36.281994] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:04:27.931 [2024-11-17 02:24:36.282195] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2816123 ] 00:04:28.190 [2024-11-17 02:24:36.418092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:28.190 [2024-11-17 02:24:36.548666] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:28.190 [2024-11-17 02:24:36.548763] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2816123' to capture a snapshot of events at runtime. 00:04:28.190 [2024-11-17 02:24:36.548791] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:28.190 [2024-11-17 02:24:36.548812] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:28.190 [2024-11-17 02:24:36.548841] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2816123 for offline analysis/debug. 
00:04:28.190 [2024-11-17 02:24:36.550451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.124 02:24:37 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:29.124 02:24:37 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:29.124 02:24:37 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:29.124 02:24:37 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:29.124 02:24:37 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:29.124 02:24:37 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:29.124 02:24:37 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:29.124 02:24:37 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:29.124 02:24:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.124 ************************************ 00:04:29.124 START TEST rpc_integrity 00:04:29.124 ************************************ 00:04:29.124 02:24:37 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:29.124 02:24:37 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:29.124 02:24:37 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.124 02:24:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:29.124 02:24:37 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:29.124 02:24:37 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:04:29.124 02:24:37 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:29.124 02:24:37 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:29.124 02:24:37 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:29.124 02:24:37 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.124 02:24:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:29.124 02:24:37 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:29.124 02:24:37 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:29.124 02:24:37 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:29.124 02:24:37 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.124 02:24:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:29.382 02:24:37 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:29.382 02:24:37 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:29.382 { 00:04:29.382 "name": "Malloc0", 00:04:29.382 "aliases": [ 00:04:29.382 "9e9e25b5-4615-4d8c-a1e6-ac70cacb735a" 00:04:29.382 ], 00:04:29.382 "product_name": "Malloc disk", 00:04:29.382 "block_size": 512, 00:04:29.382 "num_blocks": 16384, 00:04:29.382 "uuid": "9e9e25b5-4615-4d8c-a1e6-ac70cacb735a", 00:04:29.382 "assigned_rate_limits": { 00:04:29.382 "rw_ios_per_sec": 0, 00:04:29.382 "rw_mbytes_per_sec": 0, 00:04:29.382 "r_mbytes_per_sec": 0, 00:04:29.382 "w_mbytes_per_sec": 0 00:04:29.382 }, 00:04:29.382 "claimed": false, 00:04:29.382 "zoned": false, 00:04:29.382 "supported_io_types": { 00:04:29.382 "read": true, 00:04:29.382 "write": true, 00:04:29.382 "unmap": true, 00:04:29.382 "flush": true, 00:04:29.382 "reset": true, 00:04:29.382 "nvme_admin": false, 00:04:29.382 "nvme_io": false, 00:04:29.382 "nvme_io_md": false, 00:04:29.382 "write_zeroes": true, 00:04:29.382 "zcopy": true, 00:04:29.382 "get_zone_info": false, 00:04:29.382 
"zone_management": false, 00:04:29.382 "zone_append": false, 00:04:29.382 "compare": false, 00:04:29.382 "compare_and_write": false, 00:04:29.382 "abort": true, 00:04:29.382 "seek_hole": false, 00:04:29.382 "seek_data": false, 00:04:29.382 "copy": true, 00:04:29.382 "nvme_iov_md": false 00:04:29.382 }, 00:04:29.382 "memory_domains": [ 00:04:29.382 { 00:04:29.382 "dma_device_id": "system", 00:04:29.382 "dma_device_type": 1 00:04:29.382 }, 00:04:29.382 { 00:04:29.382 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:29.382 "dma_device_type": 2 00:04:29.382 } 00:04:29.382 ], 00:04:29.382 "driver_specific": {} 00:04:29.382 } 00:04:29.382 ]' 00:04:29.383 02:24:37 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:29.383 02:24:37 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:29.383 02:24:37 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:29.383 02:24:37 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.383 02:24:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:29.383 [2024-11-17 02:24:37.626649] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:29.383 [2024-11-17 02:24:37.626717] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:29.383 [2024-11-17 02:24:37.626765] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000022880 00:04:29.383 [2024-11-17 02:24:37.626790] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:29.383 [2024-11-17 02:24:37.629623] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:29.383 [2024-11-17 02:24:37.629661] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:29.383 Passthru0 00:04:29.383 02:24:37 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:29.383 02:24:37 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:04:29.383 02:24:37 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.383 02:24:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:29.383 02:24:37 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:29.383 02:24:37 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:29.383 { 00:04:29.383 "name": "Malloc0", 00:04:29.383 "aliases": [ 00:04:29.383 "9e9e25b5-4615-4d8c-a1e6-ac70cacb735a" 00:04:29.383 ], 00:04:29.383 "product_name": "Malloc disk", 00:04:29.383 "block_size": 512, 00:04:29.383 "num_blocks": 16384, 00:04:29.383 "uuid": "9e9e25b5-4615-4d8c-a1e6-ac70cacb735a", 00:04:29.383 "assigned_rate_limits": { 00:04:29.383 "rw_ios_per_sec": 0, 00:04:29.383 "rw_mbytes_per_sec": 0, 00:04:29.383 "r_mbytes_per_sec": 0, 00:04:29.383 "w_mbytes_per_sec": 0 00:04:29.383 }, 00:04:29.383 "claimed": true, 00:04:29.383 "claim_type": "exclusive_write", 00:04:29.383 "zoned": false, 00:04:29.383 "supported_io_types": { 00:04:29.383 "read": true, 00:04:29.383 "write": true, 00:04:29.383 "unmap": true, 00:04:29.383 "flush": true, 00:04:29.383 "reset": true, 00:04:29.383 "nvme_admin": false, 00:04:29.383 "nvme_io": false, 00:04:29.383 "nvme_io_md": false, 00:04:29.383 "write_zeroes": true, 00:04:29.383 "zcopy": true, 00:04:29.383 "get_zone_info": false, 00:04:29.383 "zone_management": false, 00:04:29.383 "zone_append": false, 00:04:29.383 "compare": false, 00:04:29.383 "compare_and_write": false, 00:04:29.383 "abort": true, 00:04:29.383 "seek_hole": false, 00:04:29.383 "seek_data": false, 00:04:29.383 "copy": true, 00:04:29.383 "nvme_iov_md": false 00:04:29.383 }, 00:04:29.383 "memory_domains": [ 00:04:29.383 { 00:04:29.383 "dma_device_id": "system", 00:04:29.383 "dma_device_type": 1 00:04:29.383 }, 00:04:29.383 { 00:04:29.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:29.383 "dma_device_type": 2 00:04:29.383 } 00:04:29.383 ], 00:04:29.383 "driver_specific": {} 00:04:29.383 }, 00:04:29.383 { 
00:04:29.383 "name": "Passthru0", 00:04:29.383 "aliases": [ 00:04:29.383 "cb0b0b59-e3d1-5861-a9ef-fc6d425e035d" 00:04:29.383 ], 00:04:29.383 "product_name": "passthru", 00:04:29.383 "block_size": 512, 00:04:29.383 "num_blocks": 16384, 00:04:29.383 "uuid": "cb0b0b59-e3d1-5861-a9ef-fc6d425e035d", 00:04:29.383 "assigned_rate_limits": { 00:04:29.383 "rw_ios_per_sec": 0, 00:04:29.383 "rw_mbytes_per_sec": 0, 00:04:29.383 "r_mbytes_per_sec": 0, 00:04:29.383 "w_mbytes_per_sec": 0 00:04:29.383 }, 00:04:29.383 "claimed": false, 00:04:29.383 "zoned": false, 00:04:29.383 "supported_io_types": { 00:04:29.383 "read": true, 00:04:29.383 "write": true, 00:04:29.383 "unmap": true, 00:04:29.383 "flush": true, 00:04:29.383 "reset": true, 00:04:29.383 "nvme_admin": false, 00:04:29.383 "nvme_io": false, 00:04:29.383 "nvme_io_md": false, 00:04:29.383 "write_zeroes": true, 00:04:29.383 "zcopy": true, 00:04:29.383 "get_zone_info": false, 00:04:29.383 "zone_management": false, 00:04:29.383 "zone_append": false, 00:04:29.383 "compare": false, 00:04:29.383 "compare_and_write": false, 00:04:29.383 "abort": true, 00:04:29.383 "seek_hole": false, 00:04:29.383 "seek_data": false, 00:04:29.383 "copy": true, 00:04:29.383 "nvme_iov_md": false 00:04:29.383 }, 00:04:29.383 "memory_domains": [ 00:04:29.383 { 00:04:29.383 "dma_device_id": "system", 00:04:29.383 "dma_device_type": 1 00:04:29.383 }, 00:04:29.383 { 00:04:29.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:29.383 "dma_device_type": 2 00:04:29.383 } 00:04:29.383 ], 00:04:29.383 "driver_specific": { 00:04:29.383 "passthru": { 00:04:29.383 "name": "Passthru0", 00:04:29.383 "base_bdev_name": "Malloc0" 00:04:29.383 } 00:04:29.383 } 00:04:29.383 } 00:04:29.383 ]' 00:04:29.383 02:24:37 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:29.383 02:24:37 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:29.383 02:24:37 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:29.383 02:24:37 
rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.383 02:24:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:29.383 02:24:37 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:29.383 02:24:37 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:29.383 02:24:37 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.383 02:24:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:29.383 02:24:37 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:29.383 02:24:37 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:29.383 02:24:37 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.383 02:24:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:29.383 02:24:37 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:29.383 02:24:37 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:29.383 02:24:37 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:29.383 02:24:37 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:29.383 00:04:29.383 real 0m0.262s 00:04:29.383 user 0m0.152s 00:04:29.383 sys 0m0.024s 00:04:29.383 02:24:37 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:29.383 02:24:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:29.383 ************************************ 00:04:29.383 END TEST rpc_integrity 00:04:29.383 ************************************ 00:04:29.383 02:24:37 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:29.383 02:24:37 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:29.383 02:24:37 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:29.383 02:24:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.383 ************************************ 00:04:29.383 START TEST rpc_plugins 
00:04:29.383 ************************************ 00:04:29.383 02:24:37 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:29.383 02:24:37 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:29.383 02:24:37 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.383 02:24:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:29.383 02:24:37 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:29.383 02:24:37 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:29.383 02:24:37 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:29.383 02:24:37 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.383 02:24:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:29.383 02:24:37 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:29.383 02:24:37 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:29.383 { 00:04:29.383 "name": "Malloc1", 00:04:29.383 "aliases": [ 00:04:29.383 "d3d454c7-0fd9-4257-a996-9006fa6f1f23" 00:04:29.383 ], 00:04:29.383 "product_name": "Malloc disk", 00:04:29.383 "block_size": 4096, 00:04:29.383 "num_blocks": 256, 00:04:29.383 "uuid": "d3d454c7-0fd9-4257-a996-9006fa6f1f23", 00:04:29.383 "assigned_rate_limits": { 00:04:29.383 "rw_ios_per_sec": 0, 00:04:29.383 "rw_mbytes_per_sec": 0, 00:04:29.383 "r_mbytes_per_sec": 0, 00:04:29.383 "w_mbytes_per_sec": 0 00:04:29.383 }, 00:04:29.383 "claimed": false, 00:04:29.383 "zoned": false, 00:04:29.383 "supported_io_types": { 00:04:29.383 "read": true, 00:04:29.383 "write": true, 00:04:29.383 "unmap": true, 00:04:29.383 "flush": true, 00:04:29.383 "reset": true, 00:04:29.383 "nvme_admin": false, 00:04:29.383 "nvme_io": false, 00:04:29.383 "nvme_io_md": false, 00:04:29.383 "write_zeroes": true, 00:04:29.383 "zcopy": true, 00:04:29.383 "get_zone_info": false, 00:04:29.383 "zone_management": false, 00:04:29.383 
"zone_append": false, 00:04:29.383 "compare": false, 00:04:29.383 "compare_and_write": false, 00:04:29.383 "abort": true, 00:04:29.383 "seek_hole": false, 00:04:29.383 "seek_data": false, 00:04:29.383 "copy": true, 00:04:29.383 "nvme_iov_md": false 00:04:29.383 }, 00:04:29.383 "memory_domains": [ 00:04:29.383 { 00:04:29.383 "dma_device_id": "system", 00:04:29.383 "dma_device_type": 1 00:04:29.383 }, 00:04:29.383 { 00:04:29.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:29.383 "dma_device_type": 2 00:04:29.383 } 00:04:29.383 ], 00:04:29.383 "driver_specific": {} 00:04:29.383 } 00:04:29.383 ]' 00:04:29.384 02:24:37 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:29.642 02:24:37 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:29.642 02:24:37 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:29.642 02:24:37 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.642 02:24:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:29.642 02:24:37 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:29.642 02:24:37 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:29.642 02:24:37 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.642 02:24:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:29.642 02:24:37 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:29.642 02:24:37 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:29.642 02:24:37 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:29.642 02:24:37 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:29.642 00:04:29.642 real 0m0.120s 00:04:29.642 user 0m0.073s 00:04:29.642 sys 0m0.012s 00:04:29.642 02:24:37 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:29.642 02:24:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:29.642 ************************************ 
00:04:29.642 END TEST rpc_plugins 00:04:29.642 ************************************ 00:04:29.642 02:24:37 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:29.642 02:24:37 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:29.642 02:24:37 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:29.642 02:24:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.642 ************************************ 00:04:29.642 START TEST rpc_trace_cmd_test 00:04:29.642 ************************************ 00:04:29.642 02:24:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:29.642 02:24:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:29.642 02:24:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:29.642 02:24:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.642 02:24:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:29.642 02:24:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:29.642 02:24:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:29.642 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2816123", 00:04:29.642 "tpoint_group_mask": "0x8", 00:04:29.642 "iscsi_conn": { 00:04:29.642 "mask": "0x2", 00:04:29.642 "tpoint_mask": "0x0" 00:04:29.642 }, 00:04:29.642 "scsi": { 00:04:29.642 "mask": "0x4", 00:04:29.642 "tpoint_mask": "0x0" 00:04:29.642 }, 00:04:29.642 "bdev": { 00:04:29.642 "mask": "0x8", 00:04:29.642 "tpoint_mask": "0xffffffffffffffff" 00:04:29.642 }, 00:04:29.642 "nvmf_rdma": { 00:04:29.642 "mask": "0x10", 00:04:29.642 "tpoint_mask": "0x0" 00:04:29.642 }, 00:04:29.642 "nvmf_tcp": { 00:04:29.642 "mask": "0x20", 00:04:29.642 "tpoint_mask": "0x0" 00:04:29.642 }, 00:04:29.642 "ftl": { 00:04:29.642 "mask": "0x40", 00:04:29.642 "tpoint_mask": "0x0" 00:04:29.642 }, 00:04:29.642 "blobfs": { 00:04:29.642 "mask": "0x80", 00:04:29.642 
"tpoint_mask": "0x0" 00:04:29.642 }, 00:04:29.642 "dsa": { 00:04:29.642 "mask": "0x200", 00:04:29.642 "tpoint_mask": "0x0" 00:04:29.642 }, 00:04:29.642 "thread": { 00:04:29.642 "mask": "0x400", 00:04:29.642 "tpoint_mask": "0x0" 00:04:29.642 }, 00:04:29.642 "nvme_pcie": { 00:04:29.642 "mask": "0x800", 00:04:29.642 "tpoint_mask": "0x0" 00:04:29.642 }, 00:04:29.642 "iaa": { 00:04:29.642 "mask": "0x1000", 00:04:29.642 "tpoint_mask": "0x0" 00:04:29.642 }, 00:04:29.642 "nvme_tcp": { 00:04:29.642 "mask": "0x2000", 00:04:29.642 "tpoint_mask": "0x0" 00:04:29.642 }, 00:04:29.642 "bdev_nvme": { 00:04:29.642 "mask": "0x4000", 00:04:29.642 "tpoint_mask": "0x0" 00:04:29.642 }, 00:04:29.642 "sock": { 00:04:29.642 "mask": "0x8000", 00:04:29.642 "tpoint_mask": "0x0" 00:04:29.642 }, 00:04:29.642 "blob": { 00:04:29.642 "mask": "0x10000", 00:04:29.642 "tpoint_mask": "0x0" 00:04:29.642 }, 00:04:29.642 "bdev_raid": { 00:04:29.642 "mask": "0x20000", 00:04:29.642 "tpoint_mask": "0x0" 00:04:29.642 }, 00:04:29.642 "scheduler": { 00:04:29.642 "mask": "0x40000", 00:04:29.642 "tpoint_mask": "0x0" 00:04:29.642 } 00:04:29.642 }' 00:04:29.642 02:24:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:29.642 02:24:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:29.642 02:24:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:29.642 02:24:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:29.642 02:24:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:29.642 02:24:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:29.642 02:24:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:29.901 02:24:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:29.901 02:24:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:29.901 02:24:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:04:29.901 00:04:29.901 real 0m0.203s 00:04:29.901 user 0m0.184s 00:04:29.901 sys 0m0.012s 00:04:29.901 02:24:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:29.901 02:24:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:29.901 ************************************ 00:04:29.901 END TEST rpc_trace_cmd_test 00:04:29.901 ************************************ 00:04:29.901 02:24:38 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:29.901 02:24:38 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:29.901 02:24:38 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:29.901 02:24:38 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:29.901 02:24:38 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:29.901 02:24:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.901 ************************************ 00:04:29.901 START TEST rpc_daemon_integrity 00:04:29.901 ************************************ 00:04:29.901 02:24:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:29.901 02:24:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:29.901 02:24:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.901 02:24:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:29.901 02:24:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:29.901 02:24:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:29.901 02:24:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:29.901 02:24:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:29.901 02:24:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:29.901 02:24:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.901 02:24:38 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:04:29.901 02:24:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:29.901 02:24:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:29.901 02:24:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:29.901 02:24:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.901 02:24:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:29.901 02:24:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:29.902 02:24:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:29.902 { 00:04:29.902 "name": "Malloc2", 00:04:29.902 "aliases": [ 00:04:29.902 "557fa057-a2dd-4536-b189-891e5b23356a" 00:04:29.902 ], 00:04:29.902 "product_name": "Malloc disk", 00:04:29.902 "block_size": 512, 00:04:29.902 "num_blocks": 16384, 00:04:29.902 "uuid": "557fa057-a2dd-4536-b189-891e5b23356a", 00:04:29.902 "assigned_rate_limits": { 00:04:29.902 "rw_ios_per_sec": 0, 00:04:29.902 "rw_mbytes_per_sec": 0, 00:04:29.902 "r_mbytes_per_sec": 0, 00:04:29.902 "w_mbytes_per_sec": 0 00:04:29.902 }, 00:04:29.902 "claimed": false, 00:04:29.902 "zoned": false, 00:04:29.902 "supported_io_types": { 00:04:29.902 "read": true, 00:04:29.902 "write": true, 00:04:29.902 "unmap": true, 00:04:29.902 "flush": true, 00:04:29.902 "reset": true, 00:04:29.902 "nvme_admin": false, 00:04:29.902 "nvme_io": false, 00:04:29.902 "nvme_io_md": false, 00:04:29.902 "write_zeroes": true, 00:04:29.902 "zcopy": true, 00:04:29.902 "get_zone_info": false, 00:04:29.902 "zone_management": false, 00:04:29.902 "zone_append": false, 00:04:29.902 "compare": false, 00:04:29.902 "compare_and_write": false, 00:04:29.902 "abort": true, 00:04:29.902 "seek_hole": false, 00:04:29.902 "seek_data": false, 00:04:29.902 "copy": true, 00:04:29.902 "nvme_iov_md": false 00:04:29.902 }, 00:04:29.902 "memory_domains": [ 00:04:29.902 { 
00:04:29.902 "dma_device_id": "system", 00:04:29.902 "dma_device_type": 1 00:04:29.902 }, 00:04:29.902 { 00:04:29.902 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:29.902 "dma_device_type": 2 00:04:29.902 } 00:04:29.902 ], 00:04:29.902 "driver_specific": {} 00:04:29.902 } 00:04:29.902 ]' 00:04:29.902 02:24:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:29.902 02:24:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:29.902 02:24:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:29.902 02:24:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.902 02:24:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:29.902 [2024-11-17 02:24:38.339823] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:29.902 [2024-11-17 02:24:38.339882] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:29.902 [2024-11-17 02:24:38.339927] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000023a80 00:04:29.902 [2024-11-17 02:24:38.339952] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:29.902 [2024-11-17 02:24:38.342742] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:29.902 [2024-11-17 02:24:38.342778] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:29.902 Passthru0 00:04:29.902 02:24:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:29.902 02:24:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:29.902 02:24:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.902 02:24:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:29.902 02:24:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:04:29.902 02:24:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:29.902 { 00:04:29.902 "name": "Malloc2", 00:04:29.902 "aliases": [ 00:04:29.902 "557fa057-a2dd-4536-b189-891e5b23356a" 00:04:29.902 ], 00:04:30.160 "product_name": "Malloc disk", 00:04:30.160 "block_size": 512, 00:04:30.160 "num_blocks": 16384, 00:04:30.160 "uuid": "557fa057-a2dd-4536-b189-891e5b23356a", 00:04:30.160 "assigned_rate_limits": { 00:04:30.160 "rw_ios_per_sec": 0, 00:04:30.160 "rw_mbytes_per_sec": 0, 00:04:30.160 "r_mbytes_per_sec": 0, 00:04:30.160 "w_mbytes_per_sec": 0 00:04:30.161 }, 00:04:30.161 "claimed": true, 00:04:30.161 "claim_type": "exclusive_write", 00:04:30.161 "zoned": false, 00:04:30.161 "supported_io_types": { 00:04:30.161 "read": true, 00:04:30.161 "write": true, 00:04:30.161 "unmap": true, 00:04:30.161 "flush": true, 00:04:30.161 "reset": true, 00:04:30.161 "nvme_admin": false, 00:04:30.161 "nvme_io": false, 00:04:30.161 "nvme_io_md": false, 00:04:30.161 "write_zeroes": true, 00:04:30.161 "zcopy": true, 00:04:30.161 "get_zone_info": false, 00:04:30.161 "zone_management": false, 00:04:30.161 "zone_append": false, 00:04:30.161 "compare": false, 00:04:30.161 "compare_and_write": false, 00:04:30.161 "abort": true, 00:04:30.161 "seek_hole": false, 00:04:30.161 "seek_data": false, 00:04:30.161 "copy": true, 00:04:30.161 "nvme_iov_md": false 00:04:30.161 }, 00:04:30.161 "memory_domains": [ 00:04:30.161 { 00:04:30.161 "dma_device_id": "system", 00:04:30.161 "dma_device_type": 1 00:04:30.161 }, 00:04:30.161 { 00:04:30.161 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:30.161 "dma_device_type": 2 00:04:30.161 } 00:04:30.161 ], 00:04:30.161 "driver_specific": {} 00:04:30.161 }, 00:04:30.161 { 00:04:30.161 "name": "Passthru0", 00:04:30.161 "aliases": [ 00:04:30.161 "07744903-10cf-5e36-88bd-3bbba0775a4c" 00:04:30.161 ], 00:04:30.161 "product_name": "passthru", 00:04:30.161 "block_size": 512, 00:04:30.161 "num_blocks": 16384, 00:04:30.161 "uuid": 
"07744903-10cf-5e36-88bd-3bbba0775a4c", 00:04:30.161 "assigned_rate_limits": { 00:04:30.161 "rw_ios_per_sec": 0, 00:04:30.161 "rw_mbytes_per_sec": 0, 00:04:30.161 "r_mbytes_per_sec": 0, 00:04:30.161 "w_mbytes_per_sec": 0 00:04:30.161 }, 00:04:30.161 "claimed": false, 00:04:30.161 "zoned": false, 00:04:30.161 "supported_io_types": { 00:04:30.161 "read": true, 00:04:30.161 "write": true, 00:04:30.161 "unmap": true, 00:04:30.161 "flush": true, 00:04:30.161 "reset": true, 00:04:30.161 "nvme_admin": false, 00:04:30.161 "nvme_io": false, 00:04:30.161 "nvme_io_md": false, 00:04:30.161 "write_zeroes": true, 00:04:30.161 "zcopy": true, 00:04:30.161 "get_zone_info": false, 00:04:30.161 "zone_management": false, 00:04:30.161 "zone_append": false, 00:04:30.161 "compare": false, 00:04:30.161 "compare_and_write": false, 00:04:30.161 "abort": true, 00:04:30.161 "seek_hole": false, 00:04:30.161 "seek_data": false, 00:04:30.161 "copy": true, 00:04:30.161 "nvme_iov_md": false 00:04:30.161 }, 00:04:30.161 "memory_domains": [ 00:04:30.161 { 00:04:30.161 "dma_device_id": "system", 00:04:30.161 "dma_device_type": 1 00:04:30.161 }, 00:04:30.161 { 00:04:30.161 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:30.161 "dma_device_type": 2 00:04:30.161 } 00:04:30.161 ], 00:04:30.161 "driver_specific": { 00:04:30.161 "passthru": { 00:04:30.161 "name": "Passthru0", 00:04:30.161 "base_bdev_name": "Malloc2" 00:04:30.161 } 00:04:30.161 } 00:04:30.161 } 00:04:30.161 ]' 00:04:30.161 02:24:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:30.161 02:24:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:30.161 02:24:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:30.161 02:24:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.161 02:24:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.161 02:24:38 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.161 02:24:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:30.161 02:24:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.161 02:24:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.161 02:24:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.161 02:24:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:30.161 02:24:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.161 02:24:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.161 02:24:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.161 02:24:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:30.161 02:24:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:30.161 02:24:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:30.161 00:04:30.161 real 0m0.263s 00:04:30.161 user 0m0.153s 00:04:30.161 sys 0m0.019s 00:04:30.161 02:24:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:30.161 02:24:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.161 ************************************ 00:04:30.161 END TEST rpc_daemon_integrity 00:04:30.161 ************************************ 00:04:30.161 02:24:38 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:30.161 02:24:38 rpc -- rpc/rpc.sh@84 -- # killprocess 2816123 00:04:30.161 02:24:38 rpc -- common/autotest_common.sh@954 -- # '[' -z 2816123 ']' 00:04:30.161 02:24:38 rpc -- common/autotest_common.sh@958 -- # kill -0 2816123 00:04:30.161 02:24:38 rpc -- common/autotest_common.sh@959 -- # uname 00:04:30.161 02:24:38 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:30.161 02:24:38 rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2816123 00:04:30.161 02:24:38 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:30.161 02:24:38 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:30.161 02:24:38 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2816123' 00:04:30.161 killing process with pid 2816123 00:04:30.161 02:24:38 rpc -- common/autotest_common.sh@973 -- # kill 2816123 00:04:30.161 02:24:38 rpc -- common/autotest_common.sh@978 -- # wait 2816123 00:04:32.690 00:04:32.690 real 0m4.919s 00:04:32.690 user 0m5.463s 00:04:32.690 sys 0m0.816s 00:04:32.690 02:24:40 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:32.690 02:24:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.690 ************************************ 00:04:32.690 END TEST rpc 00:04:32.690 ************************************ 00:04:32.690 02:24:40 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:32.690 02:24:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:32.690 02:24:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:32.690 02:24:40 -- common/autotest_common.sh@10 -- # set +x 00:04:32.690 ************************************ 00:04:32.690 START TEST skip_rpc 00:04:32.690 ************************************ 00:04:32.690 02:24:41 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:32.690 * Looking for test storage... 
00:04:32.690 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:32.690 02:24:41 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:32.690 02:24:41 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:32.690 02:24:41 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:32.690 02:24:41 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:32.690 02:24:41 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:32.690 02:24:41 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:32.690 02:24:41 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:32.690 02:24:41 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:32.690 02:24:41 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:32.690 02:24:41 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:32.690 02:24:41 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:32.690 02:24:41 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:32.690 02:24:41 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:32.690 02:24:41 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:32.690 02:24:41 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:32.690 02:24:41 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:32.690 02:24:41 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:32.690 02:24:41 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:32.690 02:24:41 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:32.690 02:24:41 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:32.690 02:24:41 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:32.690 02:24:41 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:32.690 02:24:41 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:32.690 02:24:41 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:32.690 02:24:41 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:32.690 02:24:41 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:32.690 02:24:41 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:32.690 02:24:41 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:32.690 02:24:41 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:32.690 02:24:41 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:32.690 02:24:41 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:32.690 02:24:41 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:32.690 02:24:41 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:32.690 02:24:41 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:32.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.690 --rc genhtml_branch_coverage=1 00:04:32.690 --rc genhtml_function_coverage=1 00:04:32.690 --rc genhtml_legend=1 00:04:32.690 --rc geninfo_all_blocks=1 00:04:32.690 --rc geninfo_unexecuted_blocks=1 00:04:32.690 00:04:32.690 ' 00:04:32.690 02:24:41 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:32.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.690 --rc genhtml_branch_coverage=1 00:04:32.690 --rc genhtml_function_coverage=1 00:04:32.691 --rc genhtml_legend=1 00:04:32.691 --rc geninfo_all_blocks=1 00:04:32.691 --rc geninfo_unexecuted_blocks=1 00:04:32.691 00:04:32.691 ' 00:04:32.691 02:24:41 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:04:32.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.691 --rc genhtml_branch_coverage=1 00:04:32.691 --rc genhtml_function_coverage=1 00:04:32.691 --rc genhtml_legend=1 00:04:32.691 --rc geninfo_all_blocks=1 00:04:32.691 --rc geninfo_unexecuted_blocks=1 00:04:32.691 00:04:32.691 ' 00:04:32.691 02:24:41 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:32.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.691 --rc genhtml_branch_coverage=1 00:04:32.691 --rc genhtml_function_coverage=1 00:04:32.691 --rc genhtml_legend=1 00:04:32.691 --rc geninfo_all_blocks=1 00:04:32.691 --rc geninfo_unexecuted_blocks=1 00:04:32.691 00:04:32.691 ' 00:04:32.691 02:24:41 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:32.691 02:24:41 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:32.691 02:24:41 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:32.691 02:24:41 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:32.691 02:24:41 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:32.691 02:24:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.949 ************************************ 00:04:32.949 START TEST skip_rpc 00:04:32.949 ************************************ 00:04:32.949 02:24:41 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:32.949 02:24:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2816866 00:04:32.949 02:24:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:32.949 02:24:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:32.949 02:24:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 
00:04:32.949 [2024-11-17 02:24:41.281491] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:04:32.949 [2024-11-17 02:24:41.281650] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2816866 ] 00:04:33.207 [2024-11-17 02:24:41.443886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:33.207 [2024-11-17 02:24:41.582779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.470 02:24:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:38.470 02:24:46 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:38.470 02:24:46 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:38.470 02:24:46 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:38.470 02:24:46 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:38.470 02:24:46 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:38.470 02:24:46 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:38.470 02:24:46 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:38.470 02:24:46 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:38.470 02:24:46 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.470 02:24:46 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:38.470 02:24:46 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:38.470 02:24:46 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:38.470 02:24:46 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:38.470 02:24:46 
skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:38.470 02:24:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:38.470 02:24:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2816866 00:04:38.470 02:24:46 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 2816866 ']' 00:04:38.470 02:24:46 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 2816866 00:04:38.470 02:24:46 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:38.470 02:24:46 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:38.470 02:24:46 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2816866 00:04:38.470 02:24:46 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:38.470 02:24:46 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:38.470 02:24:46 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2816866' 00:04:38.470 killing process with pid 2816866 00:04:38.470 02:24:46 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 2816866 00:04:38.470 02:24:46 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 2816866 00:04:40.370 00:04:40.370 real 0m7.437s 00:04:40.370 user 0m6.904s 00:04:40.370 sys 0m0.523s 00:04:40.370 02:24:48 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:40.370 02:24:48 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.370 ************************************ 00:04:40.370 END TEST skip_rpc 00:04:40.370 ************************************ 00:04:40.370 02:24:48 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:40.370 02:24:48 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:40.370 02:24:48 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:40.370 02:24:48 
skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.370 ************************************ 00:04:40.370 START TEST skip_rpc_with_json 00:04:40.370 ************************************ 00:04:40.370 02:24:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:40.370 02:24:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:40.370 02:24:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2817810 00:04:40.371 02:24:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:40.371 02:24:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:40.371 02:24:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2817810 00:04:40.371 02:24:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 2817810 ']' 00:04:40.371 02:24:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:40.371 02:24:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:40.371 02:24:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:40.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:40.371 02:24:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:40.371 02:24:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:40.371 [2024-11-17 02:24:48.758841] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:04:40.371 [2024-11-17 02:24:48.759011] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2817810 ] 00:04:40.629 [2024-11-17 02:24:48.899084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.629 [2024-11-17 02:24:49.033188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.561 02:24:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:41.561 02:24:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:41.561 02:24:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:41.561 02:24:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:41.561 02:24:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:41.561 [2024-11-17 02:24:49.950121] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:41.561 request: 00:04:41.561 { 00:04:41.561 "trtype": "tcp", 00:04:41.561 "method": "nvmf_get_transports", 00:04:41.561 "req_id": 1 00:04:41.561 } 00:04:41.561 Got JSON-RPC error response 00:04:41.561 response: 00:04:41.561 { 00:04:41.561 "code": -19, 00:04:41.561 "message": "No such device" 00:04:41.561 } 00:04:41.561 02:24:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:41.561 02:24:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:41.561 02:24:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:41.561 02:24:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:41.561 [2024-11-17 02:24:49.958285] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:41.561 02:24:49 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:41.561 02:24:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:41.561 02:24:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:41.561 02:24:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:41.820 02:24:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:41.820 02:24:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:41.820 { 00:04:41.820 "subsystems": [ 00:04:41.820 { 00:04:41.820 "subsystem": "fsdev", 00:04:41.820 "config": [ 00:04:41.820 { 00:04:41.820 "method": "fsdev_set_opts", 00:04:41.820 "params": { 00:04:41.820 "fsdev_io_pool_size": 65535, 00:04:41.820 "fsdev_io_cache_size": 256 00:04:41.820 } 00:04:41.820 } 00:04:41.820 ] 00:04:41.820 }, 00:04:41.820 { 00:04:41.820 "subsystem": "keyring", 00:04:41.820 "config": [] 00:04:41.820 }, 00:04:41.820 { 00:04:41.820 "subsystem": "iobuf", 00:04:41.820 "config": [ 00:04:41.820 { 00:04:41.820 "method": "iobuf_set_options", 00:04:41.820 "params": { 00:04:41.820 "small_pool_count": 8192, 00:04:41.820 "large_pool_count": 1024, 00:04:41.820 "small_bufsize": 8192, 00:04:41.820 "large_bufsize": 135168, 00:04:41.820 "enable_numa": false 00:04:41.820 } 00:04:41.820 } 00:04:41.820 ] 00:04:41.820 }, 00:04:41.820 { 00:04:41.820 "subsystem": "sock", 00:04:41.820 "config": [ 00:04:41.820 { 00:04:41.820 "method": "sock_set_default_impl", 00:04:41.820 "params": { 00:04:41.820 "impl_name": "posix" 00:04:41.820 } 00:04:41.820 }, 00:04:41.820 { 00:04:41.820 "method": "sock_impl_set_options", 00:04:41.820 "params": { 00:04:41.820 "impl_name": "ssl", 00:04:41.820 "recv_buf_size": 4096, 00:04:41.820 "send_buf_size": 4096, 00:04:41.820 "enable_recv_pipe": true, 00:04:41.820 "enable_quickack": false, 00:04:41.820 
"enable_placement_id": 0, 00:04:41.820 "enable_zerocopy_send_server": true, 00:04:41.820 "enable_zerocopy_send_client": false, 00:04:41.820 "zerocopy_threshold": 0, 00:04:41.820 "tls_version": 0, 00:04:41.820 "enable_ktls": false 00:04:41.820 } 00:04:41.820 }, 00:04:41.820 { 00:04:41.820 "method": "sock_impl_set_options", 00:04:41.820 "params": { 00:04:41.820 "impl_name": "posix", 00:04:41.820 "recv_buf_size": 2097152, 00:04:41.820 "send_buf_size": 2097152, 00:04:41.820 "enable_recv_pipe": true, 00:04:41.820 "enable_quickack": false, 00:04:41.820 "enable_placement_id": 0, 00:04:41.820 "enable_zerocopy_send_server": true, 00:04:41.820 "enable_zerocopy_send_client": false, 00:04:41.820 "zerocopy_threshold": 0, 00:04:41.820 "tls_version": 0, 00:04:41.820 "enable_ktls": false 00:04:41.820 } 00:04:41.820 } 00:04:41.820 ] 00:04:41.820 }, 00:04:41.820 { 00:04:41.820 "subsystem": "vmd", 00:04:41.820 "config": [] 00:04:41.820 }, 00:04:41.820 { 00:04:41.820 "subsystem": "accel", 00:04:41.820 "config": [ 00:04:41.820 { 00:04:41.820 "method": "accel_set_options", 00:04:41.820 "params": { 00:04:41.820 "small_cache_size": 128, 00:04:41.820 "large_cache_size": 16, 00:04:41.820 "task_count": 2048, 00:04:41.820 "sequence_count": 2048, 00:04:41.820 "buf_count": 2048 00:04:41.820 } 00:04:41.820 } 00:04:41.820 ] 00:04:41.820 }, 00:04:41.820 { 00:04:41.820 "subsystem": "bdev", 00:04:41.820 "config": [ 00:04:41.820 { 00:04:41.820 "method": "bdev_set_options", 00:04:41.820 "params": { 00:04:41.820 "bdev_io_pool_size": 65535, 00:04:41.820 "bdev_io_cache_size": 256, 00:04:41.820 "bdev_auto_examine": true, 00:04:41.821 "iobuf_small_cache_size": 128, 00:04:41.821 "iobuf_large_cache_size": 16 00:04:41.821 } 00:04:41.821 }, 00:04:41.821 { 00:04:41.821 "method": "bdev_raid_set_options", 00:04:41.821 "params": { 00:04:41.821 "process_window_size_kb": 1024, 00:04:41.821 "process_max_bandwidth_mb_sec": 0 00:04:41.821 } 00:04:41.821 }, 00:04:41.821 { 00:04:41.821 "method": "bdev_iscsi_set_options", 
00:04:41.821 "params": { 00:04:41.821 "timeout_sec": 30 00:04:41.821 } 00:04:41.821 }, 00:04:41.821 { 00:04:41.821 "method": "bdev_nvme_set_options", 00:04:41.821 "params": { 00:04:41.821 "action_on_timeout": "none", 00:04:41.821 "timeout_us": 0, 00:04:41.821 "timeout_admin_us": 0, 00:04:41.821 "keep_alive_timeout_ms": 10000, 00:04:41.821 "arbitration_burst": 0, 00:04:41.821 "low_priority_weight": 0, 00:04:41.821 "medium_priority_weight": 0, 00:04:41.821 "high_priority_weight": 0, 00:04:41.821 "nvme_adminq_poll_period_us": 10000, 00:04:41.821 "nvme_ioq_poll_period_us": 0, 00:04:41.821 "io_queue_requests": 0, 00:04:41.821 "delay_cmd_submit": true, 00:04:41.821 "transport_retry_count": 4, 00:04:41.821 "bdev_retry_count": 3, 00:04:41.821 "transport_ack_timeout": 0, 00:04:41.821 "ctrlr_loss_timeout_sec": 0, 00:04:41.821 "reconnect_delay_sec": 0, 00:04:41.821 "fast_io_fail_timeout_sec": 0, 00:04:41.821 "disable_auto_failback": false, 00:04:41.821 "generate_uuids": false, 00:04:41.821 "transport_tos": 0, 00:04:41.821 "nvme_error_stat": false, 00:04:41.821 "rdma_srq_size": 0, 00:04:41.821 "io_path_stat": false, 00:04:41.821 "allow_accel_sequence": false, 00:04:41.821 "rdma_max_cq_size": 0, 00:04:41.821 "rdma_cm_event_timeout_ms": 0, 00:04:41.821 "dhchap_digests": [ 00:04:41.821 "sha256", 00:04:41.821 "sha384", 00:04:41.821 "sha512" 00:04:41.821 ], 00:04:41.821 "dhchap_dhgroups": [ 00:04:41.821 "null", 00:04:41.821 "ffdhe2048", 00:04:41.821 "ffdhe3072", 00:04:41.821 "ffdhe4096", 00:04:41.821 "ffdhe6144", 00:04:41.821 "ffdhe8192" 00:04:41.821 ] 00:04:41.821 } 00:04:41.821 }, 00:04:41.821 { 00:04:41.821 "method": "bdev_nvme_set_hotplug", 00:04:41.821 "params": { 00:04:41.821 "period_us": 100000, 00:04:41.821 "enable": false 00:04:41.821 } 00:04:41.821 }, 00:04:41.821 { 00:04:41.821 "method": "bdev_wait_for_examine" 00:04:41.821 } 00:04:41.821 ] 00:04:41.821 }, 00:04:41.821 { 00:04:41.821 "subsystem": "scsi", 00:04:41.821 "config": null 00:04:41.821 }, 00:04:41.821 { 
00:04:41.821 "subsystem": "scheduler", 00:04:41.821 "config": [ 00:04:41.821 { 00:04:41.821 "method": "framework_set_scheduler", 00:04:41.821 "params": { 00:04:41.821 "name": "static" 00:04:41.821 } 00:04:41.821 } 00:04:41.821 ] 00:04:41.821 }, 00:04:41.821 { 00:04:41.821 "subsystem": "vhost_scsi", 00:04:41.821 "config": [] 00:04:41.821 }, 00:04:41.821 { 00:04:41.821 "subsystem": "vhost_blk", 00:04:41.821 "config": [] 00:04:41.821 }, 00:04:41.821 { 00:04:41.821 "subsystem": "ublk", 00:04:41.821 "config": [] 00:04:41.821 }, 00:04:41.821 { 00:04:41.821 "subsystem": "nbd", 00:04:41.821 "config": [] 00:04:41.821 }, 00:04:41.821 { 00:04:41.821 "subsystem": "nvmf", 00:04:41.821 "config": [ 00:04:41.821 { 00:04:41.821 "method": "nvmf_set_config", 00:04:41.821 "params": { 00:04:41.821 "discovery_filter": "match_any", 00:04:41.821 "admin_cmd_passthru": { 00:04:41.821 "identify_ctrlr": false 00:04:41.821 }, 00:04:41.821 "dhchap_digests": [ 00:04:41.821 "sha256", 00:04:41.821 "sha384", 00:04:41.821 "sha512" 00:04:41.821 ], 00:04:41.821 "dhchap_dhgroups": [ 00:04:41.821 "null", 00:04:41.821 "ffdhe2048", 00:04:41.821 "ffdhe3072", 00:04:41.821 "ffdhe4096", 00:04:41.821 "ffdhe6144", 00:04:41.821 "ffdhe8192" 00:04:41.821 ] 00:04:41.821 } 00:04:41.821 }, 00:04:41.821 { 00:04:41.821 "method": "nvmf_set_max_subsystems", 00:04:41.821 "params": { 00:04:41.821 "max_subsystems": 1024 00:04:41.821 } 00:04:41.821 }, 00:04:41.821 { 00:04:41.821 "method": "nvmf_set_crdt", 00:04:41.821 "params": { 00:04:41.821 "crdt1": 0, 00:04:41.821 "crdt2": 0, 00:04:41.821 "crdt3": 0 00:04:41.821 } 00:04:41.821 }, 00:04:41.821 { 00:04:41.821 "method": "nvmf_create_transport", 00:04:41.821 "params": { 00:04:41.821 "trtype": "TCP", 00:04:41.821 "max_queue_depth": 128, 00:04:41.821 "max_io_qpairs_per_ctrlr": 127, 00:04:41.821 "in_capsule_data_size": 4096, 00:04:41.821 "max_io_size": 131072, 00:04:41.821 "io_unit_size": 131072, 00:04:41.821 "max_aq_depth": 128, 00:04:41.821 "num_shared_buffers": 511, 
00:04:41.821 "buf_cache_size": 4294967295, 00:04:41.821 "dif_insert_or_strip": false, 00:04:41.821 "zcopy": false, 00:04:41.821 "c2h_success": true, 00:04:41.821 "sock_priority": 0, 00:04:41.821 "abort_timeout_sec": 1, 00:04:41.821 "ack_timeout": 0, 00:04:41.821 "data_wr_pool_size": 0 00:04:41.821 } 00:04:41.821 } 00:04:41.821 ] 00:04:41.821 }, 00:04:41.821 { 00:04:41.821 "subsystem": "iscsi", 00:04:41.821 "config": [ 00:04:41.821 { 00:04:41.821 "method": "iscsi_set_options", 00:04:41.821 "params": { 00:04:41.821 "node_base": "iqn.2016-06.io.spdk", 00:04:41.821 "max_sessions": 128, 00:04:41.821 "max_connections_per_session": 2, 00:04:41.821 "max_queue_depth": 64, 00:04:41.821 "default_time2wait": 2, 00:04:41.821 "default_time2retain": 20, 00:04:41.821 "first_burst_length": 8192, 00:04:41.821 "immediate_data": true, 00:04:41.821 "allow_duplicated_isid": false, 00:04:41.821 "error_recovery_level": 0, 00:04:41.821 "nop_timeout": 60, 00:04:41.821 "nop_in_interval": 30, 00:04:41.821 "disable_chap": false, 00:04:41.821 "require_chap": false, 00:04:41.821 "mutual_chap": false, 00:04:41.821 "chap_group": 0, 00:04:41.821 "max_large_datain_per_connection": 64, 00:04:41.821 "max_r2t_per_connection": 4, 00:04:41.821 "pdu_pool_size": 36864, 00:04:41.821 "immediate_data_pool_size": 16384, 00:04:41.821 "data_out_pool_size": 2048 00:04:41.821 } 00:04:41.821 } 00:04:41.821 ] 00:04:41.821 } 00:04:41.821 ] 00:04:41.821 } 00:04:41.821 02:24:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:41.821 02:24:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2817810 00:04:41.821 02:24:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2817810 ']' 00:04:41.821 02:24:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2817810 00:04:41.821 02:24:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:41.822 02:24:50 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:41.822 02:24:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2817810 00:04:41.822 02:24:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:41.822 02:24:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:41.822 02:24:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2817810' 00:04:41.822 killing process with pid 2817810 00:04:41.822 02:24:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2817810 00:04:41.822 02:24:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2817810 00:04:44.351 02:24:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2818233 00:04:44.351 02:24:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:44.351 02:24:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:49.616 02:24:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2818233 00:04:49.616 02:24:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2818233 ']' 00:04:49.616 02:24:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2818233 00:04:49.616 02:24:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:49.616 02:24:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:49.616 02:24:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2818233 00:04:49.616 02:24:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:49.616 02:24:57 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:49.616 02:24:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2818233' 00:04:49.616 killing process with pid 2818233 00:04:49.616 02:24:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2818233 00:04:49.616 02:24:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2818233 00:04:52.145 02:25:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:52.145 02:25:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:52.145 00:04:52.145 real 0m11.363s 00:04:52.145 user 0m10.853s 00:04:52.145 sys 0m1.090s 00:04:52.145 02:25:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:52.145 02:25:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:52.145 ************************************ 00:04:52.145 END TEST skip_rpc_with_json 00:04:52.145 ************************************ 00:04:52.145 02:25:00 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:52.146 02:25:00 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:52.146 02:25:00 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:52.146 02:25:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.146 ************************************ 00:04:52.146 START TEST skip_rpc_with_delay 00:04:52.146 ************************************ 00:04:52.146 02:25:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:52.146 02:25:00 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 
--no-rpc-server -m 0x1 --wait-for-rpc 00:04:52.146 02:25:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:52.146 02:25:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:52.146 02:25:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:52.146 02:25:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:52.146 02:25:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:52.146 02:25:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:52.146 02:25:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:52.146 02:25:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:52.146 02:25:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:52.146 02:25:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:52.146 02:25:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:52.146 [2024-11-17 02:25:00.177633] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:52.146 02:25:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:52.146 02:25:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:52.146 02:25:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:52.146 02:25:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:52.146 00:04:52.146 real 0m0.157s 00:04:52.146 user 0m0.083s 00:04:52.146 sys 0m0.073s 00:04:52.146 02:25:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:52.146 02:25:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:52.146 ************************************ 00:04:52.146 END TEST skip_rpc_with_delay 00:04:52.146 ************************************ 00:04:52.146 02:25:00 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:52.146 02:25:00 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:52.146 02:25:00 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:52.146 02:25:00 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:52.146 02:25:00 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:52.146 02:25:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.146 ************************************ 00:04:52.146 START TEST exit_on_failed_rpc_init 00:04:52.146 ************************************ 00:04:52.146 02:25:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:52.146 02:25:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2819215 00:04:52.146 02:25:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:52.146 02:25:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2819215 
00:04:52.146 02:25:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 2819215 ']' 00:04:52.146 02:25:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:52.146 02:25:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:52.146 02:25:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:52.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:52.146 02:25:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:52.146 02:25:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:52.146 [2024-11-17 02:25:00.384149] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:04:52.146 [2024-11-17 02:25:00.384282] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2819215 ] 00:04:52.146 [2024-11-17 02:25:00.530020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.405 [2024-11-17 02:25:00.666277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.338 02:25:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:53.338 02:25:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:53.338 02:25:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:53.338 02:25:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:53.338 
02:25:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:53.338 02:25:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:53.338 02:25:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:53.338 02:25:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:53.338 02:25:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:53.338 02:25:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:53.338 02:25:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:53.338 02:25:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:53.338 02:25:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:53.338 02:25:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:53.338 02:25:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:53.338 [2024-11-17 02:25:01.724385] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:04:53.338 [2024-11-17 02:25:01.724513] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2819356 ] 00:04:53.596 [2024-11-17 02:25:01.867117] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.596 [2024-11-17 02:25:02.003455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:53.596 [2024-11-17 02:25:02.003649] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:53.596 [2024-11-17 02:25:02.003691] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:53.596 [2024-11-17 02:25:02.003714] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:53.854 02:25:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:53.854 02:25:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:53.854 02:25:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:53.854 02:25:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:53.854 02:25:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:53.854 02:25:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:53.854 02:25:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:53.854 02:25:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2819215 00:04:53.854 02:25:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 2819215 ']' 00:04:53.854 02:25:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 2819215 00:04:53.855 02:25:02 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:53.855 02:25:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:53.855 02:25:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2819215 00:04:54.113 02:25:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:54.113 02:25:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:54.113 02:25:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2819215' 00:04:54.113 killing process with pid 2819215 00:04:54.113 02:25:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 2819215 00:04:54.113 02:25:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 2819215 00:04:56.644 00:04:56.644 real 0m4.470s 00:04:56.644 user 0m4.932s 00:04:56.644 sys 0m0.746s 00:04:56.644 02:25:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:56.644 02:25:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:56.644 ************************************ 00:04:56.644 END TEST exit_on_failed_rpc_init 00:04:56.644 ************************************ 00:04:56.644 02:25:04 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:56.644 00:04:56.644 real 0m23.773s 00:04:56.644 user 0m22.957s 00:04:56.644 sys 0m2.613s 00:04:56.644 02:25:04 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:56.644 02:25:04 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.644 ************************************ 00:04:56.644 END TEST skip_rpc 00:04:56.644 ************************************ 00:04:56.644 02:25:04 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:56.644 02:25:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:56.644 02:25:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:56.644 02:25:04 -- common/autotest_common.sh@10 -- # set +x 00:04:56.644 ************************************ 00:04:56.644 START TEST rpc_client 00:04:56.644 ************************************ 00:04:56.644 02:25:04 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:56.644 * Looking for test storage... 00:04:56.644 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:56.644 02:25:04 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:56.644 02:25:04 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:04:56.644 02:25:04 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:56.644 02:25:04 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:56.644 02:25:04 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:56.644 02:25:04 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:56.644 02:25:04 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:56.644 02:25:04 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:56.644 02:25:04 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:56.644 02:25:04 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:56.644 02:25:04 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:56.644 02:25:04 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:56.644 02:25:04 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:56.644 02:25:04 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:56.644 02:25:04 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:56.644 02:25:04 rpc_client -- scripts/common.sh@344 -- # case 
"$op" in 00:04:56.644 02:25:04 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:56.644 02:25:04 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:56.644 02:25:04 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:56.644 02:25:04 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:56.644 02:25:04 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:56.644 02:25:04 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:56.644 02:25:04 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:56.644 02:25:04 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:56.644 02:25:04 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:56.644 02:25:04 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:56.644 02:25:04 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:56.644 02:25:04 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:56.644 02:25:04 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:56.644 02:25:04 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:56.644 02:25:04 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:56.644 02:25:04 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:56.644 02:25:04 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:56.644 02:25:04 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:56.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.644 --rc genhtml_branch_coverage=1 00:04:56.644 --rc genhtml_function_coverage=1 00:04:56.644 --rc genhtml_legend=1 00:04:56.644 --rc geninfo_all_blocks=1 00:04:56.644 --rc geninfo_unexecuted_blocks=1 00:04:56.644 00:04:56.644 ' 00:04:56.644 02:25:04 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:56.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.645 --rc genhtml_branch_coverage=1 
00:04:56.645 --rc genhtml_function_coverage=1 00:04:56.645 --rc genhtml_legend=1 00:04:56.645 --rc geninfo_all_blocks=1 00:04:56.645 --rc geninfo_unexecuted_blocks=1 00:04:56.645 00:04:56.645 ' 00:04:56.645 02:25:04 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:56.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.645 --rc genhtml_branch_coverage=1 00:04:56.645 --rc genhtml_function_coverage=1 00:04:56.645 --rc genhtml_legend=1 00:04:56.645 --rc geninfo_all_blocks=1 00:04:56.645 --rc geninfo_unexecuted_blocks=1 00:04:56.645 00:04:56.645 ' 00:04:56.645 02:25:04 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:56.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.645 --rc genhtml_branch_coverage=1 00:04:56.645 --rc genhtml_function_coverage=1 00:04:56.645 --rc genhtml_legend=1 00:04:56.645 --rc geninfo_all_blocks=1 00:04:56.645 --rc geninfo_unexecuted_blocks=1 00:04:56.645 00:04:56.645 ' 00:04:56.645 02:25:04 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:56.645 OK 00:04:56.645 02:25:05 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:56.645 00:04:56.645 real 0m0.193s 00:04:56.645 user 0m0.121s 00:04:56.645 sys 0m0.081s 00:04:56.645 02:25:05 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:56.645 02:25:05 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:56.645 ************************************ 00:04:56.645 END TEST rpc_client 00:04:56.645 ************************************ 00:04:56.645 02:25:05 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:56.645 02:25:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:56.645 02:25:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:56.645 02:25:05 -- common/autotest_common.sh@10 
-- # set +x 00:04:56.645 ************************************ 00:04:56.645 START TEST json_config 00:04:56.645 ************************************ 00:04:56.645 02:25:05 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:56.645 02:25:05 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:56.645 02:25:05 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:04:56.645 02:25:05 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:56.905 02:25:05 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:56.905 02:25:05 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:56.905 02:25:05 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:56.905 02:25:05 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:56.905 02:25:05 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:56.905 02:25:05 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:56.905 02:25:05 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:56.905 02:25:05 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:56.905 02:25:05 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:56.905 02:25:05 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:56.905 02:25:05 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:56.905 02:25:05 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:56.905 02:25:05 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:56.905 02:25:05 json_config -- scripts/common.sh@345 -- # : 1 00:04:56.905 02:25:05 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:56.905 02:25:05 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:56.905 02:25:05 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:56.905 02:25:05 json_config -- scripts/common.sh@353 -- # local d=1 00:04:56.905 02:25:05 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:56.905 02:25:05 json_config -- scripts/common.sh@355 -- # echo 1 00:04:56.905 02:25:05 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:56.905 02:25:05 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:56.905 02:25:05 json_config -- scripts/common.sh@353 -- # local d=2 00:04:56.905 02:25:05 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:56.905 02:25:05 json_config -- scripts/common.sh@355 -- # echo 2 00:04:56.905 02:25:05 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:56.905 02:25:05 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:56.905 02:25:05 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:56.905 02:25:05 json_config -- scripts/common.sh@368 -- # return 0 00:04:56.905 02:25:05 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:56.905 02:25:05 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:56.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.905 --rc genhtml_branch_coverage=1 00:04:56.905 --rc genhtml_function_coverage=1 00:04:56.905 --rc genhtml_legend=1 00:04:56.905 --rc geninfo_all_blocks=1 00:04:56.905 --rc geninfo_unexecuted_blocks=1 00:04:56.905 00:04:56.905 ' 00:04:56.905 02:25:05 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:56.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.905 --rc genhtml_branch_coverage=1 00:04:56.905 --rc genhtml_function_coverage=1 00:04:56.905 --rc genhtml_legend=1 00:04:56.905 --rc geninfo_all_blocks=1 00:04:56.905 --rc geninfo_unexecuted_blocks=1 00:04:56.905 00:04:56.905 ' 00:04:56.905 02:25:05 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:56.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.905 --rc genhtml_branch_coverage=1 00:04:56.905 --rc genhtml_function_coverage=1 00:04:56.905 --rc genhtml_legend=1 00:04:56.905 --rc geninfo_all_blocks=1 00:04:56.905 --rc geninfo_unexecuted_blocks=1 00:04:56.905 00:04:56.905 ' 00:04:56.905 02:25:05 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:56.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.905 --rc genhtml_branch_coverage=1 00:04:56.905 --rc genhtml_function_coverage=1 00:04:56.905 --rc genhtml_legend=1 00:04:56.905 --rc geninfo_all_blocks=1 00:04:56.905 --rc geninfo_unexecuted_blocks=1 00:04:56.905 00:04:56.905 ' 00:04:56.905 02:25:05 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:56.905 02:25:05 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:56.905 02:25:05 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:56.905 02:25:05 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:56.905 02:25:05 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:56.905 02:25:05 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:56.905 02:25:05 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:56.905 02:25:05 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:56.905 02:25:05 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:56.905 02:25:05 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:56.905 02:25:05 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:56.905 02:25:05 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:56.905 02:25:05 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:56.905 02:25:05 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:56.905 02:25:05 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:56.905 02:25:05 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:56.905 02:25:05 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:56.905 02:25:05 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:56.905 02:25:05 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:56.905 02:25:05 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:56.905 02:25:05 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:56.905 02:25:05 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:56.905 02:25:05 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:56.905 02:25:05 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.905 02:25:05 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.905 02:25:05 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.905 02:25:05 json_config -- paths/export.sh@5 -- # export PATH 00:04:56.905 02:25:05 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.905 02:25:05 json_config -- nvmf/common.sh@51 -- # : 0 00:04:56.905 02:25:05 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:56.905 02:25:05 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:56.905 02:25:05 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:56.905 02:25:05 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:56.905 02:25:05 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:56.905 02:25:05 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:56.905 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:56.905 02:25:05 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:56.905 02:25:05 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:56.905 02:25:05 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:56.905 02:25:05 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:56.905 02:25:05 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:56.905 02:25:05 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:56.905 02:25:05 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:56.906 02:25:05 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:56.906 02:25:05 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:56.906 02:25:05 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:56.906 02:25:05 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:56.906 02:25:05 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:56.906 02:25:05 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:56.906 02:25:05 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:56.906 02:25:05 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:56.906 02:25:05 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:56.906 02:25:05 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:56.906 02:25:05 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:56.906 02:25:05 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:56.906 INFO: JSON configuration test init 00:04:56.906 02:25:05 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:56.906 02:25:05 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:56.906 02:25:05 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:56.906 02:25:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:56.906 02:25:05 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:56.906 02:25:05 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:56.906 02:25:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:56.906 02:25:05 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:56.906 02:25:05 json_config -- json_config/common.sh@9 -- # local app=target 00:04:56.906 02:25:05 json_config -- json_config/common.sh@10 -- # shift 00:04:56.906 02:25:05 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:56.906 02:25:05 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:56.906 02:25:05 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:56.906 02:25:05 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:56.906 02:25:05 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:56.906 02:25:05 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2819915 00:04:56.906 02:25:05 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:56.906 Waiting for target to run... 
00:04:56.906 02:25:05 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:56.906 02:25:05 json_config -- json_config/common.sh@25 -- # waitforlisten 2819915 /var/tmp/spdk_tgt.sock 00:04:56.906 02:25:05 json_config -- common/autotest_common.sh@835 -- # '[' -z 2819915 ']' 00:04:56.906 02:25:05 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:56.906 02:25:05 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:56.906 02:25:05 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:56.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:56.906 02:25:05 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:56.906 02:25:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:56.906 [2024-11-17 02:25:05.310200] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:04:56.906 [2024-11-17 02:25:05.310358] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2819915 ] 00:04:57.472 [2024-11-17 02:25:05.890622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.730 [2024-11-17 02:25:06.018944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.989 02:25:06 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:57.989 02:25:06 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:57.989 02:25:06 json_config -- json_config/common.sh@26 -- # echo '' 00:04:57.989 00:04:57.989 02:25:06 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:57.989 02:25:06 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:57.989 02:25:06 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:57.989 02:25:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:57.989 02:25:06 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:57.989 02:25:06 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:57.989 02:25:06 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:57.989 02:25:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:57.989 02:25:06 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:57.989 02:25:06 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:57.989 02:25:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:02.176 02:25:10 json_config -- json_config/json_config.sh@283 -- # 
tgt_check_notification_types 00:05:02.176 02:25:10 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:02.176 02:25:10 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:02.176 02:25:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:02.176 02:25:10 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:02.176 02:25:10 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:02.176 02:25:10 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:02.176 02:25:10 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:05:02.176 02:25:10 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:05:02.176 02:25:10 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:02.176 02:25:10 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:02.176 02:25:10 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:02.176 02:25:10 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:05:02.176 02:25:10 json_config -- json_config/json_config.sh@51 -- # local get_types 00:05:02.176 02:25:10 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:05:02.176 02:25:10 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:05:02.176 02:25:10 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:05:02.176 02:25:10 json_config -- json_config/json_config.sh@54 -- # sort 00:05:02.176 02:25:10 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:05:02.176 02:25:10 json_config -- 
json_config/json_config.sh@54 -- # type_diff= 00:05:02.176 02:25:10 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:05:02.176 02:25:10 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:05:02.176 02:25:10 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:02.176 02:25:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:02.176 02:25:10 json_config -- json_config/json_config.sh@62 -- # return 0 00:05:02.176 02:25:10 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:05:02.176 02:25:10 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:05:02.176 02:25:10 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:05:02.176 02:25:10 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:05:02.176 02:25:10 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:05:02.176 02:25:10 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:05:02.176 02:25:10 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:02.176 02:25:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:02.176 02:25:10 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:02.176 02:25:10 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:05:02.176 02:25:10 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:05:02.176 02:25:10 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:02.176 02:25:10 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:02.440 MallocForNvmf0 00:05:02.440 02:25:10 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 
00:05:02.440 02:25:10 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:02.727 MallocForNvmf1 00:05:02.727 02:25:11 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:02.727 02:25:11 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:03.028 [2024-11-17 02:25:11.342444] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:03.028 02:25:11 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:03.028 02:25:11 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:03.286 02:25:11 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:03.286 02:25:11 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:03.544 02:25:11 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:03.544 02:25:11 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:03.801 02:25:12 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:03.801 02:25:12 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:04.059 [2024-11-17 02:25:12.426142] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:04.060 02:25:12 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:04.060 02:25:12 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:04.060 02:25:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:04.060 02:25:12 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:04.060 02:25:12 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:04.060 02:25:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:04.060 02:25:12 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:05:04.060 02:25:12 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:04.060 02:25:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:04.318 MallocBdevForConfigChangeCheck 00:05:04.318 02:25:12 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:04.318 02:25:12 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:04.318 02:25:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:04.575 02:25:12 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:04.575 02:25:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:04.833 02:25:13 json_config -- json_config/json_config.sh@368 -- # 
echo 'INFO: shutting down applications...' 00:05:04.833 INFO: shutting down applications... 00:05:04.833 02:25:13 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:04.833 02:25:13 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:04.833 02:25:13 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:04.833 02:25:13 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:06.731 Calling clear_iscsi_subsystem 00:05:06.731 Calling clear_nvmf_subsystem 00:05:06.731 Calling clear_nbd_subsystem 00:05:06.731 Calling clear_ublk_subsystem 00:05:06.731 Calling clear_vhost_blk_subsystem 00:05:06.731 Calling clear_vhost_scsi_subsystem 00:05:06.731 Calling clear_bdev_subsystem 00:05:06.731 02:25:14 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:06.731 02:25:14 json_config -- json_config/json_config.sh@350 -- # count=100 00:05:06.731 02:25:14 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:05:06.731 02:25:14 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:06.731 02:25:14 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:06.731 02:25:14 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:06.989 02:25:15 json_config -- json_config/json_config.sh@352 -- # break 00:05:06.989 02:25:15 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:05:06.989 02:25:15 json_config -- json_config/json_config.sh@376 -- # 
json_config_test_shutdown_app target 00:05:06.989 02:25:15 json_config -- json_config/common.sh@31 -- # local app=target 00:05:06.989 02:25:15 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:06.989 02:25:15 json_config -- json_config/common.sh@35 -- # [[ -n 2819915 ]] 00:05:06.990 02:25:15 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2819915 00:05:06.990 02:25:15 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:06.990 02:25:15 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:06.990 02:25:15 json_config -- json_config/common.sh@41 -- # kill -0 2819915 00:05:06.990 02:25:15 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:07.556 02:25:15 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:07.557 02:25:15 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:07.557 02:25:15 json_config -- json_config/common.sh@41 -- # kill -0 2819915 00:05:07.557 02:25:15 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:08.124 02:25:16 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:08.124 02:25:16 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:08.124 02:25:16 json_config -- json_config/common.sh@41 -- # kill -0 2819915 00:05:08.124 02:25:16 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:08.383 02:25:16 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:08.383 02:25:16 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:08.383 02:25:16 json_config -- json_config/common.sh@41 -- # kill -0 2819915 00:05:08.383 02:25:16 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:08.383 02:25:16 json_config -- json_config/common.sh@43 -- # break 00:05:08.383 02:25:16 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:08.383 02:25:16 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:08.383 SPDK target shutdown done 00:05:08.383 02:25:16 json_config -- 
json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:05:08.383 INFO: relaunching applications... 00:05:08.383 02:25:16 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:08.383 02:25:16 json_config -- json_config/common.sh@9 -- # local app=target 00:05:08.383 02:25:16 json_config -- json_config/common.sh@10 -- # shift 00:05:08.383 02:25:16 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:08.383 02:25:16 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:08.383 02:25:16 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:08.383 02:25:16 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:08.383 02:25:16 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:08.383 02:25:16 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2821470 00:05:08.383 02:25:16 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:08.383 02:25:16 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:08.383 Waiting for target to run... 00:05:08.383 02:25:16 json_config -- json_config/common.sh@25 -- # waitforlisten 2821470 /var/tmp/spdk_tgt.sock 00:05:08.383 02:25:16 json_config -- common/autotest_common.sh@835 -- # '[' -z 2821470 ']' 00:05:08.383 02:25:16 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:08.383 02:25:16 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:08.383 02:25:16 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:05:08.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:08.383 02:25:16 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:08.383 02:25:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:08.642 [2024-11-17 02:25:16.916563] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:05:08.642 [2024-11-17 02:25:16.916698] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2821470 ] 00:05:09.209 [2024-11-17 02:25:17.514088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.209 [2024-11-17 02:25:17.646136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.395 [2024-11-17 02:25:21.431521] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:13.395 [2024-11-17 02:25:21.464061] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:13.395 02:25:21 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:13.395 02:25:21 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:13.395 02:25:21 json_config -- json_config/common.sh@26 -- # echo '' 00:05:13.395 00:05:13.395 02:25:21 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:13.395 02:25:21 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:13.395 INFO: Checking if target configuration is the same... 
00:05:13.395 02:25:21 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:13.395 02:25:21 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:13.395 02:25:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:13.395 + '[' 2 -ne 2 ']' 00:05:13.395 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:13.395 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:13.395 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:13.395 +++ basename /dev/fd/62 00:05:13.395 ++ mktemp /tmp/62.XXX 00:05:13.395 + tmp_file_1=/tmp/62.V2b 00:05:13.395 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:13.395 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:13.395 + tmp_file_2=/tmp/spdk_tgt_config.json.hgK 00:05:13.395 + ret=0 00:05:13.395 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:13.653 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:13.653 + diff -u /tmp/62.V2b /tmp/spdk_tgt_config.json.hgK 00:05:13.653 + echo 'INFO: JSON config files are the same' 00:05:13.653 INFO: JSON config files are the same 00:05:13.653 + rm /tmp/62.V2b /tmp/spdk_tgt_config.json.hgK 00:05:13.653 + exit 0 00:05:13.653 02:25:21 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:13.653 02:25:21 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:13.653 INFO: changing configuration and checking if this can be detected... 
00:05:13.654 02:25:21 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:13.654 02:25:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:13.911 02:25:22 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:13.911 02:25:22 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:13.911 02:25:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:13.911 + '[' 2 -ne 2 ']' 00:05:13.911 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:13.911 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:05:13.911 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:13.911 +++ basename /dev/fd/62 00:05:13.911 ++ mktemp /tmp/62.XXX 00:05:13.911 + tmp_file_1=/tmp/62.b4z 00:05:13.911 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:13.911 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:13.911 + tmp_file_2=/tmp/spdk_tgt_config.json.kck 00:05:13.911 + ret=0 00:05:13.911 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:14.477 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:14.477 + diff -u /tmp/62.b4z /tmp/spdk_tgt_config.json.kck 00:05:14.477 + ret=1 00:05:14.477 + echo '=== Start of file: /tmp/62.b4z ===' 00:05:14.477 + cat /tmp/62.b4z 00:05:14.477 + echo '=== End of file: /tmp/62.b4z ===' 00:05:14.477 + echo '' 00:05:14.477 + echo '=== Start of file: /tmp/spdk_tgt_config.json.kck ===' 00:05:14.477 + cat /tmp/spdk_tgt_config.json.kck 00:05:14.477 + echo '=== End of file: /tmp/spdk_tgt_config.json.kck ===' 00:05:14.477 + echo '' 00:05:14.477 + rm /tmp/62.b4z /tmp/spdk_tgt_config.json.kck 00:05:14.477 + exit 1 00:05:14.477 02:25:22 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:05:14.477 INFO: configuration change detected. 
00:05:14.477 02:25:22 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:14.477 02:25:22 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:14.477 02:25:22 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:14.477 02:25:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:14.477 02:25:22 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:14.477 02:25:22 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:14.477 02:25:22 json_config -- json_config/json_config.sh@324 -- # [[ -n 2821470 ]] 00:05:14.477 02:25:22 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:14.477 02:25:22 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:14.477 02:25:22 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:14.477 02:25:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:14.477 02:25:22 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:14.477 02:25:22 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:14.477 02:25:22 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:14.477 02:25:22 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:14.477 02:25:22 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:14.477 02:25:22 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:14.477 02:25:22 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:14.477 02:25:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:14.477 02:25:22 json_config -- json_config/json_config.sh@330 -- # killprocess 2821470 00:05:14.477 02:25:22 json_config -- common/autotest_common.sh@954 -- # '[' -z 2821470 ']' 00:05:14.477 02:25:22 json_config -- common/autotest_common.sh@958 -- # kill -0 
2821470 00:05:14.477 02:25:22 json_config -- common/autotest_common.sh@959 -- # uname 00:05:14.477 02:25:22 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:14.477 02:25:22 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2821470 00:05:14.477 02:25:22 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:14.477 02:25:22 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:14.477 02:25:22 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2821470' 00:05:14.477 killing process with pid 2821470 00:05:14.477 02:25:22 json_config -- common/autotest_common.sh@973 -- # kill 2821470 00:05:14.477 02:25:22 json_config -- common/autotest_common.sh@978 -- # wait 2821470 00:05:17.009 02:25:25 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:17.009 02:25:25 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:17.009 02:25:25 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:17.009 02:25:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:17.009 02:25:25 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:17.009 02:25:25 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:17.009 INFO: Success 00:05:17.009 00:05:17.009 real 0m20.162s 00:05:17.009 user 0m21.152s 00:05:17.009 sys 0m3.325s 00:05:17.009 02:25:25 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:17.009 02:25:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:17.009 ************************************ 00:05:17.009 END TEST json_config 00:05:17.009 ************************************ 00:05:17.009 02:25:25 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:17.009 02:25:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:17.009 02:25:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:17.009 02:25:25 -- common/autotest_common.sh@10 -- # set +x 00:05:17.009 ************************************ 00:05:17.009 START TEST json_config_extra_key 00:05:17.009 ************************************ 00:05:17.009 02:25:25 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:17.009 02:25:25 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:17.009 02:25:25 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:05:17.009 02:25:25 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:17.009 02:25:25 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:17.009 02:25:25 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:17.009 02:25:25 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:17.009 02:25:25 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:17.009 02:25:25 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:17.009 02:25:25 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:17.009 02:25:25 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:17.009 02:25:25 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:17.009 02:25:25 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:17.009 02:25:25 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:17.009 02:25:25 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:17.009 02:25:25 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:05:17.009 02:25:25 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:17.009 02:25:25 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:17.009 02:25:25 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:17.009 02:25:25 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:17.009 02:25:25 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:17.009 02:25:25 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:17.009 02:25:25 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:17.009 02:25:25 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:17.009 02:25:25 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:17.009 02:25:25 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:17.009 02:25:25 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:17.009 02:25:25 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:17.009 02:25:25 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:17.009 02:25:25 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:17.009 02:25:25 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:17.009 02:25:25 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:17.009 02:25:25 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:17.009 02:25:25 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:17.009 02:25:25 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:17.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.009 --rc genhtml_branch_coverage=1 00:05:17.009 --rc genhtml_function_coverage=1 00:05:17.009 --rc genhtml_legend=1 00:05:17.009 --rc geninfo_all_blocks=1 
00:05:17.009 --rc geninfo_unexecuted_blocks=1 00:05:17.009 00:05:17.009 ' 00:05:17.009 02:25:25 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:17.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.009 --rc genhtml_branch_coverage=1 00:05:17.009 --rc genhtml_function_coverage=1 00:05:17.009 --rc genhtml_legend=1 00:05:17.009 --rc geninfo_all_blocks=1 00:05:17.009 --rc geninfo_unexecuted_blocks=1 00:05:17.009 00:05:17.009 ' 00:05:17.009 02:25:25 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:17.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.009 --rc genhtml_branch_coverage=1 00:05:17.009 --rc genhtml_function_coverage=1 00:05:17.009 --rc genhtml_legend=1 00:05:17.009 --rc geninfo_all_blocks=1 00:05:17.009 --rc geninfo_unexecuted_blocks=1 00:05:17.009 00:05:17.009 ' 00:05:17.009 02:25:25 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:17.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.009 --rc genhtml_branch_coverage=1 00:05:17.009 --rc genhtml_function_coverage=1 00:05:17.009 --rc genhtml_legend=1 00:05:17.009 --rc geninfo_all_blocks=1 00:05:17.009 --rc geninfo_unexecuted_blocks=1 00:05:17.009 00:05:17.010 ' 00:05:17.010 02:25:25 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:17.010 02:25:25 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:17.010 02:25:25 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:17.010 02:25:25 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:17.010 02:25:25 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:17.010 02:25:25 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:17.010 02:25:25 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
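The `cmp_versions 1.15 '<' 2` trace a few lines above splits each version string on `.-:` into an array and compares field by field. A minimal sketch of that idea under the assumption of plain dot-separated numeric versions (`ver_lt` is a hypothetical name, not the helper in scripts/common.sh):

```shell
# Return 0 (true) if version $1 is strictly less than version $2.
# Splits on dots via IFS and compares each numeric field in turn,
# padding the shorter version with zeros.
ver_lt() {
  local IFS=.
  local -a a=($1) b=($2)
  local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
  for (( i = 0; i < n; i++ )); do
    local x=${a[i]:-0} y=${b[i]:-0}
    (( x < y )) && return 0
    (( x > y )) && return 1
  done
  return 1   # equal versions are not less-than
}

ver_lt 1.15 2 && echo "1.15 < 2"
ver_lt 2.39.2 2.40 && echo "2.39.2 < 2.40"
```

This matches the outcome in the trace: lcov 1.15 compares less than 2, so the old-lcov `--rc lcov_*` option spelling is exported.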
00:05:17.010 02:25:25 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:17.010 02:25:25 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:17.010 02:25:25 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:17.010 02:25:25 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:17.010 02:25:25 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:17.010 02:25:25 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:17.010 02:25:25 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:17.010 02:25:25 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:17.010 02:25:25 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:17.010 02:25:25 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:17.010 02:25:25 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:17.010 02:25:25 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:17.010 02:25:25 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:17.010 02:25:25 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:17.010 02:25:25 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:17.010 02:25:25 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:17.010 02:25:25 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.010 02:25:25 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.010 02:25:25 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.010 02:25:25 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:17.010 02:25:25 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.010 02:25:25 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:17.010 02:25:25 json_config_extra_key -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:17.010 02:25:25 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:17.010 02:25:25 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:17.010 02:25:25 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:17.010 02:25:25 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:17.010 02:25:25 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:17.010 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:17.010 02:25:25 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:17.010 02:25:25 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:17.010 02:25:25 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:17.010 02:25:25 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:17.010 02:25:25 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:17.010 02:25:25 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:17.010 02:25:25 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:17.010 02:25:25 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:17.010 02:25:25 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:17.010 02:25:25 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:17.010 02:25:25 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:17.010 02:25:25 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:17.010 02:25:25 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:17.010 02:25:25 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:17.010 INFO: launching applications... 00:05:17.010 02:25:25 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:17.010 02:25:25 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:17.010 02:25:25 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:17.010 02:25:25 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:17.010 02:25:25 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:17.010 02:25:25 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:17.010 02:25:25 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:17.010 02:25:25 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:17.010 02:25:25 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2822537 00:05:17.010 02:25:25 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:17.010 02:25:25 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:17.010 Waiting for target to run... 
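The "Waiting for target to run..." step above polls until the freshly launched spdk_tgt is reachable on /var/tmp/spdk_tgt.sock. A hypothetical sketch of that bounded-retry polling idea (the real waitforlisten in autotest_common.sh does more; this only checks that the socket path has appeared):

```shell
# Poll with a retry budget until a UNIX domain socket path exists.
# wait_for_socket is an illustrative name, not the real helper.
wait_for_socket() {
  local sock=$1 retries=${2:-100}
  while (( retries-- > 0 )); do
    [ -S "$sock" ] && return 0   # -S: path exists and is a socket
    sleep 0.1
  done
  return 1
}

# Demo against a path that never appears: the budget runs out quickly.
if ! wait_for_socket /tmp/no_such_target.sock 3; then
  echo "timed out waiting for /tmp/no_such_target.sock"
fi
```

The bounded budget matters in CI: a target that fails to start makes the test fail fast instead of hanging the job.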
00:05:17.010 02:25:25 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2822537 /var/tmp/spdk_tgt.sock 00:05:17.010 02:25:25 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 2822537 ']' 00:05:17.010 02:25:25 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:17.010 02:25:25 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:17.010 02:25:25 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:17.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:17.010 02:25:25 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:17.010 02:25:25 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:17.268 [2024-11-17 02:25:25.514254] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:05:17.268 [2024-11-17 02:25:25.514416] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2822537 ] 00:05:17.834 [2024-11-17 02:25:26.127721] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.834 [2024-11-17 02:25:26.257228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.768 02:25:27 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:18.768 02:25:27 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:18.768 02:25:27 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:18.768 00:05:18.768 02:25:27 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:05:18.768 INFO: shutting down applications... 00:05:18.768 02:25:27 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:18.768 02:25:27 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:18.768 02:25:27 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:18.768 02:25:27 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2822537 ]] 00:05:18.768 02:25:27 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2822537 00:05:18.768 02:25:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:18.768 02:25:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:18.768 02:25:27 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2822537 00:05:18.768 02:25:27 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:19.334 02:25:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:19.334 02:25:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:19.334 02:25:27 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2822537 00:05:19.334 02:25:27 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:19.592 02:25:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:19.592 02:25:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:19.592 02:25:28 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2822537 00:05:19.592 02:25:28 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:20.159 02:25:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:20.159 02:25:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:20.159 02:25:28 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2822537 00:05:20.159 02:25:28 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:20.725 
02:25:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:20.725 02:25:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:20.725 02:25:29 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2822537 00:05:20.725 02:25:29 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:21.296 02:25:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:21.296 02:25:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:21.296 02:25:29 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2822537 00:05:21.296 02:25:29 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:21.863 02:25:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:21.863 02:25:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:21.863 02:25:30 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2822537 00:05:21.863 02:25:30 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:21.863 02:25:30 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:21.863 02:25:30 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:21.863 02:25:30 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:21.863 SPDK target shutdown done 00:05:21.863 02:25:30 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:21.863 Success 00:05:21.863 00:05:21.863 real 0m4.775s 00:05:21.863 user 0m4.242s 00:05:21.863 sys 0m0.850s 00:05:21.863 02:25:30 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:21.863 02:25:30 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:21.863 ************************************ 00:05:21.863 END TEST json_config_extra_key 00:05:21.863 ************************************ 00:05:21.863 02:25:30 -- spdk/autotest.sh@161 -- # run_test 
alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:21.863 02:25:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:21.863 02:25:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:21.863 02:25:30 -- common/autotest_common.sh@10 -- # set +x 00:05:21.863 ************************************ 00:05:21.863 START TEST alias_rpc 00:05:21.863 ************************************ 00:05:21.863 02:25:30 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:21.863 * Looking for test storage... 00:05:21.863 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:21.863 02:25:30 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:21.863 02:25:30 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:21.863 02:25:30 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:21.863 02:25:30 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:21.863 02:25:30 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:21.863 02:25:30 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:21.863 02:25:30 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:21.863 02:25:30 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:21.863 02:25:30 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:21.863 02:25:30 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:21.863 02:25:30 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:21.863 02:25:30 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:21.863 02:25:30 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:21.863 02:25:30 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:21.863 02:25:30 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:21.863 02:25:30 alias_rpc -- 
scripts/common.sh@344 -- # case "$op" in 00:05:21.863 02:25:30 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:21.863 02:25:30 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:21.863 02:25:30 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:21.863 02:25:30 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:21.863 02:25:30 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:21.863 02:25:30 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:21.863 02:25:30 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:21.863 02:25:30 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:21.863 02:25:30 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:21.863 02:25:30 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:21.863 02:25:30 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:21.863 02:25:30 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:21.863 02:25:30 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:21.863 02:25:30 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:21.863 02:25:30 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:21.863 02:25:30 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:21.863 02:25:30 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:21.863 02:25:30 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:21.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.863 --rc genhtml_branch_coverage=1 00:05:21.863 --rc genhtml_function_coverage=1 00:05:21.863 --rc genhtml_legend=1 00:05:21.863 --rc geninfo_all_blocks=1 00:05:21.863 --rc geninfo_unexecuted_blocks=1 00:05:21.863 00:05:21.863 ' 00:05:21.863 02:25:30 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:21.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.863 --rc 
genhtml_branch_coverage=1 00:05:21.863 --rc genhtml_function_coverage=1 00:05:21.863 --rc genhtml_legend=1 00:05:21.863 --rc geninfo_all_blocks=1 00:05:21.863 --rc geninfo_unexecuted_blocks=1 00:05:21.863 00:05:21.863 ' 00:05:21.863 02:25:30 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:21.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.863 --rc genhtml_branch_coverage=1 00:05:21.863 --rc genhtml_function_coverage=1 00:05:21.863 --rc genhtml_legend=1 00:05:21.863 --rc geninfo_all_blocks=1 00:05:21.863 --rc geninfo_unexecuted_blocks=1 00:05:21.863 00:05:21.863 ' 00:05:21.863 02:25:30 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:21.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.863 --rc genhtml_branch_coverage=1 00:05:21.863 --rc genhtml_function_coverage=1 00:05:21.863 --rc genhtml_legend=1 00:05:21.863 --rc geninfo_all_blocks=1 00:05:21.863 --rc geninfo_unexecuted_blocks=1 00:05:21.863 00:05:21.863 ' 00:05:21.863 02:25:30 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:21.863 02:25:30 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2823252 00:05:21.863 02:25:30 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:21.863 02:25:30 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2823252 00:05:21.863 02:25:30 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 2823252 ']' 00:05:21.863 02:25:30 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.863 02:25:30 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:21.863 02:25:30 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:21.863 02:25:30 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:21.863 02:25:30 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.122 [2024-11-17 02:25:30.329337] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:05:22.122 [2024-11-17 02:25:30.329472] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2823252 ] 00:05:22.122 [2024-11-17 02:25:30.475046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.380 [2024-11-17 02:25:30.613164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.315 02:25:31 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:23.315 02:25:31 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:23.315 02:25:31 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:23.573 02:25:31 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2823252 00:05:23.573 02:25:31 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 2823252 ']' 00:05:23.573 02:25:31 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 2823252 00:05:23.573 02:25:31 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:23.573 02:25:31 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:23.573 02:25:31 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2823252 00:05:23.573 02:25:31 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:23.573 02:25:31 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:23.573 02:25:31 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2823252' 00:05:23.573 killing process with pid 2823252 00:05:23.573 02:25:31 
alias_rpc -- common/autotest_common.sh@973 -- # kill 2823252 00:05:23.573 02:25:31 alias_rpc -- common/autotest_common.sh@978 -- # wait 2823252 00:05:26.103 00:05:26.103 real 0m4.249s 00:05:26.103 user 0m4.400s 00:05:26.103 sys 0m0.663s 00:05:26.103 02:25:34 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.103 02:25:34 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.103 ************************************ 00:05:26.103 END TEST alias_rpc 00:05:26.103 ************************************ 00:05:26.103 02:25:34 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:26.103 02:25:34 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:26.103 02:25:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:26.103 02:25:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.103 02:25:34 -- common/autotest_common.sh@10 -- # set +x 00:05:26.103 ************************************ 00:05:26.103 START TEST spdkcli_tcp 00:05:26.103 ************************************ 00:05:26.103 02:25:34 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:26.103 * Looking for test storage... 
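The killprocess trace above first confirms the PID is still alive (`kill -0`), reads its command name with `ps --no-headers -o comm=` so a recycled PID is never signalled blindly, then kills and reaps it. A hedged sketch of that pattern (the real helper's sudo special-case is dropped here; `killprocess_sketch` is an illustrative name):

```shell
# Verify-then-kill: check liveness, look up the command name, signal,
# and reap. Mirrors the killprocess steps in the trace above.
killprocess_sketch() {
  local pid=$1
  kill -0 "$pid" 2>/dev/null || return 0     # already gone: nothing to do
  local name
  name=$(ps --no-headers -o comm= "$pid")    # comm of the live pid
  echo "killing process with pid $pid ($name)"
  kill "$pid"
  wait "$pid" 2>/dev/null || true            # reap if it is our child
}

sleep 30 &
killprocess_sketch $!
```

The `ps -o comm=` check is what lets the trace print `process_name=reactor_0` and skip the sudo branch before delivering the signal.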
00:05:26.103 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:26.103 02:25:34 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:26.103 02:25:34 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:26.103 02:25:34 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:26.103 02:25:34 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:26.103 02:25:34 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:26.103 02:25:34 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:26.103 02:25:34 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:26.103 02:25:34 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:26.103 02:25:34 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:26.103 02:25:34 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:26.103 02:25:34 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:26.103 02:25:34 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:26.103 02:25:34 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:26.103 02:25:34 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:26.103 02:25:34 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:26.103 02:25:34 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:26.103 02:25:34 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:26.103 02:25:34 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:26.103 02:25:34 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:26.103 02:25:34 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:26.103 02:25:34 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:26.103 02:25:34 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:26.103 02:25:34 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:26.103 02:25:34 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:26.103 02:25:34 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:26.103 02:25:34 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:26.103 02:25:34 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:26.103 02:25:34 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:26.103 02:25:34 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:26.103 02:25:34 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:26.103 02:25:34 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:26.103 02:25:34 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:26.103 02:25:34 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:26.103 02:25:34 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:26.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.103 --rc genhtml_branch_coverage=1 00:05:26.103 --rc genhtml_function_coverage=1 00:05:26.103 --rc genhtml_legend=1 00:05:26.103 --rc geninfo_all_blocks=1 00:05:26.103 --rc geninfo_unexecuted_blocks=1 00:05:26.103 00:05:26.103 ' 00:05:26.103 02:25:34 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:26.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.103 --rc genhtml_branch_coverage=1 00:05:26.103 --rc genhtml_function_coverage=1 00:05:26.103 --rc genhtml_legend=1 00:05:26.103 --rc geninfo_all_blocks=1 00:05:26.103 --rc geninfo_unexecuted_blocks=1 00:05:26.103 00:05:26.103 ' 00:05:26.103 02:25:34 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:26.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.103 --rc genhtml_branch_coverage=1 00:05:26.103 --rc genhtml_function_coverage=1 00:05:26.103 --rc genhtml_legend=1 00:05:26.103 --rc geninfo_all_blocks=1 00:05:26.103 --rc geninfo_unexecuted_blocks=1 00:05:26.103 00:05:26.103 ' 00:05:26.103 02:25:34 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:26.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.103 --rc genhtml_branch_coverage=1 00:05:26.103 --rc genhtml_function_coverage=1 00:05:26.103 --rc genhtml_legend=1 00:05:26.103 --rc geninfo_all_blocks=1 00:05:26.103 --rc geninfo_unexecuted_blocks=1 00:05:26.103 00:05:26.103 ' 00:05:26.103 02:25:34 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:26.103 02:25:34 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:26.103 02:25:34 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:26.103 02:25:34 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:26.103 02:25:34 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:26.103 02:25:34 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:26.103 02:25:34 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:26.103 02:25:34 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:26.103 02:25:34 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:26.103 02:25:34 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2823742 00:05:26.103 02:25:34 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:26.103 02:25:34 spdkcli_tcp -- 
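(Editor's note: the xtrace above steps through `lt 1.15 2` / `cmp_versions` from scripts/common.sh — split both versions on `.`, `-`, `:`, then compare field by field, padding the shorter one with zeros. A hypothetical re-sketch of that logic in Python, not the script's actual implementation; helper names are assumptions:)

```python
# Re-sketch of the "lt 1.15 2" comparison traced above (cmp_versions /
# decimal in scripts/common.sh). Names and details are illustrative.
import re

def split_ver(v):
    # bash splits with IFS=.-: ; decimal() coerces non-numeric fields to 0
    return [int(p) if p.isdigit() else 0 for p in re.split(r"[.\-:]", v)]

def lt(v1, v2):
    """True if v1 < v2; shorter version is zero-padded, like the
    (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) loop in the trace."""
    a, b = split_ver(v1), split_ver(v2)
    for i in range(max(len(a), len(b))):
        d1 = a[i] if i < len(a) else 0
        d2 = b[i] if i < len(b) else 0
        if d1 != d2:
            return d1 < d2
    return False  # equal versions are not less-than

print(lt("1.15", "2"))  # the branch the trace takes before enabling lcov
```

In the log this check gates whether the lcov coverage options get exported (lcov 1.15 < 2, so the pre-2.x `--rc lcov_*` option spelling is used).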
spdkcli/tcp.sh@27 -- # waitforlisten 2823742 00:05:26.103 02:25:34 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 2823742 ']' 00:05:26.103 02:25:34 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.103 02:25:34 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:26.103 02:25:34 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:26.103 02:25:34 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:26.103 02:25:34 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:26.362 [2024-11-17 02:25:34.641007] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:05:26.362 [2024-11-17 02:25:34.641176] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2823742 ] 00:05:26.362 [2024-11-17 02:25:34.778299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:26.619 [2024-11-17 02:25:34.916402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.619 [2024-11-17 02:25:34.916404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:27.554 02:25:35 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:27.554 02:25:35 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:27.554 02:25:35 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2823990 00:05:27.554 02:25:35 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:27.554 02:25:35 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 
127.0.0.1 -p 9998 rpc_get_methods 00:05:27.813 [ 00:05:27.813 "bdev_malloc_delete", 00:05:27.813 "bdev_malloc_create", 00:05:27.813 "bdev_null_resize", 00:05:27.813 "bdev_null_delete", 00:05:27.813 "bdev_null_create", 00:05:27.813 "bdev_nvme_cuse_unregister", 00:05:27.813 "bdev_nvme_cuse_register", 00:05:27.813 "bdev_opal_new_user", 00:05:27.813 "bdev_opal_set_lock_state", 00:05:27.813 "bdev_opal_delete", 00:05:27.813 "bdev_opal_get_info", 00:05:27.813 "bdev_opal_create", 00:05:27.813 "bdev_nvme_opal_revert", 00:05:27.813 "bdev_nvme_opal_init", 00:05:27.813 "bdev_nvme_send_cmd", 00:05:27.813 "bdev_nvme_set_keys", 00:05:27.813 "bdev_nvme_get_path_iostat", 00:05:27.813 "bdev_nvme_get_mdns_discovery_info", 00:05:27.813 "bdev_nvme_stop_mdns_discovery", 00:05:27.813 "bdev_nvme_start_mdns_discovery", 00:05:27.813 "bdev_nvme_set_multipath_policy", 00:05:27.813 "bdev_nvme_set_preferred_path", 00:05:27.813 "bdev_nvme_get_io_paths", 00:05:27.813 "bdev_nvme_remove_error_injection", 00:05:27.813 "bdev_nvme_add_error_injection", 00:05:27.813 "bdev_nvme_get_discovery_info", 00:05:27.813 "bdev_nvme_stop_discovery", 00:05:27.813 "bdev_nvme_start_discovery", 00:05:27.813 "bdev_nvme_get_controller_health_info", 00:05:27.813 "bdev_nvme_disable_controller", 00:05:27.813 "bdev_nvme_enable_controller", 00:05:27.813 "bdev_nvme_reset_controller", 00:05:27.813 "bdev_nvme_get_transport_statistics", 00:05:27.813 "bdev_nvme_apply_firmware", 00:05:27.813 "bdev_nvme_detach_controller", 00:05:27.813 "bdev_nvme_get_controllers", 00:05:27.813 "bdev_nvme_attach_controller", 00:05:27.813 "bdev_nvme_set_hotplug", 00:05:27.813 "bdev_nvme_set_options", 00:05:27.813 "bdev_passthru_delete", 00:05:27.813 "bdev_passthru_create", 00:05:27.813 "bdev_lvol_set_parent_bdev", 00:05:27.813 "bdev_lvol_set_parent", 00:05:27.813 "bdev_lvol_check_shallow_copy", 00:05:27.813 "bdev_lvol_start_shallow_copy", 00:05:27.813 "bdev_lvol_grow_lvstore", 00:05:27.813 "bdev_lvol_get_lvols", 00:05:27.813 "bdev_lvol_get_lvstores", 
00:05:27.813 "bdev_lvol_delete", 00:05:27.813 "bdev_lvol_set_read_only", 00:05:27.813 "bdev_lvol_resize", 00:05:27.813 "bdev_lvol_decouple_parent", 00:05:27.813 "bdev_lvol_inflate", 00:05:27.813 "bdev_lvol_rename", 00:05:27.813 "bdev_lvol_clone_bdev", 00:05:27.813 "bdev_lvol_clone", 00:05:27.813 "bdev_lvol_snapshot", 00:05:27.813 "bdev_lvol_create", 00:05:27.813 "bdev_lvol_delete_lvstore", 00:05:27.813 "bdev_lvol_rename_lvstore", 00:05:27.813 "bdev_lvol_create_lvstore", 00:05:27.813 "bdev_raid_set_options", 00:05:27.813 "bdev_raid_remove_base_bdev", 00:05:27.813 "bdev_raid_add_base_bdev", 00:05:27.813 "bdev_raid_delete", 00:05:27.813 "bdev_raid_create", 00:05:27.813 "bdev_raid_get_bdevs", 00:05:27.813 "bdev_error_inject_error", 00:05:27.813 "bdev_error_delete", 00:05:27.813 "bdev_error_create", 00:05:27.813 "bdev_split_delete", 00:05:27.813 "bdev_split_create", 00:05:27.813 "bdev_delay_delete", 00:05:27.813 "bdev_delay_create", 00:05:27.813 "bdev_delay_update_latency", 00:05:27.813 "bdev_zone_block_delete", 00:05:27.813 "bdev_zone_block_create", 00:05:27.813 "blobfs_create", 00:05:27.813 "blobfs_detect", 00:05:27.813 "blobfs_set_cache_size", 00:05:27.813 "bdev_aio_delete", 00:05:27.813 "bdev_aio_rescan", 00:05:27.813 "bdev_aio_create", 00:05:27.813 "bdev_ftl_set_property", 00:05:27.813 "bdev_ftl_get_properties", 00:05:27.813 "bdev_ftl_get_stats", 00:05:27.813 "bdev_ftl_unmap", 00:05:27.813 "bdev_ftl_unload", 00:05:27.813 "bdev_ftl_delete", 00:05:27.813 "bdev_ftl_load", 00:05:27.813 "bdev_ftl_create", 00:05:27.813 "bdev_virtio_attach_controller", 00:05:27.813 "bdev_virtio_scsi_get_devices", 00:05:27.813 "bdev_virtio_detach_controller", 00:05:27.813 "bdev_virtio_blk_set_hotplug", 00:05:27.813 "bdev_iscsi_delete", 00:05:27.813 "bdev_iscsi_create", 00:05:27.813 "bdev_iscsi_set_options", 00:05:27.813 "accel_error_inject_error", 00:05:27.813 "ioat_scan_accel_module", 00:05:27.813 "dsa_scan_accel_module", 00:05:27.813 "iaa_scan_accel_module", 00:05:27.813 
"keyring_file_remove_key", 00:05:27.813 "keyring_file_add_key", 00:05:27.813 "keyring_linux_set_options", 00:05:27.813 "fsdev_aio_delete", 00:05:27.813 "fsdev_aio_create", 00:05:27.813 "iscsi_get_histogram", 00:05:27.813 "iscsi_enable_histogram", 00:05:27.813 "iscsi_set_options", 00:05:27.813 "iscsi_get_auth_groups", 00:05:27.813 "iscsi_auth_group_remove_secret", 00:05:27.813 "iscsi_auth_group_add_secret", 00:05:27.813 "iscsi_delete_auth_group", 00:05:27.813 "iscsi_create_auth_group", 00:05:27.813 "iscsi_set_discovery_auth", 00:05:27.813 "iscsi_get_options", 00:05:27.813 "iscsi_target_node_request_logout", 00:05:27.813 "iscsi_target_node_set_redirect", 00:05:27.813 "iscsi_target_node_set_auth", 00:05:27.813 "iscsi_target_node_add_lun", 00:05:27.813 "iscsi_get_stats", 00:05:27.813 "iscsi_get_connections", 00:05:27.813 "iscsi_portal_group_set_auth", 00:05:27.813 "iscsi_start_portal_group", 00:05:27.813 "iscsi_delete_portal_group", 00:05:27.813 "iscsi_create_portal_group", 00:05:27.813 "iscsi_get_portal_groups", 00:05:27.813 "iscsi_delete_target_node", 00:05:27.813 "iscsi_target_node_remove_pg_ig_maps", 00:05:27.813 "iscsi_target_node_add_pg_ig_maps", 00:05:27.813 "iscsi_create_target_node", 00:05:27.813 "iscsi_get_target_nodes", 00:05:27.813 "iscsi_delete_initiator_group", 00:05:27.813 "iscsi_initiator_group_remove_initiators", 00:05:27.813 "iscsi_initiator_group_add_initiators", 00:05:27.813 "iscsi_create_initiator_group", 00:05:27.813 "iscsi_get_initiator_groups", 00:05:27.813 "nvmf_set_crdt", 00:05:27.813 "nvmf_set_config", 00:05:27.813 "nvmf_set_max_subsystems", 00:05:27.813 "nvmf_stop_mdns_prr", 00:05:27.813 "nvmf_publish_mdns_prr", 00:05:27.813 "nvmf_subsystem_get_listeners", 00:05:27.813 "nvmf_subsystem_get_qpairs", 00:05:27.813 "nvmf_subsystem_get_controllers", 00:05:27.813 "nvmf_get_stats", 00:05:27.813 "nvmf_get_transports", 00:05:27.813 "nvmf_create_transport", 00:05:27.813 "nvmf_get_targets", 00:05:27.813 "nvmf_delete_target", 00:05:27.813 
"nvmf_create_target", 00:05:27.813 "nvmf_subsystem_allow_any_host", 00:05:27.813 "nvmf_subsystem_set_keys", 00:05:27.813 "nvmf_subsystem_remove_host", 00:05:27.813 "nvmf_subsystem_add_host", 00:05:27.813 "nvmf_ns_remove_host", 00:05:27.813 "nvmf_ns_add_host", 00:05:27.813 "nvmf_subsystem_remove_ns", 00:05:27.813 "nvmf_subsystem_set_ns_ana_group", 00:05:27.813 "nvmf_subsystem_add_ns", 00:05:27.813 "nvmf_subsystem_listener_set_ana_state", 00:05:27.813 "nvmf_discovery_get_referrals", 00:05:27.813 "nvmf_discovery_remove_referral", 00:05:27.813 "nvmf_discovery_add_referral", 00:05:27.813 "nvmf_subsystem_remove_listener", 00:05:27.813 "nvmf_subsystem_add_listener", 00:05:27.813 "nvmf_delete_subsystem", 00:05:27.813 "nvmf_create_subsystem", 00:05:27.813 "nvmf_get_subsystems", 00:05:27.813 "env_dpdk_get_mem_stats", 00:05:27.813 "nbd_get_disks", 00:05:27.813 "nbd_stop_disk", 00:05:27.813 "nbd_start_disk", 00:05:27.813 "ublk_recover_disk", 00:05:27.813 "ublk_get_disks", 00:05:27.813 "ublk_stop_disk", 00:05:27.814 "ublk_start_disk", 00:05:27.814 "ublk_destroy_target", 00:05:27.814 "ublk_create_target", 00:05:27.814 "virtio_blk_create_transport", 00:05:27.814 "virtio_blk_get_transports", 00:05:27.814 "vhost_controller_set_coalescing", 00:05:27.814 "vhost_get_controllers", 00:05:27.814 "vhost_delete_controller", 00:05:27.814 "vhost_create_blk_controller", 00:05:27.814 "vhost_scsi_controller_remove_target", 00:05:27.814 "vhost_scsi_controller_add_target", 00:05:27.814 "vhost_start_scsi_controller", 00:05:27.814 "vhost_create_scsi_controller", 00:05:27.814 "thread_set_cpumask", 00:05:27.814 "scheduler_set_options", 00:05:27.814 "framework_get_governor", 00:05:27.814 "framework_get_scheduler", 00:05:27.814 "framework_set_scheduler", 00:05:27.814 "framework_get_reactors", 00:05:27.814 "thread_get_io_channels", 00:05:27.814 "thread_get_pollers", 00:05:27.814 "thread_get_stats", 00:05:27.814 "framework_monitor_context_switch", 00:05:27.814 "spdk_kill_instance", 00:05:27.814 
"log_enable_timestamps", 00:05:27.814 "log_get_flags", 00:05:27.814 "log_clear_flag", 00:05:27.814 "log_set_flag", 00:05:27.814 "log_get_level", 00:05:27.814 "log_set_level", 00:05:27.814 "log_get_print_level", 00:05:27.814 "log_set_print_level", 00:05:27.814 "framework_enable_cpumask_locks", 00:05:27.814 "framework_disable_cpumask_locks", 00:05:27.814 "framework_wait_init", 00:05:27.814 "framework_start_init", 00:05:27.814 "scsi_get_devices", 00:05:27.814 "bdev_get_histogram", 00:05:27.814 "bdev_enable_histogram", 00:05:27.814 "bdev_set_qos_limit", 00:05:27.814 "bdev_set_qd_sampling_period", 00:05:27.814 "bdev_get_bdevs", 00:05:27.814 "bdev_reset_iostat", 00:05:27.814 "bdev_get_iostat", 00:05:27.814 "bdev_examine", 00:05:27.814 "bdev_wait_for_examine", 00:05:27.814 "bdev_set_options", 00:05:27.814 "accel_get_stats", 00:05:27.814 "accel_set_options", 00:05:27.814 "accel_set_driver", 00:05:27.814 "accel_crypto_key_destroy", 00:05:27.814 "accel_crypto_keys_get", 00:05:27.814 "accel_crypto_key_create", 00:05:27.814 "accel_assign_opc", 00:05:27.814 "accel_get_module_info", 00:05:27.814 "accel_get_opc_assignments", 00:05:27.814 "vmd_rescan", 00:05:27.814 "vmd_remove_device", 00:05:27.814 "vmd_enable", 00:05:27.814 "sock_get_default_impl", 00:05:27.814 "sock_set_default_impl", 00:05:27.814 "sock_impl_set_options", 00:05:27.814 "sock_impl_get_options", 00:05:27.814 "iobuf_get_stats", 00:05:27.814 "iobuf_set_options", 00:05:27.814 "keyring_get_keys", 00:05:27.814 "framework_get_pci_devices", 00:05:27.814 "framework_get_config", 00:05:27.814 "framework_get_subsystems", 00:05:27.814 "fsdev_set_opts", 00:05:27.814 "fsdev_get_opts", 00:05:27.814 "trace_get_info", 00:05:27.814 "trace_get_tpoint_group_mask", 00:05:27.814 "trace_disable_tpoint_group", 00:05:27.814 "trace_enable_tpoint_group", 00:05:27.814 "trace_clear_tpoint_mask", 00:05:27.814 "trace_set_tpoint_mask", 00:05:27.814 "notify_get_notifications", 00:05:27.814 "notify_get_types", 00:05:27.814 "spdk_get_version", 
00:05:27.814 "rpc_get_methods" 00:05:27.814 ] 00:05:27.814 02:25:36 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:27.814 02:25:36 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:27.814 02:25:36 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:27.814 02:25:36 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:27.814 02:25:36 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2823742 00:05:27.814 02:25:36 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 2823742 ']' 00:05:27.814 02:25:36 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 2823742 00:05:27.814 02:25:36 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:27.814 02:25:36 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:27.814 02:25:36 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2823742 00:05:27.814 02:25:36 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:27.814 02:25:36 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:27.814 02:25:36 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2823742' 00:05:27.814 killing process with pid 2823742 00:05:27.814 02:25:36 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 2823742 00:05:27.814 02:25:36 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 2823742 00:05:30.344 00:05:30.344 real 0m4.204s 00:05:30.344 user 0m7.647s 00:05:30.344 sys 0m0.709s 00:05:30.344 02:25:38 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:30.344 02:25:38 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:30.344 ************************************ 00:05:30.344 END TEST spdkcli_tcp 00:05:30.344 ************************************ 00:05:30.344 02:25:38 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:30.344 02:25:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:30.344 02:25:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:30.344 02:25:38 -- common/autotest_common.sh@10 -- # set +x 00:05:30.344 ************************************ 00:05:30.344 START TEST dpdk_mem_utility 00:05:30.344 ************************************ 00:05:30.344 02:25:38 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:30.344 * Looking for test storage... 00:05:30.344 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:30.344 02:25:38 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:30.344 02:25:38 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:05:30.344 02:25:38 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:30.344 02:25:38 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:30.344 02:25:38 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:30.344 02:25:38 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:30.344 02:25:38 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:30.344 02:25:38 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:30.344 02:25:38 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:30.344 02:25:38 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:30.344 02:25:38 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:30.344 02:25:38 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:30.344 02:25:38 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:30.344 02:25:38 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:30.344 
02:25:38 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:30.344 02:25:38 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:30.344 02:25:38 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:30.344 02:25:38 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:30.344 02:25:38 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:30.344 02:25:38 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:30.344 02:25:38 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:30.344 02:25:38 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:30.344 02:25:38 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:30.344 02:25:38 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:30.344 02:25:38 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:30.344 02:25:38 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:30.344 02:25:38 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:30.344 02:25:38 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:30.344 02:25:38 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:30.344 02:25:38 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:30.344 02:25:38 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:30.344 02:25:38 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:30.344 02:25:38 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:30.344 02:25:38 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:30.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.344 --rc genhtml_branch_coverage=1 00:05:30.344 --rc genhtml_function_coverage=1 00:05:30.344 --rc genhtml_legend=1 00:05:30.344 --rc geninfo_all_blocks=1 00:05:30.344 --rc 
geninfo_unexecuted_blocks=1 00:05:30.344 00:05:30.344 ' 00:05:30.344 02:25:38 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:30.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.344 --rc genhtml_branch_coverage=1 00:05:30.344 --rc genhtml_function_coverage=1 00:05:30.344 --rc genhtml_legend=1 00:05:30.344 --rc geninfo_all_blocks=1 00:05:30.344 --rc geninfo_unexecuted_blocks=1 00:05:30.344 00:05:30.344 ' 00:05:30.344 02:25:38 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:30.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.344 --rc genhtml_branch_coverage=1 00:05:30.344 --rc genhtml_function_coverage=1 00:05:30.344 --rc genhtml_legend=1 00:05:30.344 --rc geninfo_all_blocks=1 00:05:30.344 --rc geninfo_unexecuted_blocks=1 00:05:30.344 00:05:30.344 ' 00:05:30.344 02:25:38 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:30.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.344 --rc genhtml_branch_coverage=1 00:05:30.344 --rc genhtml_function_coverage=1 00:05:30.344 --rc genhtml_legend=1 00:05:30.344 --rc geninfo_all_blocks=1 00:05:30.344 --rc geninfo_unexecuted_blocks=1 00:05:30.344 00:05:30.344 ' 00:05:30.344 02:25:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:30.344 02:25:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2824334 00:05:30.344 02:25:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:30.344 02:25:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2824334 00:05:30.344 02:25:38 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 2824334 ']' 00:05:30.344 02:25:38 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:05:30.344 02:25:38 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:30.344 02:25:38 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.344 02:25:38 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:30.344 02:25:38 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:30.603 [2024-11-17 02:25:38.892699] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:05:30.603 [2024-11-17 02:25:38.892851] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2824334 ] 00:05:30.603 [2024-11-17 02:25:39.026956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.863 [2024-11-17 02:25:39.157321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.799 02:25:40 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:31.799 02:25:40 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:31.799 02:25:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:31.799 02:25:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:31.799 02:25:40 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.799 02:25:40 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:31.799 { 00:05:31.799 "filename": "/tmp/spdk_mem_dump.txt" 00:05:31.799 } 00:05:31.799 02:25:40 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.799 
02:25:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:31.799 DPDK memory size 816.000000 MiB in 1 heap(s) 00:05:31.799 1 heaps totaling size 816.000000 MiB 00:05:31.799 size: 816.000000 MiB heap id: 0 00:05:31.799 end heaps---------- 00:05:31.799 9 mempools totaling size 595.772034 MiB 00:05:31.799 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:31.799 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:31.799 size: 92.545471 MiB name: bdev_io_2824334 00:05:31.799 size: 50.003479 MiB name: msgpool_2824334 00:05:31.799 size: 36.509338 MiB name: fsdev_io_2824334 00:05:31.799 size: 21.763794 MiB name: PDU_Pool 00:05:31.799 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:31.799 size: 4.133484 MiB name: evtpool_2824334 00:05:31.799 size: 0.026123 MiB name: Session_Pool 00:05:31.799 end mempools------- 00:05:31.799 6 memzones totaling size 4.142822 MiB 00:05:31.799 size: 1.000366 MiB name: RG_ring_0_2824334 00:05:31.799 size: 1.000366 MiB name: RG_ring_1_2824334 00:05:31.799 size: 1.000366 MiB name: RG_ring_4_2824334 00:05:31.799 size: 1.000366 MiB name: RG_ring_5_2824334 00:05:31.799 size: 0.125366 MiB name: RG_ring_2_2824334 00:05:31.799 size: 0.015991 MiB name: RG_ring_3_2824334 00:05:31.799 end memzones------- 00:05:31.799 02:25:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:31.799 heap id: 0 total size: 816.000000 MiB number of busy elements: 44 number of free elements: 19 00:05:31.799 list of free elements. 
size: 16.857605 MiB 00:05:31.799 element at address: 0x200006400000 with size: 1.995972 MiB 00:05:31.799 element at address: 0x20000a600000 with size: 1.995972 MiB 00:05:31.799 element at address: 0x200003e00000 with size: 1.991028 MiB 00:05:31.799 element at address: 0x200018d00040 with size: 0.999939 MiB 00:05:31.799 element at address: 0x200019100040 with size: 0.999939 MiB 00:05:31.799 element at address: 0x200019200000 with size: 0.999329 MiB 00:05:31.799 element at address: 0x200000400000 with size: 0.998108 MiB 00:05:31.799 element at address: 0x200031e00000 with size: 0.994324 MiB 00:05:31.799 element at address: 0x200018a00000 with size: 0.959900 MiB 00:05:31.799 element at address: 0x200019500040 with size: 0.937256 MiB 00:05:31.799 element at address: 0x200000200000 with size: 0.716980 MiB 00:05:31.799 element at address: 0x20001ac00000 with size: 0.583191 MiB 00:05:31.799 element at address: 0x200000c00000 with size: 0.495300 MiB 00:05:31.799 element at address: 0x200018e00000 with size: 0.491150 MiB 00:05:31.799 element at address: 0x200019600000 with size: 0.485657 MiB 00:05:31.799 element at address: 0x200012c00000 with size: 0.446167 MiB 00:05:31.799 element at address: 0x200028000000 with size: 0.411072 MiB 00:05:31.799 element at address: 0x200000800000 with size: 0.355286 MiB 00:05:31.799 element at address: 0x20000a5ff040 with size: 0.001038 MiB 00:05:31.799 list of standard malloc elements. 
size: 199.221497 MiB 00:05:31.799 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:05:31.799 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:05:31.799 element at address: 0x200018bfff80 with size: 1.000183 MiB 00:05:31.799 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:05:31.799 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:31.799 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:31.799 element at address: 0x2000195eff40 with size: 0.062683 MiB 00:05:31.799 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:31.799 element at address: 0x200012bff040 with size: 0.000427 MiB 00:05:31.799 element at address: 0x200012bffa00 with size: 0.000366 MiB 00:05:31.799 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:31.799 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:31.799 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:05:31.799 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:05:31.799 element at address: 0x2000004ffa40 with size: 0.000244 MiB 00:05:31.799 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:05:31.799 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:05:31.799 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:05:31.799 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:05:31.799 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:05:31.799 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:05:31.799 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:05:31.799 element at address: 0x200000cff000 with size: 0.000244 MiB 00:05:31.799 element at address: 0x20000a5ff480 with size: 0.000244 MiB 00:05:31.799 element at address: 0x20000a5ff580 with size: 0.000244 MiB 00:05:31.799 element at address: 0x20000a5ff680 with size: 0.000244 MiB 00:05:31.799 element at address: 0x20000a5ff780 with size: 0.000244 MiB 00:05:31.799 element at 
address: 0x20000a5ff880 with size: 0.000244 MiB 00:05:31.799 element at address: 0x20000a5ff980 with size: 0.000244 MiB 00:05:31.799 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:05:31.799 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:05:31.799 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:05:31.799 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:05:31.799 element at address: 0x200012bff200 with size: 0.000244 MiB 00:05:31.799 element at address: 0x200012bff300 with size: 0.000244 MiB 00:05:31.799 element at address: 0x200012bff400 with size: 0.000244 MiB 00:05:31.799 element at address: 0x200012bff500 with size: 0.000244 MiB 00:05:31.799 element at address: 0x200012bff600 with size: 0.000244 MiB 00:05:31.799 element at address: 0x200012bff700 with size: 0.000244 MiB 00:05:31.799 element at address: 0x200012bff800 with size: 0.000244 MiB 00:05:31.799 element at address: 0x200012bff900 with size: 0.000244 MiB 00:05:31.799 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:05:31.799 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:05:31.799 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:05:31.799 list of memzone associated elements. 
size: 599.920898 MiB 00:05:31.799 element at address: 0x20001ac954c0 with size: 211.416809 MiB 00:05:31.799 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:31.799 element at address: 0x20002806ff80 with size: 157.562622 MiB 00:05:31.800 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:31.800 element at address: 0x200012df4740 with size: 92.045105 MiB 00:05:31.800 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_2824334_0 00:05:31.800 element at address: 0x200000dff340 with size: 48.003113 MiB 00:05:31.800 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2824334_0 00:05:31.800 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:05:31.800 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_2824334_0 00:05:31.800 element at address: 0x2000197be900 with size: 20.255615 MiB 00:05:31.800 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:31.800 element at address: 0x200031ffeb00 with size: 18.005127 MiB 00:05:31.800 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:31.800 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:05:31.800 associated memzone info: size: 3.000122 MiB name: MP_evtpool_2824334_0 00:05:31.800 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:05:31.800 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2824334 00:05:31.800 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:31.800 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2824334 00:05:31.800 element at address: 0x200018efde00 with size: 1.008179 MiB 00:05:31.800 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:31.800 element at address: 0x2000196bc780 with size: 1.008179 MiB 00:05:31.800 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:31.800 element at address: 0x200018afde00 with size: 1.008179 MiB 00:05:31.800 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:31.800 element at address: 0x200012cf25c0 with size: 1.008179 MiB 00:05:31.800 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:31.800 element at address: 0x200000cff100 with size: 1.000549 MiB 00:05:31.800 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2824334 00:05:31.800 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:05:31.800 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2824334 00:05:31.800 element at address: 0x2000192ffd40 with size: 1.000549 MiB 00:05:31.800 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2824334 00:05:31.800 element at address: 0x200031efe8c0 with size: 1.000549 MiB 00:05:31.800 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2824334 00:05:31.800 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:05:31.800 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_2824334 00:05:31.800 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:05:31.800 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2824334 00:05:31.800 element at address: 0x200018e7dbc0 with size: 0.500549 MiB 00:05:31.800 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:31.800 element at address: 0x200012c72380 with size: 0.500549 MiB 00:05:31.800 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:31.800 element at address: 0x20001967c540 with size: 0.250549 MiB 00:05:31.800 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:31.800 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:05:31.800 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_2824334 00:05:31.800 element at address: 0x20000085f180 with size: 0.125549 MiB 00:05:31.800 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2824334 00:05:31.800 element at address: 0x200018af5bc0 with size: 0.031799 
MiB 00:05:31.800 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:31.800 element at address: 0x2000280693c0 with size: 0.023804 MiB 00:05:31.800 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:31.800 element at address: 0x20000085af40 with size: 0.016174 MiB 00:05:31.800 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2824334 00:05:31.800 element at address: 0x20002806f540 with size: 0.002502 MiB 00:05:31.800 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:31.800 element at address: 0x2000004ffb40 with size: 0.000366 MiB 00:05:31.800 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2824334 00:05:31.800 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:05:31.800 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_2824334 00:05:31.800 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:05:31.800 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2824334 00:05:31.800 element at address: 0x20000a5ffa80 with size: 0.000366 MiB 00:05:31.800 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:31.800 02:25:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:31.800 02:25:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2824334 00:05:31.800 02:25:40 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 2824334 ']' 00:05:31.800 02:25:40 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 2824334 00:05:31.800 02:25:40 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:31.800 02:25:40 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:31.800 02:25:40 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2824334 00:05:31.800 02:25:40 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:31.800 02:25:40 
dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:31.800 02:25:40 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2824334' 00:05:31.800 killing process with pid 2824334 00:05:31.800 02:25:40 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 2824334 00:05:31.800 02:25:40 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 2824334 00:05:34.329 00:05:34.329 real 0m4.018s 00:05:34.329 user 0m4.048s 00:05:34.329 sys 0m0.642s 00:05:34.329 02:25:42 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.329 02:25:42 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:34.329 ************************************ 00:05:34.329 END TEST dpdk_mem_utility 00:05:34.329 ************************************ 00:05:34.329 02:25:42 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:34.329 02:25:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:34.329 02:25:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.329 02:25:42 -- common/autotest_common.sh@10 -- # set +x 00:05:34.329 ************************************ 00:05:34.329 START TEST event 00:05:34.329 ************************************ 00:05:34.329 02:25:42 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:34.329 * Looking for test storage... 
00:05:34.329 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:34.329 02:25:42 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:34.329 02:25:42 event -- common/autotest_common.sh@1693 -- # lcov --version 00:05:34.329 02:25:42 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:34.588 02:25:42 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:34.588 02:25:42 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:34.588 02:25:42 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:34.588 02:25:42 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:34.588 02:25:42 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:34.588 02:25:42 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:34.588 02:25:42 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:34.588 02:25:42 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:34.588 02:25:42 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:34.588 02:25:42 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:34.588 02:25:42 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:34.588 02:25:42 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:34.588 02:25:42 event -- scripts/common.sh@344 -- # case "$op" in 00:05:34.588 02:25:42 event -- scripts/common.sh@345 -- # : 1 00:05:34.588 02:25:42 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:34.588 02:25:42 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:34.588 02:25:42 event -- scripts/common.sh@365 -- # decimal 1 00:05:34.588 02:25:42 event -- scripts/common.sh@353 -- # local d=1 00:05:34.588 02:25:42 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:34.589 02:25:42 event -- scripts/common.sh@355 -- # echo 1 00:05:34.589 02:25:42 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:34.589 02:25:42 event -- scripts/common.sh@366 -- # decimal 2 00:05:34.589 02:25:42 event -- scripts/common.sh@353 -- # local d=2 00:05:34.589 02:25:42 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:34.589 02:25:42 event -- scripts/common.sh@355 -- # echo 2 00:05:34.589 02:25:42 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:34.589 02:25:42 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:34.589 02:25:42 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:34.589 02:25:42 event -- scripts/common.sh@368 -- # return 0 00:05:34.589 02:25:42 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:34.589 02:25:42 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:34.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.589 --rc genhtml_branch_coverage=1 00:05:34.589 --rc genhtml_function_coverage=1 00:05:34.589 --rc genhtml_legend=1 00:05:34.589 --rc geninfo_all_blocks=1 00:05:34.589 --rc geninfo_unexecuted_blocks=1 00:05:34.589 00:05:34.589 ' 00:05:34.589 02:25:42 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:34.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.589 --rc genhtml_branch_coverage=1 00:05:34.589 --rc genhtml_function_coverage=1 00:05:34.589 --rc genhtml_legend=1 00:05:34.589 --rc geninfo_all_blocks=1 00:05:34.589 --rc geninfo_unexecuted_blocks=1 00:05:34.589 00:05:34.589 ' 00:05:34.589 02:25:42 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:34.589 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:34.589 --rc genhtml_branch_coverage=1 00:05:34.589 --rc genhtml_function_coverage=1 00:05:34.589 --rc genhtml_legend=1 00:05:34.589 --rc geninfo_all_blocks=1 00:05:34.589 --rc geninfo_unexecuted_blocks=1 00:05:34.589 00:05:34.589 ' 00:05:34.589 02:25:42 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:34.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.589 --rc genhtml_branch_coverage=1 00:05:34.589 --rc genhtml_function_coverage=1 00:05:34.589 --rc genhtml_legend=1 00:05:34.589 --rc geninfo_all_blocks=1 00:05:34.589 --rc geninfo_unexecuted_blocks=1 00:05:34.589 00:05:34.589 ' 00:05:34.589 02:25:42 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:34.589 02:25:42 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:34.589 02:25:42 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:34.589 02:25:42 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:34.589 02:25:42 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.589 02:25:42 event -- common/autotest_common.sh@10 -- # set +x 00:05:34.589 ************************************ 00:05:34.589 START TEST event_perf 00:05:34.589 ************************************ 00:05:34.589 02:25:42 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:34.589 Running I/O for 1 seconds...[2024-11-17 02:25:42.911716] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:05:34.589 [2024-11-17 02:25:42.911830] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2824928 ] 00:05:34.847 [2024-11-17 02:25:43.055864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:34.847 [2024-11-17 02:25:43.202457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:34.847 [2024-11-17 02:25:43.202526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:34.848 [2024-11-17 02:25:43.202619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.848 [2024-11-17 02:25:43.202644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:36.221 Running I/O for 1 seconds... 00:05:36.221 lcore 0: 222922 00:05:36.221 lcore 1: 222921 00:05:36.221 lcore 2: 222921 00:05:36.221 lcore 3: 222921 00:05:36.221 done. 
00:05:36.221 00:05:36.221 real 0m1.598s 00:05:36.221 user 0m4.426s 00:05:36.221 sys 0m0.158s 00:05:36.221 02:25:44 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:36.221 02:25:44 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:36.221 ************************************ 00:05:36.221 END TEST event_perf 00:05:36.221 ************************************ 00:05:36.221 02:25:44 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:36.221 02:25:44 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:36.221 02:25:44 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:36.221 02:25:44 event -- common/autotest_common.sh@10 -- # set +x 00:05:36.221 ************************************ 00:05:36.221 START TEST event_reactor 00:05:36.221 ************************************ 00:05:36.221 02:25:44 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:36.221 [2024-11-17 02:25:44.567120] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:05:36.221 [2024-11-17 02:25:44.567235] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2825086 ] 00:05:36.479 [2024-11-17 02:25:44.709393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.479 [2024-11-17 02:25:44.847421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.853 test_start 00:05:37.853 oneshot 00:05:37.853 tick 100 00:05:37.853 tick 100 00:05:37.853 tick 250 00:05:37.853 tick 100 00:05:37.853 tick 100 00:05:37.853 tick 100 00:05:37.853 tick 250 00:05:37.853 tick 500 00:05:37.853 tick 100 00:05:37.853 tick 100 00:05:37.853 tick 250 00:05:37.853 tick 100 00:05:37.853 tick 100 00:05:37.853 test_end 00:05:37.853 00:05:37.853 real 0m1.576s 00:05:37.853 user 0m1.427s 00:05:37.853 sys 0m0.141s 00:05:37.853 02:25:46 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:37.853 02:25:46 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:37.853 ************************************ 00:05:37.853 END TEST event_reactor 00:05:37.853 ************************************ 00:05:37.853 02:25:46 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:37.853 02:25:46 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:37.853 02:25:46 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.853 02:25:46 event -- common/autotest_common.sh@10 -- # set +x 00:05:37.853 ************************************ 00:05:37.853 START TEST event_reactor_perf 00:05:37.853 ************************************ 00:05:37.853 02:25:46 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:05:37.853 [2024-11-17 02:25:46.191319] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:05:37.853 [2024-11-17 02:25:46.191466] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2825372 ] 00:05:38.112 [2024-11-17 02:25:46.333183] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.112 [2024-11-17 02:25:46.471518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.538 test_start 00:05:39.538 test_end 00:05:39.538 Performance: 268079 events per second 00:05:39.538 00:05:39.538 real 0m1.570s 00:05:39.538 user 0m1.419s 00:05:39.538 sys 0m0.142s 00:05:39.538 02:25:47 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:39.538 02:25:47 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:39.538 ************************************ 00:05:39.538 END TEST event_reactor_perf 00:05:39.538 ************************************ 00:05:39.538 02:25:47 event -- event/event.sh@49 -- # uname -s 00:05:39.538 02:25:47 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:39.538 02:25:47 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:39.538 02:25:47 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:39.538 02:25:47 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.538 02:25:47 event -- common/autotest_common.sh@10 -- # set +x 00:05:39.538 ************************************ 00:05:39.538 START TEST event_scheduler 00:05:39.538 ************************************ 00:05:39.538 02:25:47 event.event_scheduler -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:39.538 * Looking for test storage... 00:05:39.538 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:39.538 02:25:47 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:39.538 02:25:47 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:05:39.538 02:25:47 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:39.538 02:25:47 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:39.539 02:25:47 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:39.539 02:25:47 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:39.539 02:25:47 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:39.539 02:25:47 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:39.539 02:25:47 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:39.539 02:25:47 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:39.539 02:25:47 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:39.539 02:25:47 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:39.539 02:25:47 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:39.539 02:25:47 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:39.539 02:25:47 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:39.539 02:25:47 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:39.539 02:25:47 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:39.539 02:25:47 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:39.539 02:25:47 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:39.539 02:25:47 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:39.539 02:25:47 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:39.539 02:25:47 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:39.539 02:25:47 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:39.539 02:25:47 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:39.539 02:25:47 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:39.539 02:25:47 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:39.539 02:25:47 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:39.539 02:25:47 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:39.539 02:25:47 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:39.539 02:25:47 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:39.539 02:25:47 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:39.539 02:25:47 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:39.539 02:25:47 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:39.539 02:25:47 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:39.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.539 --rc genhtml_branch_coverage=1 00:05:39.539 --rc genhtml_function_coverage=1 00:05:39.539 --rc genhtml_legend=1 00:05:39.539 --rc geninfo_all_blocks=1 00:05:39.539 --rc geninfo_unexecuted_blocks=1 00:05:39.539 00:05:39.539 ' 00:05:39.539 02:25:47 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:39.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.539 --rc genhtml_branch_coverage=1 00:05:39.539 --rc genhtml_function_coverage=1 00:05:39.539 --rc 
genhtml_legend=1 00:05:39.539 --rc geninfo_all_blocks=1 00:05:39.539 --rc geninfo_unexecuted_blocks=1 00:05:39.539 00:05:39.539 ' 00:05:39.539 02:25:47 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:39.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.539 --rc genhtml_branch_coverage=1 00:05:39.539 --rc genhtml_function_coverage=1 00:05:39.539 --rc genhtml_legend=1 00:05:39.539 --rc geninfo_all_blocks=1 00:05:39.539 --rc geninfo_unexecuted_blocks=1 00:05:39.539 00:05:39.539 ' 00:05:39.539 02:25:47 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:39.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.539 --rc genhtml_branch_coverage=1 00:05:39.539 --rc genhtml_function_coverage=1 00:05:39.539 --rc genhtml_legend=1 00:05:39.539 --rc geninfo_all_blocks=1 00:05:39.539 --rc geninfo_unexecuted_blocks=1 00:05:39.539 00:05:39.539 ' 00:05:39.539 02:25:47 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:39.539 02:25:47 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2825569 00:05:39.539 02:25:47 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:39.539 02:25:47 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:39.539 02:25:47 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2825569 00:05:39.539 02:25:47 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 2825569 ']' 00:05:39.539 02:25:47 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.539 02:25:47 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:39.539 02:25:47 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.539 02:25:47 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:39.539 02:25:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:39.798 [2024-11-17 02:25:48.004801] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:05:39.798 [2024-11-17 02:25:48.004947] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2825569 ] 00:05:39.798 [2024-11-17 02:25:48.145670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:40.056 [2024-11-17 02:25:48.273498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.056 [2024-11-17 02:25:48.273562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:40.056 [2024-11-17 02:25:48.273604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:40.056 [2024-11-17 02:25:48.273615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:40.623 02:25:48 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:40.623 02:25:48 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:40.623 02:25:48 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:40.623 02:25:48 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.623 02:25:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:40.623 [2024-11-17 02:25:48.972758] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:40.623 [2024-11-17 02:25:48.972812] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:40.623 [2024-11-17 02:25:48.972847] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:40.623 [2024-11-17 02:25:48.972866] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:40.623 [2024-11-17 02:25:48.972887] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:40.623 02:25:48 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.623 02:25:48 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:40.623 02:25:48 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.623 02:25:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:40.880 [2024-11-17 02:25:49.298307] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:40.880 02:25:49 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.880 02:25:49 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:40.880 02:25:49 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:40.880 02:25:49 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.880 02:25:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:40.880 ************************************ 00:05:40.880 START TEST scheduler_create_thread 00:05:40.880 ************************************ 00:05:40.880 02:25:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:40.880 02:25:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:40.880 02:25:49 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.880 02:25:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:41.138 2 00:05:41.138 02:25:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.138 02:25:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:41.138 02:25:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.138 02:25:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:41.138 3 00:05:41.138 02:25:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.138 02:25:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:41.138 02:25:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.138 02:25:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:41.138 4 00:05:41.138 02:25:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.138 02:25:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:41.138 02:25:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.138 02:25:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:41.138 5 00:05:41.138 02:25:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.138 02:25:49 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:41.138 02:25:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.138 02:25:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:41.138 6 00:05:41.139 02:25:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.139 02:25:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:41.139 02:25:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.139 02:25:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:41.139 7 00:05:41.139 02:25:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.139 02:25:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:41.139 02:25:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.139 02:25:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:41.139 8 00:05:41.139 02:25:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.139 02:25:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:41.139 02:25:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.139 02:25:49 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:41.139 9 00:05:41.139 02:25:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.139 02:25:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:41.139 02:25:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.139 02:25:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:41.139 10 00:05:41.139 02:25:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.139 02:25:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:41.139 02:25:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.139 02:25:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:41.139 02:25:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.139 02:25:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:41.139 02:25:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:41.139 02:25:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.139 02:25:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:41.139 02:25:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.139 02:25:49 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:41.139 02:25:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.139 02:25:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:41.139 02:25:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.139 02:25:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:41.139 02:25:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:41.139 02:25:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.139 02:25:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:41.139 02:25:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.139 00:05:41.139 real 0m0.111s 00:05:41.139 user 0m0.010s 00:05:41.139 sys 0m0.004s 00:05:41.139 02:25:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.139 02:25:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:41.139 ************************************ 00:05:41.139 END TEST scheduler_create_thread 00:05:41.139 ************************************ 00:05:41.139 02:25:49 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:41.139 02:25:49 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2825569 00:05:41.139 02:25:49 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 2825569 ']' 00:05:41.139 02:25:49 event.event_scheduler -- common/autotest_common.sh@958 -- # 
kill -0 2825569 00:05:41.139 02:25:49 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:41.139 02:25:49 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:41.139 02:25:49 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2825569 00:05:41.139 02:25:49 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:41.139 02:25:49 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:41.139 02:25:49 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2825569' 00:05:41.139 killing process with pid 2825569 00:05:41.139 02:25:49 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 2825569 00:05:41.139 02:25:49 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 2825569 00:05:41.706 [2024-11-17 02:25:49.925516] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:05:42.643 00:05:42.643 real 0m3.140s 00:05:42.643 user 0m5.410s 00:05:42.643 sys 0m0.524s 00:05:42.643 02:25:50 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:42.643 02:25:50 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:42.643 ************************************ 00:05:42.643 END TEST event_scheduler 00:05:42.643 ************************************ 00:05:42.643 02:25:50 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:42.643 02:25:50 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:42.643 02:25:50 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:42.643 02:25:50 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.644 02:25:50 event -- common/autotest_common.sh@10 -- # set +x 00:05:42.644 ************************************ 00:05:42.644 START TEST app_repeat 00:05:42.644 ************************************ 00:05:42.644 02:25:50 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:42.644 02:25:50 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.644 02:25:50 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.644 02:25:50 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:42.644 02:25:50 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:42.644 02:25:50 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:42.644 02:25:50 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:42.644 02:25:50 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:42.644 02:25:50 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2826019 00:05:42.644 02:25:50 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:42.644 02:25:50 event.app_repeat -- event/event.sh@20 
-- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:42.644 02:25:50 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2826019' 00:05:42.644 Process app_repeat pid: 2826019 00:05:42.644 02:25:50 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:42.644 02:25:50 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:42.644 spdk_app_start Round 0 00:05:42.644 02:25:50 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2826019 /var/tmp/spdk-nbd.sock 00:05:42.644 02:25:50 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2826019 ']' 00:05:42.644 02:25:50 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:42.644 02:25:50 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:42.644 02:25:50 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:42.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:42.644 02:25:50 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:42.644 02:25:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:42.644 [2024-11-17 02:25:51.025311] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:05:42.644 [2024-11-17 02:25:51.025455] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2826019 ] 00:05:42.902 [2024-11-17 02:25:51.169149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:42.902 [2024-11-17 02:25:51.308735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.902 [2024-11-17 02:25:51.308740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:43.836 02:25:52 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:43.837 02:25:52 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:43.837 02:25:52 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:44.095 Malloc0 00:05:44.095 02:25:52 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:44.353 Malloc1 00:05:44.353 02:25:52 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:44.353 02:25:52 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.353 02:25:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:44.353 02:25:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:44.353 02:25:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.353 02:25:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:44.353 02:25:52 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:44.353 
02:25:52 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.353 02:25:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:44.354 02:25:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:44.354 02:25:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.354 02:25:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:44.354 02:25:52 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:44.354 02:25:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:44.354 02:25:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:44.354 02:25:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:44.612 /dev/nbd0 00:05:44.612 02:25:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:44.612 02:25:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:44.612 02:25:53 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:44.612 02:25:53 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:44.612 02:25:53 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:44.612 02:25:53 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:44.612 02:25:53 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:44.612 02:25:53 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:44.612 02:25:53 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:44.612 02:25:53 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:44.612 02:25:53 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:44.612 1+0 records in 00:05:44.612 1+0 records out 00:05:44.612 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000249947 s, 16.4 MB/s 00:05:44.612 02:25:53 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:44.612 02:25:53 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:44.612 02:25:53 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:44.612 02:25:53 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:44.612 02:25:53 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:44.612 02:25:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:44.612 02:25:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:44.612 02:25:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:45.178 /dev/nbd1 00:05:45.178 02:25:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:45.178 02:25:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:45.178 02:25:53 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:45.178 02:25:53 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:45.178 02:25:53 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:45.178 02:25:53 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:45.178 02:25:53 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:45.178 02:25:53 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:45.178 02:25:53 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:45.178 02:25:53 event.app_repeat -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:45.178 02:25:53 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:45.178 1+0 records in 00:05:45.178 1+0 records out 00:05:45.178 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000217098 s, 18.9 MB/s 00:05:45.178 02:25:53 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:45.178 02:25:53 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:45.178 02:25:53 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:45.178 02:25:53 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:45.178 02:25:53 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:45.178 02:25:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:45.178 02:25:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:45.178 02:25:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:45.178 02:25:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.178 02:25:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:45.436 02:25:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:45.436 { 00:05:45.436 "nbd_device": "/dev/nbd0", 00:05:45.436 "bdev_name": "Malloc0" 00:05:45.436 }, 00:05:45.436 { 00:05:45.436 "nbd_device": "/dev/nbd1", 00:05:45.436 "bdev_name": "Malloc1" 00:05:45.436 } 00:05:45.436 ]' 00:05:45.436 02:25:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:45.436 { 00:05:45.436 "nbd_device": "/dev/nbd0", 00:05:45.436 "bdev_name": "Malloc0" 00:05:45.436 
}, 00:05:45.436 { 00:05:45.436 "nbd_device": "/dev/nbd1", 00:05:45.436 "bdev_name": "Malloc1" 00:05:45.436 } 00:05:45.436 ]' 00:05:45.436 02:25:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:45.436 02:25:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:45.436 /dev/nbd1' 00:05:45.436 02:25:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:45.436 /dev/nbd1' 00:05:45.436 02:25:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:45.436 02:25:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:45.436 02:25:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:45.436 02:25:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:45.436 02:25:53 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:45.437 02:25:53 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:45.437 02:25:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.437 02:25:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:45.437 02:25:53 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:45.437 02:25:53 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:45.437 02:25:53 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:45.437 02:25:53 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:45.437 256+0 records in 00:05:45.437 256+0 records out 00:05:45.437 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00376763 s, 278 MB/s 00:05:45.437 02:25:53 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:45.437 02:25:53 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:45.437 256+0 records in 00:05:45.437 256+0 records out 00:05:45.437 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0250009 s, 41.9 MB/s 00:05:45.437 02:25:53 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:45.437 02:25:53 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:45.437 256+0 records in 00:05:45.437 256+0 records out 00:05:45.437 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0305104 s, 34.4 MB/s 00:05:45.437 02:25:53 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:45.437 02:25:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.437 02:25:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:45.437 02:25:53 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:45.437 02:25:53 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:45.437 02:25:53 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:45.437 02:25:53 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:45.437 02:25:53 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:45.437 02:25:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:45.437 02:25:53 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:45.437 02:25:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:45.437 02:25:53 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:45.437 02:25:53 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:45.437 02:25:53 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.437 02:25:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.437 02:25:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:45.437 02:25:53 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:45.437 02:25:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:45.437 02:25:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:45.694 02:25:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:45.694 02:25:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:45.694 02:25:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:45.694 02:25:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:45.694 02:25:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:45.694 02:25:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:45.694 02:25:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:45.694 02:25:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:45.694 02:25:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:45.694 02:25:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:45.951 02:25:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:45.951 02:25:54 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:45.951 02:25:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:45.951 02:25:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:45.951 02:25:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:45.951 02:25:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:45.951 02:25:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:45.951 02:25:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:45.951 02:25:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:45.951 02:25:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.951 02:25:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:46.209 02:25:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:46.209 02:25:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:46.209 02:25:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:46.209 02:25:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:46.209 02:25:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:46.209 02:25:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:46.209 02:25:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:46.209 02:25:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:46.209 02:25:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:46.209 02:25:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:46.209 02:25:54 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:46.209 02:25:54 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:46.209 02:25:54 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:46.775 02:25:55 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:48.151 [2024-11-17 02:25:56.330212] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:48.151 [2024-11-17 02:25:56.465132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:48.151 [2024-11-17 02:25:56.465138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.409 [2024-11-17 02:25:56.680856] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:48.409 [2024-11-17 02:25:56.680944] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:49.782 02:25:58 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:49.782 02:25:58 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:49.782 spdk_app_start Round 1 00:05:49.782 02:25:58 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2826019 /var/tmp/spdk-nbd.sock 00:05:49.782 02:25:58 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2826019 ']' 00:05:49.782 02:25:58 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:49.782 02:25:58 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:49.782 02:25:58 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:49.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:49.782 02:25:58 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:49.782 02:25:58 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:50.040 02:25:58 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:50.040 02:25:58 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:50.040 02:25:58 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:50.299 Malloc0 00:05:50.299 02:25:58 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:50.865 Malloc1 00:05:50.865 02:25:59 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:50.865 02:25:59 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.865 02:25:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:50.865 02:25:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:50.865 02:25:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.865 02:25:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:50.865 02:25:59 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:50.865 02:25:59 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.865 02:25:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:50.865 02:25:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:50.865 02:25:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.865 02:25:59 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:05:50.865 02:25:59 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:50.865 02:25:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:50.865 02:25:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:50.865 02:25:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:51.124 /dev/nbd0 00:05:51.124 02:25:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:51.124 02:25:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:51.124 02:25:59 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:51.124 02:25:59 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:51.124 02:25:59 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:51.124 02:25:59 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:51.124 02:25:59 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:51.124 02:25:59 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:51.124 02:25:59 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:51.124 02:25:59 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:51.124 02:25:59 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:51.124 1+0 records in 00:05:51.124 1+0 records out 00:05:51.124 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000211631 s, 19.4 MB/s 00:05:51.124 02:25:59 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:51.124 02:25:59 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:51.124 02:25:59 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:51.124 02:25:59 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:51.124 02:25:59 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:51.124 02:25:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:51.124 02:25:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:51.124 02:25:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:51.382 /dev/nbd1 00:05:51.382 02:25:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:51.382 02:25:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:51.382 02:25:59 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:51.382 02:25:59 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:51.382 02:25:59 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:51.382 02:25:59 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:51.383 02:25:59 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:51.383 02:25:59 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:51.383 02:25:59 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:51.383 02:25:59 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:51.383 02:25:59 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:51.383 1+0 records in 00:05:51.383 1+0 records out 00:05:51.383 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000214575 s, 19.1 MB/s 00:05:51.383 02:25:59 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:51.383 02:25:59 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:51.383 02:25:59 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:51.383 02:25:59 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:51.383 02:25:59 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:51.383 02:25:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:51.383 02:25:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:51.383 02:25:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:51.383 02:25:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.383 02:25:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:51.641 02:25:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:51.641 { 00:05:51.641 "nbd_device": "/dev/nbd0", 00:05:51.641 "bdev_name": "Malloc0" 00:05:51.641 }, 00:05:51.641 { 00:05:51.641 "nbd_device": "/dev/nbd1", 00:05:51.641 "bdev_name": "Malloc1" 00:05:51.641 } 00:05:51.641 ]' 00:05:51.641 02:25:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:51.641 { 00:05:51.641 "nbd_device": "/dev/nbd0", 00:05:51.641 "bdev_name": "Malloc0" 00:05:51.641 }, 00:05:51.641 { 00:05:51.641 "nbd_device": "/dev/nbd1", 00:05:51.641 "bdev_name": "Malloc1" 00:05:51.641 } 00:05:51.641 ]' 00:05:51.641 02:25:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:51.641 02:26:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:51.641 /dev/nbd1' 00:05:51.641 02:26:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:51.641 /dev/nbd1' 00:05:51.641 
02:26:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:51.641 02:26:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:51.641 02:26:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:51.641 02:26:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:51.641 02:26:00 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:51.642 02:26:00 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:51.642 02:26:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.642 02:26:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:51.642 02:26:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:51.642 02:26:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:51.642 02:26:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:51.642 02:26:00 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:51.642 256+0 records in 00:05:51.642 256+0 records out 00:05:51.642 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00393007 s, 267 MB/s 00:05:51.642 02:26:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:51.642 02:26:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:51.642 256+0 records in 00:05:51.642 256+0 records out 00:05:51.642 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0284553 s, 36.8 MB/s 00:05:51.642 02:26:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:51.642 02:26:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:51.900 256+0 records in 00:05:51.900 256+0 records out 00:05:51.900 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0300029 s, 34.9 MB/s 00:05:51.900 02:26:00 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:51.900 02:26:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.900 02:26:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:51.900 02:26:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:51.900 02:26:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:51.900 02:26:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:51.900 02:26:00 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:51.900 02:26:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:51.900 02:26:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:51.900 02:26:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:51.900 02:26:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:51.900 02:26:00 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:51.900 02:26:00 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:51.900 02:26:00 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.900 02:26:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:05:51.900 02:26:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:51.900 02:26:00 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:51.900 02:26:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:51.900 02:26:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:52.159 02:26:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:52.159 02:26:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:52.159 02:26:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:52.159 02:26:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:52.159 02:26:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:52.159 02:26:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:52.159 02:26:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:52.159 02:26:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:52.159 02:26:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:52.159 02:26:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:52.416 02:26:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:52.416 02:26:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:52.416 02:26:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:52.416 02:26:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:52.416 02:26:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:52.416 02:26:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:52.416 02:26:00 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:05:52.416 02:26:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:52.416 02:26:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:52.417 02:26:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.417 02:26:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:52.674 02:26:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:52.674 02:26:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:52.674 02:26:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:52.674 02:26:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:52.674 02:26:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:52.674 02:26:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:52.674 02:26:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:52.674 02:26:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:52.674 02:26:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:52.674 02:26:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:52.674 02:26:01 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:52.674 02:26:01 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:52.674 02:26:01 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:53.241 02:26:01 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:54.616 [2024-11-17 02:26:02.671036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:54.616 [2024-11-17 02:26:02.804710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.616 [2024-11-17 02:26:02.804711] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.616 [2024-11-17 02:26:03.019435] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:54.616 [2024-11-17 02:26:03.019538] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:56.515 02:26:04 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:56.515 02:26:04 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:56.515 spdk_app_start Round 2 00:05:56.515 02:26:04 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2826019 /var/tmp/spdk-nbd.sock 00:05:56.515 02:26:04 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2826019 ']' 00:05:56.515 02:26:04 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:56.515 02:26:04 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:56.515 02:26:04 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:56.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
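The `nbd_get_count` sequence the trace keeps repeating (sh@63–sh@66: fetch the disk list as JSON over RPC, pull out the `nbd_device` fields with `jq`, count the `/dev/nbd*` matches with `grep -c`) can be condensed as the sketch below. A literal JSON string stands in for the `rpc.py ... nbd_get_disks` output, an assumption made so the sketch runs without a live SPDK target:

```shell
#!/bin/sh
# Sketch of the nbd_get_count pattern from the trace: take the JSON disk
# list, extract each nbd_device path, and count how many are /dev/nbd*.
# The hardcoded JSON below is a stand-in for real nbd_get_disks output.
nbd_disks_json='[
  { "nbd_device": "/dev/nbd0", "bdev_name": "Malloc0" },
  { "nbd_device": "/dev/nbd1", "bdev_name": "Malloc1" }
]'
nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
# grep -c exits nonzero when nothing matches (the empty '[]' case in the
# trace), so guard the pipeline the same way the helper's `true` does.
count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
echo "$count"
```

With the empty list `'[]'` the same pipeline yields `0`, which is what the trace shows after both disks are stopped (sh@104/sh@105).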
00:05:56.515 02:26:04 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:56.515 02:26:04 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:56.515 02:26:04 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:56.515 02:26:04 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:56.515 02:26:04 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:56.773 Malloc0 00:05:56.773 02:26:05 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:57.031 Malloc1 00:05:57.031 02:26:05 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:57.031 02:26:05 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.031 02:26:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:57.031 02:26:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:57.031 02:26:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:57.031 02:26:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:57.031 02:26:05 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:57.031 02:26:05 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.031 02:26:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:57.031 02:26:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:57.031 02:26:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:57.031 02:26:05 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:05:57.031 02:26:05 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:57.031 02:26:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:57.031 02:26:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:57.031 02:26:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:57.289 /dev/nbd0 00:05:57.289 02:26:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:57.289 02:26:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:57.289 02:26:05 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:57.289 02:26:05 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:57.289 02:26:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:57.289 02:26:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:57.289 02:26:05 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:57.289 02:26:05 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:57.289 02:26:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:57.289 02:26:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:57.289 02:26:05 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:57.548 1+0 records in 00:05:57.548 1+0 records out 00:05:57.548 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000146008 s, 28.1 MB/s 00:05:57.548 02:26:05 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:57.548 02:26:05 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:57.548 02:26:05 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:57.548 02:26:05 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:57.548 02:26:05 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:57.548 02:26:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:57.548 02:26:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:57.548 02:26:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:57.806 /dev/nbd1 00:05:57.806 02:26:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:57.806 02:26:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:57.806 02:26:06 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:57.806 02:26:06 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:57.806 02:26:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:57.806 02:26:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:57.806 02:26:06 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:57.806 02:26:06 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:57.806 02:26:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:57.806 02:26:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:57.806 02:26:06 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:57.806 1+0 records in 00:05:57.806 1+0 records out 00:05:57.806 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00023531 s, 17.4 MB/s 00:05:57.806 02:26:06 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:57.806 02:26:06 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:57.806 02:26:06 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:57.806 02:26:06 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:57.806 02:26:06 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:57.806 02:26:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:57.806 02:26:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:57.806 02:26:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:57.806 02:26:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.806 02:26:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:58.064 02:26:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:58.064 { 00:05:58.064 "nbd_device": "/dev/nbd0", 00:05:58.064 "bdev_name": "Malloc0" 00:05:58.064 }, 00:05:58.064 { 00:05:58.064 "nbd_device": "/dev/nbd1", 00:05:58.064 "bdev_name": "Malloc1" 00:05:58.064 } 00:05:58.064 ]' 00:05:58.064 02:26:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:58.064 { 00:05:58.064 "nbd_device": "/dev/nbd0", 00:05:58.064 "bdev_name": "Malloc0" 00:05:58.064 }, 00:05:58.064 { 00:05:58.064 "nbd_device": "/dev/nbd1", 00:05:58.064 "bdev_name": "Malloc1" 00:05:58.064 } 00:05:58.064 ]' 00:05:58.064 02:26:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:58.064 02:26:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:58.064 /dev/nbd1' 00:05:58.064 02:26:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:58.064 /dev/nbd1' 00:05:58.064 
02:26:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:58.064 02:26:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:58.064 02:26:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:58.064 02:26:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:58.064 02:26:06 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:58.064 02:26:06 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:58.064 02:26:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.064 02:26:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:58.064 02:26:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:58.064 02:26:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:58.064 02:26:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:58.064 02:26:06 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:58.065 256+0 records in 00:05:58.065 256+0 records out 00:05:58.065 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00517306 s, 203 MB/s 00:05:58.065 02:26:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:58.065 02:26:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:58.065 256+0 records in 00:05:58.065 256+0 records out 00:05:58.065 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0260503 s, 40.3 MB/s 00:05:58.065 02:26:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:58.065 02:26:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:58.065 256+0 records in 00:05:58.065 256+0 records out 00:05:58.065 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0318326 s, 32.9 MB/s 00:05:58.065 02:26:06 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:58.065 02:26:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.065 02:26:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:58.065 02:26:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:58.065 02:26:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:58.065 02:26:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:58.065 02:26:06 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:58.065 02:26:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:58.065 02:26:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:58.065 02:26:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:58.065 02:26:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:58.065 02:26:06 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:58.065 02:26:06 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:58.065 02:26:06 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.065 02:26:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:05:58.065 02:26:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:58.065 02:26:06 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:58.065 02:26:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:58.065 02:26:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:58.322 02:26:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:58.323 02:26:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:58.323 02:26:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:58.323 02:26:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:58.323 02:26:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:58.323 02:26:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:58.323 02:26:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:58.323 02:26:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:58.323 02:26:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:58.323 02:26:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:58.889 02:26:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:58.889 02:26:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:58.889 02:26:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:58.889 02:26:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:58.889 02:26:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:58.889 02:26:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:58.889 02:26:07 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:05:58.889 02:26:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:58.889 02:26:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:58.889 02:26:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.889 02:26:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:58.889 02:26:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:58.889 02:26:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:58.889 02:26:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:59.147 02:26:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:59.147 02:26:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:59.147 02:26:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:59.147 02:26:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:59.147 02:26:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:59.147 02:26:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:59.147 02:26:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:59.147 02:26:07 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:59.147 02:26:07 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:59.147 02:26:07 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:59.405 02:26:07 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:00.780 [2024-11-17 02:26:09.031224] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:00.780 [2024-11-17 02:26:09.167517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.780 [2024-11-17 02:26:09.167522] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.038 [2024-11-17 02:26:09.381585] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:01.038 [2024-11-17 02:26:09.381677] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:02.412 02:26:10 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2826019 /var/tmp/spdk-nbd.sock 00:06:02.412 02:26:10 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2826019 ']' 00:06:02.412 02:26:10 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:02.412 02:26:10 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:02.412 02:26:10 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:02.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
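The readiness check that `waitfornbd` performs throughout the trace (sh@875–sh@877: poll `/proc/partitions` up to 20 times for the device name, then prove the device answers with a single 4 KiB direct read) reduces to the loop below. The partitions-file argument is a hypothetical parameter added here purely for testability; the real helper reads `/proc/partitions` directly:

```shell
#!/bin/sh
# Sketch of the waitfornbd polling pattern from the trace: wait until the
# nbd device appears in a partitions listing, retrying up to 20 times.
# $2 is a stand-in for /proc/partitions so the sketch can be exercised
# without a real /dev/nbdX device.
waitfornbd_sketch() {
    nbd_name=$1
    partitions=$2
    i=1
    while [ "$i" -le 20 ]; do
        if grep -q -w "$nbd_name" "$partitions"; then
            return 0        # device registered; caller dd-verifies next
        fi
        i=$((i + 1))
        sleep 0.1
    done
    return 1                # gave up after 20 polls
}
```

In the real helper a successful match is followed by `dd if=/dev/$nbd_name of=... bs=4096 count=1 iflag=direct` and a size check on the copied block, which is what produces the `1+0 records in / 1+0 records out / 4096 bytes` lines above.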
00:06:02.412 02:26:10 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:02.412 02:26:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:02.673 02:26:11 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:02.673 02:26:11 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:02.673 02:26:11 event.app_repeat -- event/event.sh@39 -- # killprocess 2826019 00:06:02.673 02:26:11 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 2826019 ']' 00:06:02.673 02:26:11 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 2826019 00:06:02.673 02:26:11 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:02.673 02:26:11 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:02.673 02:26:11 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2826019 00:06:02.932 02:26:11 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:02.932 02:26:11 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:02.932 02:26:11 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2826019' 00:06:02.932 killing process with pid 2826019 00:06:02.932 02:26:11 event.app_repeat -- common/autotest_common.sh@973 -- # kill 2826019 00:06:02.932 02:26:11 event.app_repeat -- common/autotest_common.sh@978 -- # wait 2826019 00:06:03.868 spdk_app_start is called in Round 0. 00:06:03.868 Shutdown signal received, stop current app iteration 00:06:03.868 Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 reinitialization... 00:06:03.868 spdk_app_start is called in Round 1. 00:06:03.868 Shutdown signal received, stop current app iteration 00:06:03.868 Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 reinitialization... 00:06:03.868 spdk_app_start is called in Round 2. 
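The `nbd_dd_data_verify` cycle exercised twice above (sh@100 writes 1 MiB of random data through each device, sh@101 reads it back with `cmp -b -n 1M`) can be sketched as one function. Plain files stand in for the `/dev/nbdX` targets, an assumption so the sketch runs without the nbd kernel module; the real helper additionally passes `oflag=direct` on the device writes to bypass the page cache:

```shell
#!/bin/sh
# Sketch of the nbd_dd_data_verify write/verify pattern from the trace:
# fill a temp file with 1 MiB of urandom data, dd it to every target,
# then cmp each target back against the source and clean up.
nbd_dd_data_verify_sketch() {
    operation=$1; shift
    tmp_file=$1; shift          # remaining args: target "devices"
    if [ "$operation" = write ]; then
        dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 2>/dev/null
        for dev in "$@"; do
            dd if="$tmp_file" of="$dev" bs=4096 count=256 2>/dev/null
        done
    elif [ "$operation" = verify ]; then
        for dev in "$@"; do
            # -n 1M limits the compare to the 1 MiB that was written
            cmp -b -n 1M "$tmp_file" "$dev" || return 1
        done
        rm -f "$tmp_file"       # matches the rm at sh@85 in the trace
    fi
}
```

The trace calls this as `nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write` followed by `... verify`; any byte mismatch makes `cmp` fail the test run.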
00:06:03.868 Shutdown signal received, stop current app iteration 00:06:03.868 Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 reinitialization... 00:06:03.868 spdk_app_start is called in Round 3. 00:06:03.868 Shutdown signal received, stop current app iteration 00:06:03.868 02:26:12 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:03.868 02:26:12 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:03.868 00:06:03.868 real 0m21.216s 00:06:03.868 user 0m45.218s 00:06:03.868 sys 0m3.378s 00:06:03.868 02:26:12 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:03.868 02:26:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:03.868 ************************************ 00:06:03.868 END TEST app_repeat 00:06:03.868 ************************************ 00:06:03.868 02:26:12 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:03.868 02:26:12 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:03.868 02:26:12 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:03.868 02:26:12 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:03.868 02:26:12 event -- common/autotest_common.sh@10 -- # set +x 00:06:03.868 ************************************ 00:06:03.868 START TEST cpu_locks 00:06:03.868 ************************************ 00:06:03.868 02:26:12 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:03.868 * Looking for test storage... 
00:06:03.868 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event
00:06:03.868 02:26:12 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:06:03.868 02:26:12 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version
00:06:03.868 02:26:12 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:06:04.127 02:26:12 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:06:04.127 02:26:12 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:04.127 02:26:12 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:04.127 02:26:12 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:04.127 02:26:12 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-:
00:06:04.127 02:26:12 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1
00:06:04.127 02:26:12 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-:
00:06:04.127 02:26:12 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2
00:06:04.127 02:26:12 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<'
00:06:04.127 02:26:12 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2
00:06:04.127 02:26:12 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1
00:06:04.127 02:26:12 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:04.127 02:26:12 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in
00:06:04.127 02:26:12 event.cpu_locks -- scripts/common.sh@345 -- # : 1
00:06:04.127 02:26:12 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:04.127 02:26:12 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:04.127 02:26:12 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1
00:06:04.127 02:26:12 event.cpu_locks -- scripts/common.sh@353 -- # local d=1
00:06:04.127 02:26:12 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:04.127 02:26:12 event.cpu_locks -- scripts/common.sh@355 -- # echo 1
00:06:04.127 02:26:12 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1
00:06:04.127 02:26:12 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2
00:06:04.127 02:26:12 event.cpu_locks -- scripts/common.sh@353 -- # local d=2
00:06:04.127 02:26:12 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:04.127 02:26:12 event.cpu_locks -- scripts/common.sh@355 -- # echo 2
00:06:04.127 02:26:12 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2
00:06:04.127 02:26:12 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:04.127 02:26:12 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:04.127 02:26:12 event.cpu_locks -- scripts/common.sh@368 -- # return 0
00:06:04.127 02:26:12 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:04.127 02:26:12 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:06:04.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:04.127 --rc genhtml_branch_coverage=1
00:06:04.127 --rc genhtml_function_coverage=1
00:06:04.127 --rc genhtml_legend=1
00:06:04.127 --rc geninfo_all_blocks=1
00:06:04.127 --rc geninfo_unexecuted_blocks=1
00:06:04.127
00:06:04.127 '
00:06:04.127 02:26:12 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:06:04.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:04.127 --rc genhtml_branch_coverage=1
00:06:04.127 --rc genhtml_function_coverage=1
00:06:04.127 --rc genhtml_legend=1
00:06:04.127 --rc geninfo_all_blocks=1
00:06:04.127 --rc geninfo_unexecuted_blocks=1
00:06:04.127
00:06:04.127 '
00:06:04.127 02:26:12 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:06:04.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:04.127 --rc genhtml_branch_coverage=1
00:06:04.127 --rc genhtml_function_coverage=1
00:06:04.127 --rc genhtml_legend=1
00:06:04.127 --rc geninfo_all_blocks=1
00:06:04.127 --rc geninfo_unexecuted_blocks=1
00:06:04.127
00:06:04.127 '
00:06:04.127 02:26:12 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:06:04.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:04.127 --rc genhtml_branch_coverage=1
00:06:04.127 --rc genhtml_function_coverage=1
00:06:04.127 --rc genhtml_legend=1
00:06:04.127 --rc geninfo_all_blocks=1
00:06:04.127 --rc geninfo_unexecuted_blocks=1
00:06:04.127
00:06:04.127 '
00:06:04.127 02:26:12 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock
00:06:04.127 02:26:12 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock
00:06:04.127 02:26:12 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT
00:06:04.127 02:26:12 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks
00:06:04.127 02:26:12 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:04.127 02:26:12 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:04.127 02:26:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:04.127 ************************************
00:06:04.127 START TEST default_locks
00:06:04.127 ************************************
00:06:04.127 02:26:12 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks
00:06:04.127 02:26:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2828775
00:06:04.127 02:26:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:06:04.127 02:26:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2828775
00:06:04.127 02:26:12 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2828775 ']'
00:06:04.127 02:26:12 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:04.127 02:26:12 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:04.127 02:26:12 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:04.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:04.127 02:26:12 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:04.127 02:26:12 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:06:04.127 [2024-11-17 02:26:12.519621] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization...
00:06:04.127 [2024-11-17 02:26:12.519775] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2828775 ]
00:06:04.386 [2024-11-17 02:26:12.655444] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:04.386 [2024-11-17 02:26:12.788010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:05.321 02:26:13 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:05.321 02:26:13 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0
00:06:05.321 02:26:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2828775
00:06:05.321 02:26:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2828775
00:06:05.321 02:26:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:05.579 lslocks: write error
00:06:05.579 02:26:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2828775
00:06:05.580 02:26:13 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 2828775 ']'
00:06:05.580 02:26:13 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 2828775
00:06:05.580 02:26:13 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname
00:06:05.580 02:26:13 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:05.580 02:26:13 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2828775
00:06:05.580 02:26:13 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:05.580 02:26:13 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:05.580 02:26:13 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2828775'
00:06:05.580 killing process with pid 2828775
00:06:05.580 02:26:13 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 2828775
00:06:05.580 02:26:13 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 2828775
00:06:08.108 02:26:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2828775
00:06:08.108 02:26:16 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0
00:06:08.108 02:26:16 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2828775
00:06:08.108 02:26:16 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:06:08.108 02:26:16 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:08.108 02:26:16 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:06:08.108 02:26:16 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:08.108 02:26:16 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 2828775
00:06:08.108 02:26:16 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2828775 ']'
00:06:08.108 02:26:16 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:08.108 02:26:16 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:08.108 02:26:16 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:08.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:08.108 02:26:16 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:08.108 02:26:16 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:06:08.108 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2828775) - No such process
00:06:08.108 ERROR: process (pid: 2828775) is no longer running
00:06:08.108 02:26:16 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:08.108 02:26:16 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1
00:06:08.108 02:26:16 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1
00:06:08.108 02:26:16 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:06:08.108 02:26:16 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:06:08.108 02:26:16 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:06:08.108 02:26:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
00:06:08.108 02:26:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
00:06:08.108 02:26:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
00:06:08.108 02:26:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:06:08.108
00:06:08.108 real 0m3.977s
00:06:08.108 user 0m4.025s
00:06:08.108 sys 0m0.695s
00:06:08.108 02:26:16 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:08.108 02:26:16 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:06:08.108 ************************************
00:06:08.108 END TEST default_locks
00:06:08.108 ************************************
00:06:08.108 02:26:16 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:06:08.108 02:26:16 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:08.108 02:26:16 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:08.108 02:26:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:08.108 ************************************
00:06:08.108 START TEST default_locks_via_rpc
00:06:08.108 ************************************
00:06:08.108 02:26:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc
00:06:08.108 02:26:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2829332
00:06:08.108 02:26:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:06:08.108 02:26:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2829332
00:06:08.108 02:26:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2829332 ']'
00:06:08.108 02:26:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:08.108 02:26:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:08.108 02:26:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:08.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:08.108 02:26:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:08.108 02:26:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:08.108 [2024-11-17 02:26:16.544940] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization...
00:06:08.108 [2024-11-17 02:26:16.545144] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2829332 ]
00:06:08.367 [2024-11-17 02:26:16.681375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:08.367 [2024-11-17 02:26:16.813756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:09.300 02:26:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:09.300 02:26:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:06:09.300 02:26:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:06:09.300 02:26:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:09.300 02:26:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:09.300 02:26:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:09.300 02:26:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:06:09.300 02:26:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:06:09.300 02:26:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:06:09.300 02:26:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:06:09.300 02:26:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:06:09.300 02:26:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:09.300 02:26:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:09.300 02:26:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:09.300 02:26:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2829332
00:06:09.300 02:26:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2829332
00:06:09.300 02:26:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:09.558 02:26:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2829332
00:06:09.558 02:26:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 2829332 ']'
00:06:09.558 02:26:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 2829332
00:06:09.558 02:26:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname
00:06:09.559 02:26:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:09.559 02:26:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2829332
00:06:09.559 02:26:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:09.559 02:26:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:09.559 02:26:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2829332'
00:06:09.559 killing process with pid 2829332
00:06:09.559 02:26:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 2829332
00:06:09.559 02:26:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 2829332
00:06:12.221
00:06:12.221 real 0m3.979s
00:06:12.221 user 0m3.987s
00:06:12.221 sys 0m0.719s
00:06:12.221 02:26:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:12.221 02:26:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:12.221 ************************************
00:06:12.221 END TEST default_locks_via_rpc
00:06:12.221 ************************************
00:06:12.221 02:26:20 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:06:12.221 02:26:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:12.221 02:26:20 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:12.221 02:26:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:12.221 ************************************
00:06:12.221 START TEST non_locking_app_on_locked_coremask
00:06:12.221 ************************************
00:06:12.221 02:26:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask
00:06:12.221 02:26:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2829772
00:06:12.221 02:26:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:06:12.221 02:26:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2829772 /var/tmp/spdk.sock
00:06:12.221 02:26:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2829772 ']'
00:06:12.221 02:26:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:12.221 02:26:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:12.221 02:26:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:12.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:12.221 02:26:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:12.221 02:26:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:12.221 [2024-11-17 02:26:20.572004] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization...
00:06:12.221 [2024-11-17 02:26:20.572224] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2829772 ]
00:06:12.479 [2024-11-17 02:26:20.716437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:12.479 [2024-11-17 02:26:20.849284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:13.415 02:26:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:13.415 02:26:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:06:13.415 02:26:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2829924
00:06:13.415 02:26:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:06:13.416 02:26:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2829924 /var/tmp/spdk2.sock
00:06:13.416 02:26:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2829924 ']'
00:06:13.416 02:26:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:13.416 02:26:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:13.416 02:26:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:13.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:13.416 02:26:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:13.416 02:26:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:13.416 [2024-11-17 02:26:21.872549] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization...
00:06:13.416 [2024-11-17 02:26:21.872696] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2829924 ]
00:06:13.674 [2024-11-17 02:26:22.083749] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:13.674 [2024-11-17 02:26:22.083840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:13.932 [2024-11-17 02:26:22.363675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:16.461 02:26:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:16.461 02:26:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:06:16.461 02:26:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2829772
00:06:16.461 02:26:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2829772
00:06:16.461 02:26:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:16.720 lslocks: write error
00:06:16.720 02:26:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2829772
00:06:16.720 02:26:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2829772 ']'
00:06:16.720 02:26:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2829772
00:06:16.720 02:26:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:06:16.720 02:26:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:16.720 02:26:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2829772
00:06:16.720 02:26:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:16.720 02:26:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:16.720 02:26:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2829772'
00:06:16.720 killing process with pid 2829772
00:06:16.720 02:26:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2829772
00:06:16.720 02:26:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2829772
00:06:21.997 02:26:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2829924
00:06:21.997 02:26:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2829924 ']'
00:06:21.997 02:26:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2829924
00:06:21.997 02:26:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:06:21.997 02:26:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:21.997 02:26:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2829924
00:06:21.997 02:26:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:21.997 02:26:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:21.997 02:26:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2829924'
00:06:21.997 killing process with pid 2829924
00:06:21.997 02:26:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2829924
00:06:21.997 02:26:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2829924
00:06:24.527
00:06:24.527 real 0m11.922s
00:06:24.527 user 0m12.302s
00:06:24.527 sys 0m1.495s
00:06:24.527 02:26:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:24.527 02:26:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:24.527 ************************************
00:06:24.527 END TEST non_locking_app_on_locked_coremask
00:06:24.527 ************************************
00:06:24.527 02:26:32 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:06:24.527 02:26:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:24.527 02:26:32 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:24.527 02:26:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:24.527 ************************************
00:06:24.527 START TEST locking_app_on_unlocked_coremask
00:06:24.527 ************************************
00:06:24.527 02:26:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask
00:06:24.527 02:26:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2831263
00:06:24.527 02:26:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:06:24.527 02:26:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2831263 /var/tmp/spdk.sock
00:06:24.527 02:26:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2831263 ']'
00:06:24.527 02:26:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:24.527 02:26:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:24.527 02:26:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:24.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:24.527 02:26:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:24.527 02:26:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:24.527 [2024-11-17 02:26:32.546994] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization...
00:06:24.527 [2024-11-17 02:26:32.547134] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2831263 ]
00:06:24.527 [2024-11-17 02:26:32.688253] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:24.527 [2024-11-17 02:26:32.688309] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:24.527 [2024-11-17 02:26:32.824398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:25.462 02:26:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:25.462 02:26:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:06:25.462 02:26:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2831406
00:06:25.462 02:26:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:06:25.462 02:26:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2831406 /var/tmp/spdk2.sock
00:06:25.462 02:26:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2831406 ']'
00:06:25.462 02:26:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:25.462 02:26:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:25.462 02:26:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:25.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:25.462 02:26:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:25.462 02:26:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:25.462 [2024-11-17 02:26:33.875594] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization...
00:06:25.462 [2024-11-17 02:26:33.875728] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2831406 ]
00:06:25.721 [2024-11-17 02:26:34.088743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:25.980 [2024-11-17 02:26:34.368368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:28.511 02:26:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:28.511 02:26:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:06:28.511 02:26:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2831406
00:06:28.511 02:26:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2831406
00:06:28.511 02:26:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:28.511 lslocks: write error
00:06:28.511 02:26:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2831263
00:06:28.511 02:26:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2831263 ']'
00:06:28.511 02:26:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2831263
00:06:28.511 02:26:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:06:28.511 02:26:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:28.511 02:26:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2831263
00:06:28.511 02:26:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:28.511 02:26:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:28.511 02:26:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2831263'
00:06:28.511 killing process with pid 2831263
00:06:28.511 02:26:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2831263
00:06:28.511 02:26:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2831263
00:06:33.780 02:26:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2831406
00:06:33.780 02:26:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2831406 ']'
00:06:33.780 02:26:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2831406
00:06:33.780 02:26:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:06:33.780 02:26:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:33.780 02:26:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2831406
00:06:33.780 02:26:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:33.780 02:26:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:33.780 02:26:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2831406'
00:06:33.780 killing process with pid 2831406
00:06:33.780 02:26:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2831406
00:06:33.780 02:26:41
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2831406 00:06:36.311 00:06:36.311 real 0m11.830s 00:06:36.311 user 0m12.189s 00:06:36.311 sys 0m1.439s 00:06:36.311 02:26:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:36.311 02:26:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:36.311 ************************************ 00:06:36.311 END TEST locking_app_on_unlocked_coremask 00:06:36.311 ************************************ 00:06:36.311 02:26:44 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:36.311 02:26:44 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:36.311 02:26:44 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:36.311 02:26:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:36.311 ************************************ 00:06:36.311 START TEST locking_app_on_locked_coremask 00:06:36.311 ************************************ 00:06:36.311 02:26:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:36.311 02:26:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2832644 00:06:36.311 02:26:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:36.311 02:26:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2832644 /var/tmp/spdk.sock 00:06:36.311 02:26:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2832644 ']' 00:06:36.311 02:26:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:06:36.311 02:26:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:36.311 02:26:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.311 02:26:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:36.311 02:26:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:36.311 [2024-11-17 02:26:44.426581] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:36.311 [2024-11-17 02:26:44.426717] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2832644 ] 00:06:36.311 [2024-11-17 02:26:44.571389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.311 [2024-11-17 02:26:44.697842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.246 02:26:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:37.246 02:26:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:37.246 02:26:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2832900 00:06:37.246 02:26:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:37.246 02:26:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2832900 /var/tmp/spdk2.sock 
00:06:37.246 02:26:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:37.246 02:26:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2832900 /var/tmp/spdk2.sock 00:06:37.246 02:26:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:37.246 02:26:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:37.246 02:26:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:37.246 02:26:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:37.246 02:26:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2832900 /var/tmp/spdk2.sock 00:06:37.246 02:26:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2832900 ']' 00:06:37.246 02:26:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:37.246 02:26:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:37.246 02:26:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:37.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:37.246 02:26:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:37.246 02:26:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:37.505 [2024-11-17 02:26:45.754918] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:37.505 [2024-11-17 02:26:45.755057] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2832900 ] 00:06:37.505 [2024-11-17 02:26:45.948185] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2832644 has claimed it. 00:06:37.505 [2024-11-17 02:26:45.948284] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:38.071 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2832900) - No such process 00:06:38.071 ERROR: process (pid: 2832900) is no longer running 00:06:38.071 02:26:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:38.071 02:26:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:38.071 02:26:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:38.071 02:26:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:38.071 02:26:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:38.071 02:26:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:38.071 02:26:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2832644 00:06:38.071 02:26:46 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2832644 00:06:38.071 02:26:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:38.329 lslocks: write error 00:06:38.329 02:26:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2832644 00:06:38.329 02:26:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2832644 ']' 00:06:38.329 02:26:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2832644 00:06:38.329 02:26:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:38.329 02:26:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:38.329 02:26:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2832644 00:06:38.329 02:26:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:38.329 02:26:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:38.329 02:26:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2832644' 00:06:38.329 killing process with pid 2832644 00:06:38.329 02:26:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2832644 00:06:38.329 02:26:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2832644 00:06:40.860 00:06:40.860 real 0m4.890s 00:06:40.860 user 0m5.114s 00:06:40.860 sys 0m0.967s 00:06:40.860 02:26:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:40.860 02:26:49 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:06:40.860 ************************************ 00:06:40.860 END TEST locking_app_on_locked_coremask 00:06:40.860 ************************************ 00:06:40.860 02:26:49 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:40.860 02:26:49 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:40.860 02:26:49 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:40.860 02:26:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:40.860 ************************************ 00:06:40.860 START TEST locking_overlapped_coremask 00:06:40.860 ************************************ 00:06:40.860 02:26:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:40.860 02:26:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2833337 00:06:40.860 02:26:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:40.860 02:26:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2833337 /var/tmp/spdk.sock 00:06:40.860 02:26:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2833337 ']' 00:06:40.860 02:26:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.860 02:26:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:40.860 02:26:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:40.860 02:26:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:40.860 02:26:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:41.120 [2024-11-17 02:26:49.370779] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:41.120 [2024-11-17 02:26:49.370921] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2833337 ] 00:06:41.120 [2024-11-17 02:26:49.517956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:41.379 [2024-11-17 02:26:49.663711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:41.379 [2024-11-17 02:26:49.663765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.379 [2024-11-17 02:26:49.663770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:42.315 02:26:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:42.315 02:26:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:42.315 02:26:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2833475 00:06:42.315 02:26:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:42.315 02:26:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2833475 /var/tmp/spdk2.sock 00:06:42.315 02:26:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:42.315 02:26:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg 
waitforlisten 2833475 /var/tmp/spdk2.sock 00:06:42.315 02:26:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:42.315 02:26:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:42.315 02:26:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:42.315 02:26:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:42.315 02:26:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2833475 /var/tmp/spdk2.sock 00:06:42.315 02:26:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2833475 ']' 00:06:42.315 02:26:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:42.315 02:26:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:42.315 02:26:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:42.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:42.315 02:26:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:42.315 02:26:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:42.315 [2024-11-17 02:26:50.696878] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:06:42.315 [2024-11-17 02:26:50.697019] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2833475 ] 00:06:42.573 [2024-11-17 02:26:50.895289] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2833337 has claimed it. 00:06:42.573 [2024-11-17 02:26:50.895387] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:43.140 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2833475) - No such process 00:06:43.140 ERROR: process (pid: 2833475) is no longer running 00:06:43.140 02:26:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:43.140 02:26:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:43.140 02:26:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:43.140 02:26:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:43.140 02:26:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:43.140 02:26:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:43.140 02:26:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:43.140 02:26:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:43.140 02:26:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:43.141 02:26:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ 
/var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:43.141 02:26:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2833337 00:06:43.141 02:26:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 2833337 ']' 00:06:43.141 02:26:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 2833337 00:06:43.141 02:26:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:43.141 02:26:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:43.141 02:26:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2833337 00:06:43.141 02:26:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:43.141 02:26:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:43.141 02:26:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2833337' 00:06:43.141 killing process with pid 2833337 00:06:43.141 02:26:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 2833337 00:06:43.141 02:26:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 2833337 00:06:45.670 00:06:45.670 real 0m4.321s 00:06:45.670 user 0m11.739s 00:06:45.670 sys 0m0.779s 00:06:45.670 02:26:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:45.670 02:26:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:45.670 
************************************ 00:06:45.670 END TEST locking_overlapped_coremask 00:06:45.670 ************************************ 00:06:45.670 02:26:53 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:45.670 02:26:53 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:45.670 02:26:53 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:45.670 02:26:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:45.670 ************************************ 00:06:45.670 START TEST locking_overlapped_coremask_via_rpc 00:06:45.670 ************************************ 00:06:45.670 02:26:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:45.670 02:26:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2833898 00:06:45.670 02:26:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:45.670 02:26:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2833898 /var/tmp/spdk.sock 00:06:45.670 02:26:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2833898 ']' 00:06:45.670 02:26:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.670 02:26:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:45.670 02:26:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:45.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.670 02:26:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:45.670 02:26:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:45.670 [2024-11-17 02:26:53.741634] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:45.670 [2024-11-17 02:26:53.741766] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2833898 ] 00:06:45.670 [2024-11-17 02:26:53.884351] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:45.670 [2024-11-17 02:26:53.884426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:45.670 [2024-11-17 02:26:54.030364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:45.670 [2024-11-17 02:26:54.030420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.670 [2024-11-17 02:26:54.030431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:46.606 02:26:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:46.606 02:26:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:46.606 02:26:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2834047 00:06:46.606 02:26:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2834047 /var/tmp/spdk2.sock 00:06:46.606 02:26:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r 
/var/tmp/spdk2.sock --disable-cpumask-locks 00:06:46.606 02:26:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2834047 ']' 00:06:46.606 02:26:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:46.606 02:26:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:46.606 02:26:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:46.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:46.606 02:26:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:46.606 02:26:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.864 [2024-11-17 02:26:55.088335] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:46.865 [2024-11-17 02:26:55.088499] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2834047 ] 00:06:46.865 [2024-11-17 02:26:55.280888] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:46.865 [2024-11-17 02:26:55.280975] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:47.123 [2024-11-17 02:26:55.542209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:47.123 [2024-11-17 02:26:55.542251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:47.123 [2024-11-17 02:26:55.542261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:49.654 02:26:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:49.654 02:26:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:49.654 02:26:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:49.654 02:26:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.654 02:26:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.654 02:26:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.654 02:26:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:49.654 02:26:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:49.654 02:26:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:49.654 02:26:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:49.654 02:26:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:49.654 02:26:57 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:49.654 02:26:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:49.654 02:26:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:49.654 02:26:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.654 02:26:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.654 [2024-11-17 02:26:57.792285] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2833898 has claimed it. 00:06:49.654 request: 00:06:49.654 { 00:06:49.654 "method": "framework_enable_cpumask_locks", 00:06:49.654 "req_id": 1 00:06:49.654 } 00:06:49.654 Got JSON-RPC error response 00:06:49.654 response: 00:06:49.654 { 00:06:49.654 "code": -32603, 00:06:49.654 "message": "Failed to claim CPU core: 2" 00:06:49.654 } 00:06:49.654 02:26:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:49.654 02:26:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:49.654 02:26:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:49.654 02:26:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:49.654 02:26:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:49.654 02:26:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2833898 /var/tmp/spdk.sock 00:06:49.654 02:26:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 
-- # '[' -z 2833898 ']' 00:06:49.654 02:26:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.654 02:26:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:49.654 02:26:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.654 02:26:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:49.654 02:26:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.654 02:26:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:49.654 02:26:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:49.654 02:26:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2834047 /var/tmp/spdk2.sock 00:06:49.654 02:26:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2834047 ']' 00:06:49.654 02:26:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:49.654 02:26:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:49.654 02:26:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:49.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:49.654 02:26:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:49.654 02:26:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.911 02:26:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:49.911 02:26:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:49.911 02:26:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:49.911 02:26:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:49.911 02:26:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:49.912 02:26:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:49.912 00:06:49.912 real 0m4.722s 00:06:49.912 user 0m1.596s 00:06:49.912 sys 0m0.258s 00:06:49.912 02:26:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:49.912 02:26:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.912 ************************************ 00:06:49.912 END TEST locking_overlapped_coremask_via_rpc 00:06:49.912 ************************************ 00:06:50.169 02:26:58 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:50.169 02:26:58 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2833898 ]] 00:06:50.169 02:26:58 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 2833898 00:06:50.169 02:26:58 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2833898 ']' 00:06:50.169 02:26:58 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2833898 00:06:50.169 02:26:58 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:50.169 02:26:58 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:50.169 02:26:58 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2833898 00:06:50.169 02:26:58 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:50.169 02:26:58 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:50.169 02:26:58 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2833898' 00:06:50.169 killing process with pid 2833898 00:06:50.169 02:26:58 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2833898 00:06:50.169 02:26:58 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2833898 00:06:52.697 02:27:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2834047 ]] 00:06:52.697 02:27:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2834047 00:06:52.697 02:27:00 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2834047 ']' 00:06:52.697 02:27:00 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2834047 00:06:52.697 02:27:00 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:52.697 02:27:00 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:52.697 02:27:00 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2834047 00:06:52.697 02:27:00 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:52.697 02:27:00 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:52.697 02:27:00 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
2834047' 00:06:52.697 killing process with pid 2834047 00:06:52.697 02:27:00 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2834047 00:06:52.697 02:27:00 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2834047 00:06:54.598 02:27:02 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:54.598 02:27:02 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:54.598 02:27:02 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2833898 ]] 00:06:54.598 02:27:02 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2833898 00:06:54.598 02:27:02 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2833898 ']' 00:06:54.598 02:27:02 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2833898 00:06:54.598 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2833898) - No such process 00:06:54.598 02:27:02 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2833898 is not found' 00:06:54.598 Process with pid 2833898 is not found 00:06:54.598 02:27:02 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2834047 ]] 00:06:54.598 02:27:02 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2834047 00:06:54.598 02:27:02 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2834047 ']' 00:06:54.598 02:27:02 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2834047 00:06:54.598 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2834047) - No such process 00:06:54.598 02:27:02 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2834047 is not found' 00:06:54.598 Process with pid 2834047 is not found 00:06:54.598 02:27:02 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:54.598 00:06:54.598 real 0m50.635s 00:06:54.598 user 1m26.590s 00:06:54.598 sys 0m7.646s 00:06:54.598 02:27:02 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:54.598 
02:27:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:54.598 ************************************ 00:06:54.598 END TEST cpu_locks 00:06:54.598 ************************************ 00:06:54.598 00:06:54.598 real 1m20.181s 00:06:54.598 user 2m24.707s 00:06:54.598 sys 0m12.244s 00:06:54.598 02:27:02 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:54.598 02:27:02 event -- common/autotest_common.sh@10 -- # set +x 00:06:54.598 ************************************ 00:06:54.598 END TEST event 00:06:54.598 ************************************ 00:06:54.598 02:27:02 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:54.598 02:27:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:54.598 02:27:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:54.598 02:27:02 -- common/autotest_common.sh@10 -- # set +x 00:06:54.598 ************************************ 00:06:54.598 START TEST thread 00:06:54.598 ************************************ 00:06:54.598 02:27:02 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:54.598 * Looking for test storage... 
00:06:54.598 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:54.598 02:27:02 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:54.598 02:27:02 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:54.598 02:27:02 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:54.863 02:27:03 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:54.863 02:27:03 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:54.863 02:27:03 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:54.863 02:27:03 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:54.863 02:27:03 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:54.863 02:27:03 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:54.863 02:27:03 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:54.863 02:27:03 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:54.863 02:27:03 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:54.863 02:27:03 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:54.863 02:27:03 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:54.863 02:27:03 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:54.863 02:27:03 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:54.863 02:27:03 thread -- scripts/common.sh@345 -- # : 1 00:06:54.863 02:27:03 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:54.863 02:27:03 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:54.863 02:27:03 thread -- scripts/common.sh@365 -- # decimal 1 00:06:54.863 02:27:03 thread -- scripts/common.sh@353 -- # local d=1 00:06:54.863 02:27:03 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:54.863 02:27:03 thread -- scripts/common.sh@355 -- # echo 1 00:06:54.863 02:27:03 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:54.863 02:27:03 thread -- scripts/common.sh@366 -- # decimal 2 00:06:54.863 02:27:03 thread -- scripts/common.sh@353 -- # local d=2 00:06:54.863 02:27:03 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:54.863 02:27:03 thread -- scripts/common.sh@355 -- # echo 2 00:06:54.863 02:27:03 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:54.863 02:27:03 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:54.863 02:27:03 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:54.863 02:27:03 thread -- scripts/common.sh@368 -- # return 0 00:06:54.863 02:27:03 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:54.863 02:27:03 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:54.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.863 --rc genhtml_branch_coverage=1 00:06:54.863 --rc genhtml_function_coverage=1 00:06:54.863 --rc genhtml_legend=1 00:06:54.863 --rc geninfo_all_blocks=1 00:06:54.863 --rc geninfo_unexecuted_blocks=1 00:06:54.863 00:06:54.863 ' 00:06:54.863 02:27:03 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:54.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.863 --rc genhtml_branch_coverage=1 00:06:54.863 --rc genhtml_function_coverage=1 00:06:54.863 --rc genhtml_legend=1 00:06:54.863 --rc geninfo_all_blocks=1 00:06:54.863 --rc geninfo_unexecuted_blocks=1 00:06:54.863 00:06:54.863 ' 00:06:54.863 02:27:03 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:54.863 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.863 --rc genhtml_branch_coverage=1 00:06:54.863 --rc genhtml_function_coverage=1 00:06:54.863 --rc genhtml_legend=1 00:06:54.863 --rc geninfo_all_blocks=1 00:06:54.863 --rc geninfo_unexecuted_blocks=1 00:06:54.863 00:06:54.863 ' 00:06:54.863 02:27:03 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:54.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.863 --rc genhtml_branch_coverage=1 00:06:54.863 --rc genhtml_function_coverage=1 00:06:54.863 --rc genhtml_legend=1 00:06:54.863 --rc geninfo_all_blocks=1 00:06:54.863 --rc geninfo_unexecuted_blocks=1 00:06:54.863 00:06:54.863 ' 00:06:54.863 02:27:03 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:54.863 02:27:03 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:54.863 02:27:03 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:54.863 02:27:03 thread -- common/autotest_common.sh@10 -- # set +x 00:06:54.863 ************************************ 00:06:54.863 START TEST thread_poller_perf 00:06:54.863 ************************************ 00:06:54.863 02:27:03 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:54.863 [2024-11-17 02:27:03.140414] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:06:54.863 [2024-11-17 02:27:03.140555] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2835200 ] 00:06:54.863 [2024-11-17 02:27:03.281593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.171 [2024-11-17 02:27:03.418186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.171 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:56.570 [2024-11-17T01:27:05.030Z] ====================================== 00:06:56.570 [2024-11-17T01:27:05.030Z] busy:2715515284 (cyc) 00:06:56.570 [2024-11-17T01:27:05.030Z] total_run_count: 282000 00:06:56.570 [2024-11-17T01:27:05.030Z] tsc_hz: 2700000000 (cyc) 00:06:56.570 [2024-11-17T01:27:05.030Z] ====================================== 00:06:56.570 [2024-11-17T01:27:05.030Z] poller_cost: 9629 (cyc), 3566 (nsec) 00:06:56.570 00:06:56.570 real 0m1.573s 00:06:56.570 user 0m1.440s 00:06:56.570 sys 0m0.125s 00:06:56.570 02:27:04 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:56.570 02:27:04 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:56.570 ************************************ 00:06:56.570 END TEST thread_poller_perf 00:06:56.570 ************************************ 00:06:56.570 02:27:04 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:56.570 02:27:04 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:56.570 02:27:04 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:56.570 02:27:04 thread -- common/autotest_common.sh@10 -- # set +x 00:06:56.570 ************************************ 00:06:56.570 START TEST thread_poller_perf 00:06:56.570 
************************************ 00:06:56.570 02:27:04 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:56.570 [2024-11-17 02:27:04.763797] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:56.570 [2024-11-17 02:27:04.763922] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2835413 ] 00:06:56.570 [2024-11-17 02:27:04.909691] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.829 [2024-11-17 02:27:05.048054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.829 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:58.204 [2024-11-17T01:27:06.664Z] ====================================== 00:06:58.204 [2024-11-17T01:27:06.664Z] busy:2705328803 (cyc) 00:06:58.204 [2024-11-17T01:27:06.664Z] total_run_count: 3649000 00:06:58.204 [2024-11-17T01:27:06.664Z] tsc_hz: 2700000000 (cyc) 00:06:58.204 [2024-11-17T01:27:06.664Z] ====================================== 00:06:58.204 [2024-11-17T01:27:06.664Z] poller_cost: 741 (cyc), 274 (nsec) 00:06:58.204 00:06:58.204 real 0m1.573s 00:06:58.204 user 0m1.419s 00:06:58.204 sys 0m0.146s 00:06:58.204 02:27:06 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:58.204 02:27:06 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:58.204 ************************************ 00:06:58.204 END TEST thread_poller_perf 00:06:58.204 ************************************ 00:06:58.204 02:27:06 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:58.204 00:06:58.204 real 0m3.386s 00:06:58.204 user 0m3.004s 00:06:58.204 sys 0m0.381s 00:06:58.204 02:27:06 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:58.204 02:27:06 thread -- common/autotest_common.sh@10 -- # set +x 00:06:58.204 ************************************ 00:06:58.204 END TEST thread 00:06:58.204 ************************************ 00:06:58.204 02:27:06 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:58.204 02:27:06 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:58.204 02:27:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:58.204 02:27:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:58.204 02:27:06 -- common/autotest_common.sh@10 -- # set +x 00:06:58.204 ************************************ 00:06:58.204 START TEST app_cmdline 00:06:58.204 ************************************ 00:06:58.204 02:27:06 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:58.204 * Looking for test storage... 00:06:58.204 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:58.204 02:27:06 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:58.204 02:27:06 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:06:58.204 02:27:06 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:58.204 02:27:06 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:58.204 02:27:06 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:58.204 02:27:06 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:58.204 02:27:06 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:58.204 02:27:06 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:58.204 02:27:06 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:58.204 02:27:06 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:58.204 02:27:06 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:06:58.204 02:27:06 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:58.204 02:27:06 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:58.204 02:27:06 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:58.204 02:27:06 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:58.204 02:27:06 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:58.204 02:27:06 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:58.204 02:27:06 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:58.204 02:27:06 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:58.204 02:27:06 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:58.204 02:27:06 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:58.204 02:27:06 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:58.204 02:27:06 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:58.204 02:27:06 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:58.204 02:27:06 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:58.204 02:27:06 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:58.204 02:27:06 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:58.204 02:27:06 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:58.204 02:27:06 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:58.204 02:27:06 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:58.204 02:27:06 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:58.204 02:27:06 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:58.204 02:27:06 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:58.204 02:27:06 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:58.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.204 --rc genhtml_branch_coverage=1 
00:06:58.204 --rc genhtml_function_coverage=1 00:06:58.204 --rc genhtml_legend=1 00:06:58.204 --rc geninfo_all_blocks=1 00:06:58.204 --rc geninfo_unexecuted_blocks=1 00:06:58.204 00:06:58.204 ' 00:06:58.204 02:27:06 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:58.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.204 --rc genhtml_branch_coverage=1 00:06:58.204 --rc genhtml_function_coverage=1 00:06:58.204 --rc genhtml_legend=1 00:06:58.204 --rc geninfo_all_blocks=1 00:06:58.204 --rc geninfo_unexecuted_blocks=1 00:06:58.204 00:06:58.204 ' 00:06:58.204 02:27:06 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:58.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.204 --rc genhtml_branch_coverage=1 00:06:58.204 --rc genhtml_function_coverage=1 00:06:58.204 --rc genhtml_legend=1 00:06:58.204 --rc geninfo_all_blocks=1 00:06:58.204 --rc geninfo_unexecuted_blocks=1 00:06:58.204 00:06:58.204 ' 00:06:58.204 02:27:06 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:58.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.204 --rc genhtml_branch_coverage=1 00:06:58.204 --rc genhtml_function_coverage=1 00:06:58.204 --rc genhtml_legend=1 00:06:58.204 --rc geninfo_all_blocks=1 00:06:58.204 --rc geninfo_unexecuted_blocks=1 00:06:58.204 00:06:58.204 ' 00:06:58.204 02:27:06 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:58.204 02:27:06 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2835691 00:06:58.204 02:27:06 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:58.204 02:27:06 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2835691 00:06:58.204 02:27:06 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 2835691 ']' 00:06:58.204 02:27:06 app_cmdline -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:06:58.204 02:27:06 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:58.204 02:27:06 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.204 02:27:06 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:58.204 02:27:06 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:58.204 [2024-11-17 02:27:06.621461] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:58.204 [2024-11-17 02:27:06.621606] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2835691 ] 00:06:58.463 [2024-11-17 02:27:06.763646] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.463 [2024-11-17 02:27:06.898626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.838 02:27:07 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:59.838 02:27:07 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:59.838 02:27:07 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:59.838 { 00:06:59.838 "version": "SPDK v25.01-pre git sha1 83e8405e4", 00:06:59.838 "fields": { 00:06:59.838 "major": 25, 00:06:59.838 "minor": 1, 00:06:59.838 "patch": 0, 00:06:59.838 "suffix": "-pre", 00:06:59.838 "commit": "83e8405e4" 00:06:59.838 } 00:06:59.838 } 00:06:59.838 02:27:08 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:59.838 02:27:08 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:59.838 02:27:08 app_cmdline -- app/cmdline.sh@24 -- 
# expected_methods+=("spdk_get_version") 00:06:59.838 02:27:08 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:59.838 02:27:08 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:59.838 02:27:08 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.838 02:27:08 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:59.838 02:27:08 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:59.838 02:27:08 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:59.838 02:27:08 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.838 02:27:08 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:59.838 02:27:08 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:59.838 02:27:08 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:59.838 02:27:08 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:59.838 02:27:08 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:59.838 02:27:08 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:59.838 02:27:08 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:59.838 02:27:08 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:59.838 02:27:08 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:59.838 02:27:08 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:59.838 02:27:08 app_cmdline -- common/autotest_common.sh@644 -- # case 
"$(type -t "$arg")" in 00:06:59.838 02:27:08 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:59.838 02:27:08 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:59.838 02:27:08 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:00.097 request: 00:07:00.097 { 00:07:00.097 "method": "env_dpdk_get_mem_stats", 00:07:00.097 "req_id": 1 00:07:00.097 } 00:07:00.097 Got JSON-RPC error response 00:07:00.097 response: 00:07:00.097 { 00:07:00.097 "code": -32601, 00:07:00.097 "message": "Method not found" 00:07:00.097 } 00:07:00.097 02:27:08 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:07:00.097 02:27:08 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:00.097 02:27:08 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:00.097 02:27:08 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:00.097 02:27:08 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2835691 00:07:00.097 02:27:08 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 2835691 ']' 00:07:00.097 02:27:08 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 2835691 00:07:00.097 02:27:08 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:07:00.097 02:27:08 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:00.097 02:27:08 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2835691 00:07:00.097 02:27:08 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:00.097 02:27:08 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:00.097 02:27:08 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2835691' 00:07:00.097 killing process with pid 2835691 00:07:00.097 
02:27:08 app_cmdline -- common/autotest_common.sh@973 -- # kill 2835691 00:07:00.097 02:27:08 app_cmdline -- common/autotest_common.sh@978 -- # wait 2835691 00:07:02.624 00:07:02.624 real 0m4.506s 00:07:02.624 user 0m4.989s 00:07:02.624 sys 0m0.701s 00:07:02.624 02:27:10 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:02.624 02:27:10 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:02.624 ************************************ 00:07:02.624 END TEST app_cmdline 00:07:02.624 ************************************ 00:07:02.624 02:27:10 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:02.624 02:27:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:02.624 02:27:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:02.624 02:27:10 -- common/autotest_common.sh@10 -- # set +x 00:07:02.624 ************************************ 00:07:02.624 START TEST version 00:07:02.624 ************************************ 00:07:02.624 02:27:10 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:02.624 * Looking for test storage... 
00:07:02.624 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:02.624 02:27:10 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:02.624 02:27:10 version -- common/autotest_common.sh@1693 -- # lcov --version 00:07:02.624 02:27:10 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:02.624 02:27:11 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:02.625 02:27:11 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:02.625 02:27:11 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:02.625 02:27:11 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:02.625 02:27:11 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:02.625 02:27:11 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:02.625 02:27:11 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:02.625 02:27:11 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:02.625 02:27:11 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:02.625 02:27:11 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:02.625 02:27:11 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:02.625 02:27:11 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:02.625 02:27:11 version -- scripts/common.sh@344 -- # case "$op" in 00:07:02.625 02:27:11 version -- scripts/common.sh@345 -- # : 1 00:07:02.625 02:27:11 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:02.625 02:27:11 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:02.625 02:27:11 version -- scripts/common.sh@365 -- # decimal 1 00:07:02.625 02:27:11 version -- scripts/common.sh@353 -- # local d=1 00:07:02.625 02:27:11 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:02.625 02:27:11 version -- scripts/common.sh@355 -- # echo 1 00:07:02.625 02:27:11 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:02.625 02:27:11 version -- scripts/common.sh@366 -- # decimal 2 00:07:02.625 02:27:11 version -- scripts/common.sh@353 -- # local d=2 00:07:02.625 02:27:11 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:02.625 02:27:11 version -- scripts/common.sh@355 -- # echo 2 00:07:02.625 02:27:11 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:02.625 02:27:11 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:02.625 02:27:11 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:02.625 02:27:11 version -- scripts/common.sh@368 -- # return 0 00:07:02.625 02:27:11 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:02.625 02:27:11 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:02.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.625 --rc genhtml_branch_coverage=1 00:07:02.625 --rc genhtml_function_coverage=1 00:07:02.625 --rc genhtml_legend=1 00:07:02.625 --rc geninfo_all_blocks=1 00:07:02.625 --rc geninfo_unexecuted_blocks=1 00:07:02.625 00:07:02.625 ' 00:07:02.625 02:27:11 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:02.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.625 --rc genhtml_branch_coverage=1 00:07:02.625 --rc genhtml_function_coverage=1 00:07:02.625 --rc genhtml_legend=1 00:07:02.625 --rc geninfo_all_blocks=1 00:07:02.625 --rc geninfo_unexecuted_blocks=1 00:07:02.625 00:07:02.625 ' 00:07:02.625 02:27:11 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:02.625 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.625 --rc genhtml_branch_coverage=1 00:07:02.625 --rc genhtml_function_coverage=1 00:07:02.625 --rc genhtml_legend=1 00:07:02.625 --rc geninfo_all_blocks=1 00:07:02.625 --rc geninfo_unexecuted_blocks=1 00:07:02.625 00:07:02.625 ' 00:07:02.625 02:27:11 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:02.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.625 --rc genhtml_branch_coverage=1 00:07:02.625 --rc genhtml_function_coverage=1 00:07:02.625 --rc genhtml_legend=1 00:07:02.625 --rc geninfo_all_blocks=1 00:07:02.625 --rc geninfo_unexecuted_blocks=1 00:07:02.625 00:07:02.625 ' 00:07:02.625 02:27:11 version -- app/version.sh@17 -- # get_header_version major 00:07:02.625 02:27:11 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:02.625 02:27:11 version -- app/version.sh@14 -- # cut -f2 00:07:02.625 02:27:11 version -- app/version.sh@14 -- # tr -d '"' 00:07:02.884 02:27:11 version -- app/version.sh@17 -- # major=25 00:07:02.884 02:27:11 version -- app/version.sh@18 -- # get_header_version minor 00:07:02.884 02:27:11 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:02.884 02:27:11 version -- app/version.sh@14 -- # cut -f2 00:07:02.884 02:27:11 version -- app/version.sh@14 -- # tr -d '"' 00:07:02.884 02:27:11 version -- app/version.sh@18 -- # minor=1 00:07:02.884 02:27:11 version -- app/version.sh@19 -- # get_header_version patch 00:07:02.884 02:27:11 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:02.884 02:27:11 version -- app/version.sh@14 -- # cut -f2 00:07:02.884 02:27:11 version -- app/version.sh@14 -- # tr -d '"' 00:07:02.884 
02:27:11 version -- app/version.sh@19 -- # patch=0 00:07:02.884 02:27:11 version -- app/version.sh@20 -- # get_header_version suffix 00:07:02.884 02:27:11 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:02.884 02:27:11 version -- app/version.sh@14 -- # cut -f2 00:07:02.884 02:27:11 version -- app/version.sh@14 -- # tr -d '"' 00:07:02.884 02:27:11 version -- app/version.sh@20 -- # suffix=-pre 00:07:02.884 02:27:11 version -- app/version.sh@22 -- # version=25.1 00:07:02.884 02:27:11 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:02.884 02:27:11 version -- app/version.sh@28 -- # version=25.1rc0 00:07:02.884 02:27:11 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:02.884 02:27:11 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:02.884 02:27:11 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:02.884 02:27:11 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:02.884 00:07:02.884 real 0m0.198s 00:07:02.884 user 0m0.131s 00:07:02.884 sys 0m0.092s 00:07:02.884 02:27:11 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:02.884 02:27:11 version -- common/autotest_common.sh@10 -- # set +x 00:07:02.884 ************************************ 00:07:02.884 END TEST version 00:07:02.884 ************************************ 00:07:02.884 02:27:11 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:02.884 02:27:11 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:02.884 02:27:11 -- spdk/autotest.sh@194 -- # uname -s 00:07:02.884 02:27:11 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:07:02.884 02:27:11 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:02.884 02:27:11 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:02.884 02:27:11 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:07:02.884 02:27:11 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:02.884 02:27:11 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:02.884 02:27:11 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:02.884 02:27:11 -- common/autotest_common.sh@10 -- # set +x 00:07:02.884 02:27:11 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:02.884 02:27:11 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:07:02.884 02:27:11 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:07:02.884 02:27:11 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:07:02.884 02:27:11 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:07:02.884 02:27:11 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:07:02.884 02:27:11 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:02.884 02:27:11 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:02.884 02:27:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:02.884 02:27:11 -- common/autotest_common.sh@10 -- # set +x 00:07:02.884 ************************************ 00:07:02.884 START TEST nvmf_tcp 00:07:02.884 ************************************ 00:07:02.884 02:27:11 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:02.884 * Looking for test storage... 
00:07:02.884 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:02.884 02:27:11 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:02.884 02:27:11 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:07:02.884 02:27:11 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:02.884 02:27:11 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:02.884 02:27:11 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:02.884 02:27:11 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:02.884 02:27:11 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:02.884 02:27:11 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:02.884 02:27:11 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:02.884 02:27:11 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:02.884 02:27:11 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:02.884 02:27:11 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:02.884 02:27:11 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:02.884 02:27:11 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:02.884 02:27:11 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:02.884 02:27:11 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:02.884 02:27:11 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:07:02.884 02:27:11 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:02.884 02:27:11 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:03.144 02:27:11 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:03.144 02:27:11 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:07:03.144 02:27:11 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:03.144 02:27:11 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:07:03.144 02:27:11 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:03.144 02:27:11 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:03.144 02:27:11 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:07:03.144 02:27:11 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:03.144 02:27:11 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:07:03.144 02:27:11 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:03.144 02:27:11 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:03.144 02:27:11 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:03.144 02:27:11 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:07:03.144 02:27:11 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:03.144 02:27:11 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:03.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.144 --rc genhtml_branch_coverage=1 00:07:03.144 --rc genhtml_function_coverage=1 00:07:03.144 --rc genhtml_legend=1 00:07:03.144 --rc geninfo_all_blocks=1 00:07:03.144 --rc geninfo_unexecuted_blocks=1 00:07:03.144 00:07:03.144 ' 00:07:03.144 02:27:11 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:03.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.144 --rc genhtml_branch_coverage=1 00:07:03.144 --rc genhtml_function_coverage=1 00:07:03.144 --rc genhtml_legend=1 00:07:03.144 --rc geninfo_all_blocks=1 00:07:03.144 --rc geninfo_unexecuted_blocks=1 00:07:03.144 00:07:03.144 ' 00:07:03.144 02:27:11 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:07:03.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.144 --rc genhtml_branch_coverage=1 00:07:03.144 --rc genhtml_function_coverage=1 00:07:03.144 --rc genhtml_legend=1 00:07:03.144 --rc geninfo_all_blocks=1 00:07:03.144 --rc geninfo_unexecuted_blocks=1 00:07:03.144 00:07:03.144 ' 00:07:03.144 02:27:11 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:03.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.144 --rc genhtml_branch_coverage=1 00:07:03.144 --rc genhtml_function_coverage=1 00:07:03.144 --rc genhtml_legend=1 00:07:03.144 --rc geninfo_all_blocks=1 00:07:03.144 --rc geninfo_unexecuted_blocks=1 00:07:03.144 00:07:03.144 ' 00:07:03.144 02:27:11 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:03.144 02:27:11 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:03.144 02:27:11 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:03.144 02:27:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:03.144 02:27:11 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:03.144 02:27:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:03.144 ************************************ 00:07:03.144 START TEST nvmf_target_core 00:07:03.144 ************************************ 00:07:03.144 02:27:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:03.144 * Looking for test storage... 
00:07:03.144 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:03.144 02:27:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:03.144 02:27:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:07:03.144 02:27:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:03.144 02:27:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:03.144 02:27:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:03.144 02:27:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:03.144 02:27:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:03.144 02:27:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:07:03.144 02:27:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:07:03.144 02:27:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:07:03.144 02:27:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:07:03.144 02:27:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:07:03.144 02:27:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:07:03.144 02:27:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:07:03.144 02:27:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:03.144 02:27:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:07:03.144 02:27:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:07:03.144 02:27:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:03.144 02:27:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:03.144 02:27:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:07:03.145 02:27:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:07:03.145 02:27:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:03.145 02:27:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:07:03.145 02:27:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:07:03.145 02:27:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:07:03.145 02:27:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:07:03.145 02:27:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:03.145 02:27:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:07:03.145 02:27:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:07:03.145 02:27:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:03.145 02:27:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:03.145 02:27:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:07:03.145 02:27:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:03.145 02:27:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:03.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.145 --rc genhtml_branch_coverage=1 00:07:03.145 --rc genhtml_function_coverage=1 00:07:03.145 --rc genhtml_legend=1 00:07:03.145 --rc geninfo_all_blocks=1 00:07:03.145 --rc geninfo_unexecuted_blocks=1 00:07:03.145 00:07:03.145 ' 00:07:03.145 02:27:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:03.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.145 --rc genhtml_branch_coverage=1 
00:07:03.145 --rc genhtml_function_coverage=1 00:07:03.145 --rc genhtml_legend=1 00:07:03.145 --rc geninfo_all_blocks=1 00:07:03.145 --rc geninfo_unexecuted_blocks=1 00:07:03.145 00:07:03.145 ' 00:07:03.145 02:27:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:03.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.145 --rc genhtml_branch_coverage=1 00:07:03.145 --rc genhtml_function_coverage=1 00:07:03.145 --rc genhtml_legend=1 00:07:03.145 --rc geninfo_all_blocks=1 00:07:03.145 --rc geninfo_unexecuted_blocks=1 00:07:03.145 00:07:03.145 ' 00:07:03.145 02:27:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:03.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.145 --rc genhtml_branch_coverage=1 00:07:03.145 --rc genhtml_function_coverage=1 00:07:03.145 --rc genhtml_legend=1 00:07:03.145 --rc geninfo_all_blocks=1 00:07:03.145 --rc geninfo_unexecuted_blocks=1 00:07:03.145 00:07:03.145 ' 00:07:03.145 02:27:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:03.145 02:27:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:03.145 02:27:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:03.145 02:27:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:03.145 02:27:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:03.145 02:27:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:03.145 02:27:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:03.145 02:27:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:03.145 02:27:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:03.145 02:27:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:03.145 02:27:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:03.145 02:27:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:03.145 02:27:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:03.145 02:27:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:03.145 02:27:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:03.145 02:27:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:03.145 02:27:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:03.145 02:27:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:03.145 02:27:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:03.145 02:27:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:03.145 02:27:11 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:03.145 02:27:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:07:03.145 02:27:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:03.145 02:27:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:03.145 02:27:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:03.145 02:27:11 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.145 02:27:11 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.145 02:27:11 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.145 02:27:11 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:03.145 02:27:11 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.145 02:27:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:07:03.145 02:27:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:03.145 02:27:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:03.145 02:27:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:03.145 02:27:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:03.145 02:27:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:03.145 02:27:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:03.145 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:03.145 02:27:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
00:07:03.145 02:27:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:03.145 02:27:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:03.145 02:27:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:03.145 02:27:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:03.145 02:27:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:07:03.145 02:27:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:03.145 02:27:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:03.145 02:27:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:03.145 02:27:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:03.145 ************************************ 00:07:03.145 START TEST nvmf_abort 00:07:03.145 ************************************ 00:07:03.145 02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:03.145 * Looking for test storage... 
00:07:03.145 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:03.145 02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:03.145 02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:07:03.145 02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:03.405 02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:03.405 02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:03.405 02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:03.405 02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:03.405 02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:07:03.405 02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:07:03.405 02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:07:03.405 02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:07:03.405 02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:07:03.405 02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:07:03.405 02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:07:03.405 02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:03.405 02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:07:03.405 02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:07:03.405 02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:03.405 
02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:03.405 02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:07:03.405 02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:07:03.405 02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:03.405 02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:07:03.405 02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:07:03.405 02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:07:03.405 02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:07:03.405 02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:03.405 02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:07:03.405 02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:07:03.405 02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:03.405 02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:03.405 02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:07:03.405 02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:03.405 02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:03.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.405 --rc genhtml_branch_coverage=1 00:07:03.405 --rc genhtml_function_coverage=1 00:07:03.405 --rc genhtml_legend=1 00:07:03.405 --rc geninfo_all_blocks=1 00:07:03.405 --rc 
geninfo_unexecuted_blocks=1 00:07:03.405 00:07:03.405 ' 00:07:03.405 02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:03.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.405 --rc genhtml_branch_coverage=1 00:07:03.405 --rc genhtml_function_coverage=1 00:07:03.405 --rc genhtml_legend=1 00:07:03.405 --rc geninfo_all_blocks=1 00:07:03.405 --rc geninfo_unexecuted_blocks=1 00:07:03.405 00:07:03.405 ' 00:07:03.405 02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:03.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.405 --rc genhtml_branch_coverage=1 00:07:03.405 --rc genhtml_function_coverage=1 00:07:03.405 --rc genhtml_legend=1 00:07:03.405 --rc geninfo_all_blocks=1 00:07:03.405 --rc geninfo_unexecuted_blocks=1 00:07:03.405 00:07:03.405 ' 00:07:03.405 02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:03.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.405 --rc genhtml_branch_coverage=1 00:07:03.405 --rc genhtml_function_coverage=1 00:07:03.405 --rc genhtml_legend=1 00:07:03.405 --rc geninfo_all_blocks=1 00:07:03.405 --rc geninfo_unexecuted_blocks=1 00:07:03.405 00:07:03.405 ' 00:07:03.405 02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:03.405 02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:03.405 02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:03.405 02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:03.405 02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:03.405 02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:07:03.405 02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:03.405 02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:03.405 02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:03.405 02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:03.405 02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:03.405 02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:03.405 02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:03.405 02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:03.405 02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:03.405 02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:03.405 02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:03.405 02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:03.405 02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:03.405 02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:07:03.405 02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:03.405 02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:03.405 02:27:11 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:03.405 02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.405 02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.405 02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.405 02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:03.405 02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.405 02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:07:03.405 02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:03.405 02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:03.405 02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:03.406 02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:03.406 02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:03.406 02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:03.406 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:03.406 02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:03.406 02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:03.406 02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:03.406 02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:03.406 02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:03.406 02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:07:03.406 02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:03.406 02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:03.406 02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:03.406 02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:03.406 02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:03.406 02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:03.406 02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:03.406 02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:03.406 02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:03.406 02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:07:03.406 02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:07:03.406 02:27:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:05.307 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:05.307 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:07:05.307 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:05.307 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:05.307 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:05.307 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:05.307 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:05.307 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:07:05.307 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:05.307 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:07:05.307 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:07:05.307 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:07:05.307 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:07:05.307 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:07:05.307 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:07:05.307 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:05.307 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:05.307 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:05.307 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:05.307 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:05.307 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:05.307 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:05.307 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:05.307 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:05.307 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:05.307 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:05.307 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:05.307 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:05.307 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:05.307 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:05.307 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:05.307 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:05.307 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:05.307 02:27:13 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:05.307 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:05.307 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:05.307 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:05.307 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:05.307 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:05.307 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:05.307 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:05.307 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:05.307 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:05.307 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:05.307 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:05.307 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:05.308 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:05.308 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:05.308 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:05.308 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:05.308 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:05.308 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:05.308 02:27:13 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:05.308 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:05.308 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:05.308 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:05.308 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:05.308 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:05.308 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:05.308 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:05.308 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:05.308 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:05.308 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:05.308 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:05.308 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:05.308 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:05.308 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:05.308 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:05.308 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:05.308 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:0a:00.1: cvl_0_1' 00:07:05.308 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:05.308 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:05.308 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:05.308 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:07:05.308 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:05.308 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:05.308 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:05.308 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:05.308 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:05.308 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:05.308 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:05.308 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:05.308 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:05.308 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:05.308 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:05.308 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:05.308 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:05.308 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:07:05.308 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:05.308 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:05.308 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:05.308 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:05.566 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:05.566 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:05.566 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:05.566 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:05.566 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:05.566 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:05.566 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:05.566 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:05.566 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:05.566 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.303 ms 00:07:05.566 00:07:05.566 --- 10.0.0.2 ping statistics --- 00:07:05.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:05.566 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:07:05.566 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:05.566 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:05.567 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:07:05.567 00:07:05.567 --- 10.0.0.1 ping statistics --- 00:07:05.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:05.567 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:07:05.567 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:05.567 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:07:05.567 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:05.567 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:05.567 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:05.567 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:05.567 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:05.567 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:05.567 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:05.567 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:05.567 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:05.567 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@726 -- # xtrace_disable 00:07:05.567 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:05.567 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=2838671 00:07:05.567 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:05.567 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2838671 00:07:05.567 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 2838671 ']' 00:07:05.567 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.567 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:05.567 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.567 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:05.567 02:27:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:05.567 [2024-11-17 02:27:13.967807] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:07:05.567 [2024-11-17 02:27:13.967955] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:05.825 [2024-11-17 02:27:14.133314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:05.825 [2024-11-17 02:27:14.275732] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:05.825 [2024-11-17 02:27:14.275809] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:05.825 [2024-11-17 02:27:14.275835] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:05.825 [2024-11-17 02:27:14.275859] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:05.825 [2024-11-17 02:27:14.275879] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:05.825 [2024-11-17 02:27:14.278527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:05.825 [2024-11-17 02:27:14.278635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:05.825 [2024-11-17 02:27:14.278638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:06.760 02:27:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:06.760 02:27:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:07:06.760 02:27:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:06.760 02:27:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:06.760 02:27:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:06.760 02:27:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:06.760 02:27:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:06.760 02:27:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.760 02:27:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:06.760 [2024-11-17 02:27:15.025793] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:06.760 02:27:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.760 02:27:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:06.760 02:27:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.760 02:27:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:06.760 Malloc0 00:07:06.760 02:27:15 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.760 02:27:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:06.760 02:27:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.760 02:27:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:06.760 Delay0 00:07:06.760 02:27:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.760 02:27:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:06.760 02:27:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.760 02:27:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:06.760 02:27:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.760 02:27:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:06.760 02:27:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.760 02:27:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:06.760 02:27:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.760 02:27:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:06.760 02:27:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.760 02:27:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:06.760 [2024-11-17 02:27:15.145532] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:06.760 02:27:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.760 02:27:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:06.760 02:27:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.760 02:27:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:06.760 02:27:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.760 02:27:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:07.018 [2024-11-17 02:27:15.364255] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:09.547 Initializing NVMe Controllers 00:07:09.547 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:09.547 controller IO queue size 128 less than required 00:07:09.547 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:09.547 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:09.547 Initialization complete. Launching workers. 
00:07:09.547 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 21694 00:07:09.547 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 21751, failed to submit 66 00:07:09.547 success 21694, unsuccessful 57, failed 0 00:07:09.547 02:27:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:09.547 02:27:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.547 02:27:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:09.547 02:27:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.547 02:27:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:09.547 02:27:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:09.547 02:27:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:09.547 02:27:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:07:09.547 02:27:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:09.547 02:27:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:07:09.547 02:27:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:09.547 02:27:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:09.547 rmmod nvme_tcp 00:07:09.547 rmmod nvme_fabrics 00:07:09.547 rmmod nvme_keyring 00:07:09.547 02:27:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:09.547 02:27:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:07:09.547 02:27:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:07:09.547 02:27:17 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2838671 ']' 00:07:09.547 02:27:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2838671 00:07:09.547 02:27:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 2838671 ']' 00:07:09.547 02:27:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 2838671 00:07:09.547 02:27:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:07:09.547 02:27:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:09.547 02:27:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2838671 00:07:09.547 02:27:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:09.547 02:27:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:09.547 02:27:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2838671' 00:07:09.547 killing process with pid 2838671 00:07:09.547 02:27:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 2838671 00:07:09.547 02:27:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 2838671 00:07:10.483 02:27:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:10.483 02:27:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:10.483 02:27:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:10.483 02:27:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:07:10.483 02:27:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:07:10.483 02:27:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- 
# grep -v SPDK_NVMF 00:07:10.483 02:27:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:07:10.483 02:27:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:10.483 02:27:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:10.483 02:27:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:10.483 02:27:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:10.483 02:27:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:13.019 02:27:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:13.019 00:07:13.019 real 0m9.312s 00:07:13.019 user 0m15.735s 00:07:13.019 sys 0m2.751s 00:07:13.019 02:27:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:13.019 02:27:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:13.019 ************************************ 00:07:13.019 END TEST nvmf_abort 00:07:13.019 ************************************ 00:07:13.019 02:27:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:13.019 02:27:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:13.019 02:27:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:13.019 02:27:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:13.019 ************************************ 00:07:13.019 START TEST nvmf_ns_hotplug_stress 00:07:13.019 ************************************ 00:07:13.019 02:27:20 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:13.019 * Looking for test storage... 00:07:13.019 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:13.019 02:27:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:13.019 02:27:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:07:13.019 02:27:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:13.019 02:27:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:13.019 02:27:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:13.019 02:27:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:13.019 02:27:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:13.019 02:27:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:07:13.019 02:27:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:07:13.019 02:27:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:07:13.019 02:27:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:07:13.019 02:27:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:07:13.019 02:27:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:07:13.019 02:27:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:07:13.019 
02:27:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:13.019 02:27:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:07:13.019 02:27:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:07:13.019 02:27:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:13.019 02:27:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:13.019 02:27:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:07:13.019 02:27:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:07:13.019 02:27:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:13.019 02:27:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:07:13.019 02:27:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:07:13.019 02:27:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:07:13.019 02:27:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:07:13.019 02:27:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:13.019 02:27:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:07:13.019 02:27:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:07:13.019 02:27:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:13.020 02:27:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:13.020 02:27:21 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:07:13.020 02:27:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:13.020 02:27:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:13.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.020 --rc genhtml_branch_coverage=1 00:07:13.020 --rc genhtml_function_coverage=1 00:07:13.020 --rc genhtml_legend=1 00:07:13.020 --rc geninfo_all_blocks=1 00:07:13.020 --rc geninfo_unexecuted_blocks=1 00:07:13.020 00:07:13.020 ' 00:07:13.020 02:27:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:13.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.020 --rc genhtml_branch_coverage=1 00:07:13.020 --rc genhtml_function_coverage=1 00:07:13.020 --rc genhtml_legend=1 00:07:13.020 --rc geninfo_all_blocks=1 00:07:13.020 --rc geninfo_unexecuted_blocks=1 00:07:13.020 00:07:13.020 ' 00:07:13.020 02:27:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:13.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.020 --rc genhtml_branch_coverage=1 00:07:13.020 --rc genhtml_function_coverage=1 00:07:13.020 --rc genhtml_legend=1 00:07:13.020 --rc geninfo_all_blocks=1 00:07:13.020 --rc geninfo_unexecuted_blocks=1 00:07:13.020 00:07:13.020 ' 00:07:13.020 02:27:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:13.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.020 --rc genhtml_branch_coverage=1 00:07:13.020 --rc genhtml_function_coverage=1 00:07:13.020 --rc genhtml_legend=1 00:07:13.020 --rc geninfo_all_blocks=1 00:07:13.020 --rc geninfo_unexecuted_blocks=1 00:07:13.020 
00:07:13.020 ' 00:07:13.020 02:27:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:13.020 02:27:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:13.020 02:27:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:13.020 02:27:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:13.020 02:27:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:13.020 02:27:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:13.020 02:27:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:13.020 02:27:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:13.020 02:27:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:13.020 02:27:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:13.020 02:27:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:13.020 02:27:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:13.020 02:27:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:13.020 02:27:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:13.020 02:27:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:07:13.020 02:27:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:13.020 02:27:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:13.020 02:27:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:13.020 02:27:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:13.020 02:27:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:07:13.020 02:27:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:13.020 02:27:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:13.020 02:27:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:13.020 02:27:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.020 02:27:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.020 02:27:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.020 02:27:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:13.020 02:27:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.020 02:27:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:07:13.020 02:27:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:13.020 02:27:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:13.020 02:27:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:13.020 02:27:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:13.020 02:27:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:13.020 02:27:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:13.020 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:13.020 02:27:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:13.020 02:27:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:13.020 02:27:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:13.020 02:27:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:13.020 02:27:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:13.020 02:27:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:13.020 02:27:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:13.020 02:27:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:13.020 02:27:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:13.020 02:27:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:13.020 02:27:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:13.020 02:27:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:13.020 02:27:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:13.020 02:27:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:13.021 02:27:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:13.021 02:27:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:07:13.021 02:27:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:14.923 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:14.923 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:07:14.923 02:27:23 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:14.923 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:14.923 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:14.923 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:14.923 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:14.923 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:07:14.923 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:14.923 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:07:14.923 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:07:14.923 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:07:14.923 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:07:14.923 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:07:14.923 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:07:14.923 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:14.923 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:14.923 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:14.923 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:14.923 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:14.923 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:14.923 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:14.923 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:14.923 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:14.923 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:14.923 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:14.923 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:14.924 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:14.924 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:14.924 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:14.924 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:14.924 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:14.924 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:14.924 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:07:14.924 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:14.924 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:14.924 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:14.924 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:14.924 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:14.924 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:14.924 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:14.924 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:14.924 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:14.924 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:14.924 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:14.924 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:14.924 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:14.924 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:14.924 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:14.924 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:14.924 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:14.924 02:27:23 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:14.924 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:14.924 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:14.924 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:14.924 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:14.924 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:14.924 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:14.924 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:14.924 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:14.924 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:14.924 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:14.924 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:14.924 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:14.924 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:14.924 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:14.924 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:14.924 02:27:23 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:14.924 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:14.924 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:14.924 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:14.924 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:14.924 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:14.924 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:07:14.924 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:14.924 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:14.924 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:14.924 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:14.924 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:14.924 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:14.924 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:14.924 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:14.924 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:14.924 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:14.924 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:14.924 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:14.924 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:14.924 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:14.924 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:14.924 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:14.924 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:14.924 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:14.924 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:14.924 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:14.924 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:14.924 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:14.924 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:14.924 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:14.924 02:27:23 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:14.924 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:14.924 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:14.924 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:07:14.924 00:07:14.924 --- 10.0.0.2 ping statistics --- 00:07:14.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:14.924 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:07:14.924 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:14.924 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:14.924 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:07:14.924 00:07:14.924 --- 10.0.0.1 ping statistics --- 00:07:14.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:14.924 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:07:14.924 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:14.924 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:07:14.924 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:14.924 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:14.924 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:14.924 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:14.924 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:07:14.924 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:14.924 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:14.924 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:14.924 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:14.924 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:14.924 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:14.924 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2841183 00:07:14.924 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:14.924 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2841183 00:07:14.924 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 2841183 ']' 00:07:14.924 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.924 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:14.924 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:14.924 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:14.924 02:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:15.183 [2024-11-17 02:27:23.405110] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:07:15.183 [2024-11-17 02:27:23.405277] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:15.183 [2024-11-17 02:27:23.554905] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:15.441 [2024-11-17 02:27:23.694937] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:15.441 [2024-11-17 02:27:23.695023] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:15.441 [2024-11-17 02:27:23.695050] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:15.441 [2024-11-17 02:27:23.695074] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:15.441 [2024-11-17 02:27:23.695106] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:15.441 [2024-11-17 02:27:23.697857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:15.441 [2024-11-17 02:27:23.697912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:15.441 [2024-11-17 02:27:23.697917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:16.008 02:27:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:16.008 02:27:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:07:16.008 02:27:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:16.008 02:27:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:16.008 02:27:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:16.008 02:27:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:16.008 02:27:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:16.008 02:27:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:16.265 [2024-11-17 02:27:24.662113] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:16.265 02:27:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:16.523 02:27:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:17.088 [2024-11-17 02:27:25.244302] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:17.088 02:27:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:17.088 02:27:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:07:17.654 Malloc0 00:07:17.654 02:27:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:17.654 Delay0 00:07:17.654 02:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:18.220 02:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:18.220 NULL1 00:07:18.220 02:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:18.478 02:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2841616 00:07:18.478 02:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:18.478 02:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841616 00:07:18.478 02:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:19.850 Read completed with error (sct=0, sc=11) 00:07:19.850 02:27:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:19.850 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:19.850 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:20.107 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:20.107 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:20.107 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:20.107 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:20.107 02:27:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:20.107 02:27:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:20.365 true 00:07:20.624 02:27:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841616 00:07:20.624 02:27:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:21.190 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:07:21.190 02:27:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:21.448 02:27:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:07:21.448 02:27:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:21.706 true 00:07:21.706 02:27:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841616 00:07:21.706 02:27:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:21.965 02:27:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:22.223 02:27:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:22.223 02:27:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:22.481 true 00:07:22.738 02:27:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841616 00:07:22.738 02:27:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:22.996 02:27:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:23.253 02:27:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:23.253 02:27:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:23.513 true 00:07:23.513 02:27:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841616 00:07:23.513 02:27:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:24.447 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:24.447 02:27:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:24.705 02:27:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:07:24.705 02:27:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:24.963 true 00:07:24.963 02:27:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841616 00:07:24.963 02:27:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:25.222 02:27:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:25.479 02:27:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:25.479 02:27:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:25.737 true 00:07:25.737 02:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841616 00:07:25.737 02:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:25.995 02:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:26.254 02:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:26.254 02:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:26.512 true 00:07:26.512 02:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841616 00:07:26.512 02:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:27.446 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:27.446 02:27:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:27.704 02:27:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:27.704 02:27:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:28.355 true 00:07:28.355 02:27:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841616 00:07:28.355 02:27:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:28.355 02:27:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:28.692 02:27:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:28.692 02:27:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:28.950 true 00:07:28.950 02:27:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841616 00:07:28.950 02:27:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:29.208 02:27:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:29.466 
02:27:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:29.466 02:27:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:29.724 true 00:07:29.724 02:27:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841616 00:07:29.725 02:27:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:30.658 02:27:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:30.915 02:27:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:30.915 02:27:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:31.172 true 00:07:31.172 02:27:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841616 00:07:31.172 02:27:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:31.431 02:27:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:31.689 02:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:31.689 02:27:40 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:31.946 true 00:07:32.204 02:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841616 00:07:32.204 02:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:32.462 02:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:32.721 02:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:32.721 02:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:32.978 true 00:07:32.978 02:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841616 00:07:32.978 02:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:33.913 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:33.913 02:27:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:34.171 02:27:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:34.171 02:27:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:34.429 true 00:07:34.429 02:27:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841616 00:07:34.429 02:27:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:34.687 02:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:34.945 02:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:34.945 02:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:35.202 true 00:07:35.202 02:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841616 00:07:35.202 02:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:35.460 02:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:35.718 02:27:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:35.718 02:27:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 
00:07:35.976 true 00:07:35.976 02:27:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841616 00:07:35.976 02:27:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:36.911 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:36.911 02:27:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:37.477 02:27:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:37.477 02:27:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:37.734 true 00:07:37.734 02:27:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841616 00:07:37.734 02:27:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:37.992 02:27:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:38.251 02:27:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:38.251 02:27:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:38.508 true 00:07:38.508 02:27:46 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841616 00:07:38.508 02:27:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:38.766 02:27:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:39.024 02:27:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:39.024 02:27:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:07:39.281 true 00:07:39.281 02:27:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841616 00:07:39.281 02:27:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:40.214 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:40.214 02:27:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:40.472 02:27:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:07:40.472 02:27:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:40.730 true 00:07:40.730 02:27:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 2841616 00:07:40.730 02:27:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:41.294 02:27:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:41.552 02:27:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:07:41.552 02:27:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:07:41.810 true 00:07:41.810 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841616 00:07:41.810 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:42.067 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:42.325 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:07:42.325 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:07:42.584 true 00:07:42.584 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841616 00:07:42.584 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:43.519 02:27:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:43.519 02:27:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:07:43.519 02:27:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:07:43.777 true 00:07:44.035 02:27:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841616 00:07:44.035 02:27:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:44.292 02:27:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:44.550 02:27:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:07:44.550 02:27:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:07:44.808 true 00:07:44.808 02:27:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841616 00:07:44.808 02:27:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:07:45.066 02:27:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:45.323 02:27:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:07:45.323 02:27:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:07:45.581 true 00:07:45.581 02:27:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841616 00:07:45.581 02:27:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:46.515 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:46.515 02:27:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:46.773 02:27:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:07:46.773 02:27:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:07:47.031 true 00:07:47.031 02:27:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841616 00:07:47.031 02:27:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:47.289 02:27:55 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:47.547 02:27:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:07:47.547 02:27:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:07:47.805 true 00:07:47.805 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841616 00:07:47.805 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:48.063 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:48.629 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:07:48.629 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:07:48.629 true 00:07:48.629 02:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841616 00:07:48.629 02:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:49.563 Initializing NVMe Controllers 00:07:49.563 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:49.563 
Controller IO queue size 128, less than required. 00:07:49.563 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:49.563 Controller IO queue size 128, less than required. 00:07:49.563 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:49.563 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:49.563 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:07:49.563 Initialization complete. Launching workers. 00:07:49.563 ======================================================== 00:07:49.563 Latency(us) 00:07:49.563 Device Information : IOPS MiB/s Average min max 00:07:49.563 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 497.03 0.24 105407.94 4085.40 1019310.50 00:07:49.563 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 6216.12 3.04 20592.74 5616.38 492790.95 00:07:49.563 ======================================================== 00:07:49.563 Total : 6713.15 3.28 26872.30 4085.40 1019310.50 00:07:49.563 00:07:49.563 02:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:49.821 02:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:07:49.821 02:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:07:50.079 true 00:07:50.079 02:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841616 00:07:50.079 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2841616) - No 
such process 00:07:50.079 02:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2841616 00:07:50.079 02:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:50.337 02:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:50.595 02:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:07:50.595 02:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:07:50.595 02:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:07:50.595 02:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:50.595 02:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:07:50.852 null0 00:07:50.852 02:27:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:50.852 02:27:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:50.852 02:27:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:07:51.109 null1 00:07:51.109 02:27:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:51.109 02:27:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:51.109 02:27:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:07:51.367 null2 00:07:51.367 02:27:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:51.367 02:27:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:51.367 02:27:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:07:51.625 null3 00:07:51.625 02:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:51.625 02:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:51.625 02:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:07:51.882 null4 00:07:52.139 02:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:52.139 02:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:52.139 02:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:07:52.139 null5 00:07:52.396 02:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:52.396 02:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:52.396 02:28:00 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:07:52.654 null6 00:07:52.654 02:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:52.654 02:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:52.654 02:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:07:52.912 null7 00:07:52.912 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:52.912 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:52.912 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:07:52.912 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:52.912 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:52.912 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:07:52.912 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:52.912 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:07:52.912 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:52.912 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:52.912 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.912 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:52.912 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:52.912 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:07:52.912 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:52.912 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:07:52.912 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:52.912 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:52.912 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.912 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:52.912 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:52.912 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:07:52.912 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:52.912 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:07:52.912 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:52.912 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:52.912 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.912 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:52.912 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:52.912 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:07:52.912 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:52.912 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:07:52.912 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:52.912 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:52.912 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.912 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:52.912 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:52.912 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:07:52.912 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:52.912 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:07:52.912 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:52.912 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:52.912 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.912 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:52.912 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:52.912 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:07:52.912 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:52.912 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:07:52.912 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:52.912 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:52.912 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.912 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:52.912 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:52.912 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:07:52.912 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:52.912 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:07:52.912 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:52.912 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:52.912 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.912 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:52.912 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:52.912 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:07:52.912 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:52.912 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:52.912 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:07:52.912 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2845809 2845810 2845812 2845814 2845816 2845818 2845820 2845822 00:07:52.912 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:52.912 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.912 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:53.170 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:53.170 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:53.170 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:53.170 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:53.170 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:53.170 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:53.170 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:53.170 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:53.429 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.429 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.429 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:53.429 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.429 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.429 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 
00:07:53.429 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.429 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.429 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:53.429 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.429 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.429 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:53.429 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.429 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.429 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:53.429 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.429 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.429 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:53.429 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.429 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.429 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:53.429 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.429 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.429 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:53.687 02:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:53.687 02:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:53.687 02:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:53.687 02:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:53.687 02:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 3 00:07:53.687 02:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:53.687 02:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:53.687 02:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:53.947 02:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.947 02:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.947 02:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:53.947 02:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.947 02:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.947 02:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:53.947 02:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.947 02:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.947 02:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.947 02:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.947 02:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:53.947 02:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:53.947 02:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.947 02:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.947 02:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:53.947 02:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.947 02:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.947 02:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:53.947 02:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.947 02:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.947 02:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:53.947 02:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.947 02:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.947 02:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:54.206 02:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:54.465 02:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:54.465 02:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:54.465 02:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:54.465 02:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:54.465 02:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:54.465 02:28:02 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:54.465 02:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:54.723 02:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.723 02:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.723 02:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:54.723 02:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.723 02:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.723 02:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:54.723 02:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.723 02:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.723 02:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:54.723 02:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:07:54.723 02:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.723 02:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:54.723 02:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.723 02:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.723 02:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:54.723 02:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.723 02:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.723 02:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.723 02:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:54.723 02:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.723 02:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:54.723 02:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.723 02:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.723 02:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:54.981 02:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:54.981 02:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:54.981 02:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:54.981 02:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:54.981 02:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:54.981 02:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:54.981 02:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:54.981 02:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:55.240 02:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.240 02:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.240 02:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:55.240 02:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.240 02:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.240 02:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:55.240 02:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.240 02:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.240 02:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:55.240 02:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.240 02:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.240 02:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 
nqn.2016-06.io.spdk:cnode1 null6 00:07:55.240 02:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.240 02:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.240 02:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:55.240 02:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.240 02:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.240 02:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:55.240 02:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.240 02:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.240 02:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.240 02:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:55.240 02:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.240 02:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:55.498 02:28:03 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:55.498 02:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:55.498 02:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:55.498 02:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:55.499 02:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:55.499 02:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:55.499 02:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.499 02:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:55.757 02:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.757 02:28:04 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.757 02:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:55.757 02:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.757 02:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.757 02:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:55.757 02:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.757 02:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.757 02:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:55.757 02:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.757 02:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.757 02:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:55.757 02:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.757 02:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:07:55.757 02:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:55.757 02:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.757 02:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.757 02:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:55.757 02:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.757 02:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.757 02:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:55.757 02:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.757 02:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.757 02:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:56.016 02:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:56.275 02:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:56.275 02:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:56.275 02:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:56.275 02:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:56.275 02:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:56.275 02:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:56.275 02:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:56.533 02:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.533 02:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.533 02:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 
nqn.2016-06.io.spdk:cnode1 null1 00:07:56.533 02:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.533 02:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.533 02:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:56.533 02:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.533 02:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.533 02:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:56.533 02:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.533 02:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.533 02:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:56.533 02:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.533 02:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.533 02:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:56.533 02:28:04 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.533 02:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.533 02:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:56.533 02:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.533 02:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.533 02:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:56.533 02:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.533 02:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.533 02:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:56.820 02:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:56.820 02:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:56.820 02:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:56.820 02:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:56.820 02:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:56.820 02:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:56.820 02:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:56.820 02:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:57.110 02:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.110 02:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.110 02:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:57.110 02:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.110 02:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.110 
02:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:57.110 02:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.110 02:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.110 02:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:57.110 02:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.110 02:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.110 02:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:57.110 02:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.110 02:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.110 02:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:57.110 02:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.110 02:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.110 02:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:57.110 02:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.110 02:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.110 02:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:57.110 02:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.110 02:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.110 02:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:57.389 02:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:57.389 02:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:57.389 02:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:57.389 02:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:57.389 02:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:57.389 02:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:57.389 02:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:57.389 02:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:57.647 02:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.647 02:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.647 02:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:57.647 02:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.647 02:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.647 02:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:57.647 02:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.647 02:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.647 02:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:57.647 02:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.647 02:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.647 02:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:57.647 02:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.647 02:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.647 02:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:57.647 02:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.647 02:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.647 02:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:57.647 02:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.647 02:28:05 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.647 02:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:57.647 02:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.647 02:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.647 02:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:57.906 02:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:57.906 02:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:57.906 02:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:57.906 02:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:57.906 02:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:57.906 02:28:06 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:57.906 02:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:57.906 02:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:58.164 02:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.164 02:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.164 02:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:58.164 02:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.164 02:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.165 02:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:58.165 02:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.165 02:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.165 02:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:58.165 02:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.165 02:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.165 02:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:58.165 02:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.165 02:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.165 02:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:58.165 02:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.165 02:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.165 02:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:58.165 02:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.165 02:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.165 02:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:58.165 02:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.165 02:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.165 02:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:58.732 02:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:58.732 02:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:58.732 02:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:58.732 02:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:58.732 02:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:58.732 02:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:58.732 02:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:58.732 02:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:58.732 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.732 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.732 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.732 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.991 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.991 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.991 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.991 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.992 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.992 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.992 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.992 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.992 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.992 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.992 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.992 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.992 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:07:58.992 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:07:58.992 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:58.992 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:07:58.992 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:58.992 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:07:58.992 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:58.992 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:58.992 rmmod nvme_tcp 00:07:58.992 rmmod nvme_fabrics 00:07:58.992 rmmod nvme_keyring 00:07:58.992 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:58.992 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:07:58.992 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:07:58.992 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2841183 ']' 00:07:58.992 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2841183 00:07:58.992 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # 
'[' -z 2841183 ']' 00:07:58.992 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 2841183 00:07:58.992 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:07:58.992 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:58.992 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2841183 00:07:58.992 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:58.992 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:58.992 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2841183' 00:07:58.992 killing process with pid 2841183 00:07:58.992 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 2841183 00:07:58.992 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 2841183 00:08:00.367 02:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:00.367 02:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:00.367 02:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:00.367 02:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:08:00.367 02:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:08:00.367 02:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:00.367 02:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@791 -- # iptables-restore 00:08:00.367 02:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:00.367 02:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:00.367 02:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:00.367 02:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:00.367 02:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:02.271 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:02.271 00:08:02.271 real 0m49.611s 00:08:02.271 user 3m47.609s 00:08:02.271 sys 0m15.948s 00:08:02.271 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:02.271 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:02.271 ************************************ 00:08:02.271 END TEST nvmf_ns_hotplug_stress 00:08:02.271 ************************************ 00:08:02.271 02:28:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:02.271 02:28:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:02.271 02:28:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:02.271 02:28:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:02.271 ************************************ 00:08:02.271 START TEST nvmf_delete_subsystem 00:08:02.271 ************************************ 00:08:02.271 
02:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:02.271 * Looking for test storage... 00:08:02.271 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:02.271 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:02.271 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:08:02.271 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:02.271 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:02.271 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:02.271 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:02.271 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:02.271 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:08:02.271 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:08:02.271 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:08:02.271 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:08:02.271 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:08:02.271 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:08:02.271 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:08:02.271 02:28:10 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:02.271 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:08:02.271 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:08:02.271 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:02.271 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:02.271 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:08:02.271 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:08:02.271 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:02.271 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:08:02.271 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:08:02.271 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:08:02.271 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:08:02.271 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:02.271 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:08:02.271 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:08:02.271 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:02.271 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:02.271 02:28:10 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:08:02.271 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:02.271 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:02.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.271 --rc genhtml_branch_coverage=1 00:08:02.271 --rc genhtml_function_coverage=1 00:08:02.271 --rc genhtml_legend=1 00:08:02.271 --rc geninfo_all_blocks=1 00:08:02.271 --rc geninfo_unexecuted_blocks=1 00:08:02.271 00:08:02.271 ' 00:08:02.271 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:02.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.271 --rc genhtml_branch_coverage=1 00:08:02.271 --rc genhtml_function_coverage=1 00:08:02.271 --rc genhtml_legend=1 00:08:02.271 --rc geninfo_all_blocks=1 00:08:02.271 --rc geninfo_unexecuted_blocks=1 00:08:02.271 00:08:02.271 ' 00:08:02.271 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:02.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.271 --rc genhtml_branch_coverage=1 00:08:02.271 --rc genhtml_function_coverage=1 00:08:02.271 --rc genhtml_legend=1 00:08:02.271 --rc geninfo_all_blocks=1 00:08:02.271 --rc geninfo_unexecuted_blocks=1 00:08:02.271 00:08:02.271 ' 00:08:02.271 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:02.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.271 --rc genhtml_branch_coverage=1 00:08:02.271 --rc genhtml_function_coverage=1 00:08:02.271 --rc genhtml_legend=1 00:08:02.271 --rc geninfo_all_blocks=1 00:08:02.271 --rc geninfo_unexecuted_blocks=1 00:08:02.271 00:08:02.271 ' 
00:08:02.271 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:02.271 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:08:02.272 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:02.272 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:02.272 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:02.272 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:02.272 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:02.272 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:02.272 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:02.272 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:02.272 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:02.272 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:02.272 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:02.272 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:02.272 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:02.272 02:28:10 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:02.272 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:02.272 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:02.272 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:02.272 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:08:02.272 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:02.272 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:02.272 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:02.272 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.272 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.272 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.272 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:08:02.272 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.272 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:08:02.272 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:02.272 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:02.272 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:02.272 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:02.272 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:02.272 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:02.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:02.272 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:02.272 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:02.272 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:02.530 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # 
nvmftestinit 00:08:02.530 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:02.530 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:02.530 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:02.530 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:02.530 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:02.530 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:02.530 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:02.530 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:02.530 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:02.530 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:02.531 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:08:02.531 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:04.432 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:04.432 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:08:04.432 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:04.432 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:04.432 02:28:12 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:04.432 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:04.432 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:04.432 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:08:04.432 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:04.432 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:08:04.432 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:08:04.432 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:08:04.432 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:08:04.432 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:08:04.432 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:08:04.432 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:04.432 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:04.432 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:04.432 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:04.432 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:04.432 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:04.432 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:04.432 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:04.432 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:04.432 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:04.432 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:04.432 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:04.432 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:04.433 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:04.433 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:04.433 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:04.433 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:04.433 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:04.433 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:04.433 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:04.433 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:04.433 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 
-- # [[ ice == unknown ]] 00:08:04.433 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:04.433 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:04.433 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:04.433 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:04.433 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:04.433 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:04.433 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:04.433 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:04.433 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:04.433 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:04.433 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:04.433 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:04.433 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:04.433 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:04.433 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:04.433 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:04.433 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:04.433 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:04.433 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:04.433 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:04.433 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:04.433 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:04.433 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:04.433 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:04.433 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:04.433 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:04.433 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:04.433 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:04.433 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:04.433 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:04.433 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:04.433 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:04.433 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:0a:00.1: cvl_0_1' 00:08:04.433 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:04.433 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:04.433 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:04.433 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:08:04.433 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:04.433 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:04.433 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:04.433 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:04.433 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:04.433 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:04.433 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:04.433 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:04.433 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:04.433 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:04.433 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:04.433 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:04.433 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:04.433 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:04.433 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:04.433 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:04.433 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:04.433 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:04.433 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:04.433 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:04.433 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:04.433 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:04.692 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:04.692 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:04.692 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:04.692 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:04.692 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:04.692 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.330 ms 00:08:04.692 00:08:04.692 --- 10.0.0.2 ping statistics --- 00:08:04.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:04.692 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:08:04.692 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:04.692 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:04.692 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:08:04.692 00:08:04.692 --- 10.0.0.1 ping statistics --- 00:08:04.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:04.692 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:08:04.692 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:04.692 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:08:04.692 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:04.692 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:04.692 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:04.692 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:04.692 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:04.692 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:04.692 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:04.692 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:04.692 02:28:12 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:04.692 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:04.692 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:04.692 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2848727 00:08:04.692 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:04.692 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2848727 00:08:04.692 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 2848727 ']' 00:08:04.692 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:04.692 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:04.692 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:04.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:04.692 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:04.692 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:04.692 [2024-11-17 02:28:13.048604] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:08:04.692 [2024-11-17 02:28:13.048759] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:04.950 [2024-11-17 02:28:13.206234] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:04.950 [2024-11-17 02:28:13.347449] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:04.950 [2024-11-17 02:28:13.347531] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:04.950 [2024-11-17 02:28:13.347557] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:04.950 [2024-11-17 02:28:13.347582] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:04.950 [2024-11-17 02:28:13.347602] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:04.950 [2024-11-17 02:28:13.350222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.950 [2024-11-17 02:28:13.350223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:05.883 02:28:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:05.883 02:28:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:08:05.883 02:28:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:05.883 02:28:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:05.883 02:28:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:05.883 02:28:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:05.883 02:28:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:05.883 02:28:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.883 02:28:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:05.883 [2024-11-17 02:28:14.054634] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:05.883 02:28:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.883 02:28:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:05.883 02:28:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.883 02:28:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:08:05.883 02:28:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.883 02:28:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:05.883 02:28:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.883 02:28:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:05.883 [2024-11-17 02:28:14.072473] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:05.883 02:28:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.883 02:28:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:05.883 02:28:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.883 02:28:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:05.883 NULL1 00:08:05.883 02:28:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.883 02:28:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:05.883 02:28:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.883 02:28:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:05.883 Delay0 00:08:05.883 02:28:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.883 02:28:14 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:05.883 02:28:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.883 02:28:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:05.883 02:28:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.883 02:28:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2848883 00:08:05.883 02:28:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:05.883 02:28:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:05.883 [2024-11-17 02:28:14.206881] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:08:07.783 02:28:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:07.783 02:28:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.783 02:28:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 starting I/O failed: -6 00:08:08.043 Write completed with error (sct=0, sc=8) 00:08:08.043 Write completed with error (sct=0, sc=8) 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 Write completed with error (sct=0, sc=8) 00:08:08.043 starting I/O failed: -6 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 Write completed with error (sct=0, sc=8) 00:08:08.043 starting I/O failed: -6 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 starting I/O failed: -6 00:08:08.043 Write completed with error (sct=0, sc=8) 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 starting I/O failed: -6 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 starting I/O failed: -6 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 Write completed with error (sct=0, sc=8) 00:08:08.043 starting I/O failed: -6 
00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 Write completed with error (sct=0, sc=8) 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 starting I/O failed: -6 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 Write completed with error (sct=0, sc=8) 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 starting I/O failed: -6 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 starting I/O failed: -6 00:08:08.043 starting I/O failed: -6 00:08:08.043 starting I/O failed: -6 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 starting I/O failed: -6 00:08:08.043 Write completed with error (sct=0, sc=8) 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 Write completed with error (sct=0, sc=8) 00:08:08.043 starting I/O failed: -6 00:08:08.043 Write completed with error (sct=0, sc=8) 00:08:08.043 starting I/O failed: -6 00:08:08.043 Write completed with error (sct=0, sc=8) 00:08:08.043 Write completed with error (sct=0, sc=8) 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 starting I/O failed: -6 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 starting I/O failed: -6 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 Write completed with error (sct=0, sc=8) 00:08:08.043 Write completed with error (sct=0, sc=8) 00:08:08.043 starting I/O failed: -6 00:08:08.043 Write completed with error (sct=0, sc=8) 00:08:08.043 starting I/O failed: -6 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 starting I/O failed: -6 00:08:08.043 Read 
completed with error (sct=0, sc=8) 00:08:08.043 starting I/O failed: -6 00:08:08.043 Write completed with error (sct=0, sc=8) 00:08:08.043 Write completed with error (sct=0, sc=8) 00:08:08.043 Write completed with error (sct=0, sc=8) 00:08:08.043 starting I/O failed: -6 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 starting I/O failed: -6 00:08:08.043 Write completed with error (sct=0, sc=8) 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 Write completed with error (sct=0, sc=8) 00:08:08.043 starting I/O failed: -6 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 starting I/O failed: -6 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 starting I/O failed: -6 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 starting I/O failed: -6 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 Write completed with error (sct=0, sc=8) 00:08:08.043 starting I/O failed: -6 00:08:08.043 Write completed with error (sct=0, sc=8) 00:08:08.043 starting I/O failed: -6 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 Write completed with error (sct=0, sc=8) 00:08:08.043 Write completed with error (sct=0, sc=8) 00:08:08.043 starting I/O failed: -6 00:08:08.043 Write completed with error (sct=0, sc=8) 00:08:08.043 starting I/O failed: -6 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 starting I/O failed: -6 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 starting I/O failed: -6 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 Write completed with error (sct=0, sc=8) 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 starting I/O failed: -6 
00:08:08.043 Write completed with error (sct=0, sc=8) 00:08:08.043 starting I/O failed: -6 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 Write completed with error (sct=0, sc=8) 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 starting I/O failed: -6 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 starting I/O failed: -6 00:08:08.043 Write completed with error (sct=0, sc=8) 00:08:08.043 [2024-11-17 02:28:16.468330] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001fe80 is same with the state(6) to be set 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 Write completed with error (sct=0, sc=8) 00:08:08.043 starting I/O failed: -6 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 Write completed with error (sct=0, sc=8) 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 Write completed with error (sct=0, sc=8) 00:08:08.043 Write completed with error (sct=0, sc=8) 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 Write completed with error (sct=0, sc=8) 00:08:08.043 Write completed with error (sct=0, sc=8) 00:08:08.043 Write completed with error (sct=0, sc=8) 00:08:08.043 starting I/O failed: -6 00:08:08.043 Write completed with error (sct=0, sc=8) 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 Write completed with error (sct=0, sc=8) 00:08:08.043 Write completed with error (sct=0, sc=8) 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 starting I/O failed: -6 
00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 Write completed with error (sct=0, sc=8) 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 Write completed with error (sct=0, sc=8) 00:08:08.043 Write completed with error (sct=0, sc=8) 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 starting I/O failed: -6 00:08:08.043 Write completed with error (sct=0, sc=8) 00:08:08.043 Write completed with error (sct=0, sc=8) 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 Write completed with error (sct=0, sc=8) 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 starting I/O failed: -6 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 Write completed with error (sct=0, sc=8) 00:08:08.043 Write completed with error (sct=0, sc=8) 00:08:08.043 Write completed with error (sct=0, sc=8) 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 Write completed with error (sct=0, sc=8) 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 starting I/O failed: -6 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.043 Read completed with error (sct=0, sc=8) 00:08:08.044 Write completed with error (sct=0, sc=8) 00:08:08.044 Read completed with error (sct=0, sc=8) 00:08:08.044 Read 
completed with error (sct=0, sc=8) 00:08:08.044 Read completed with error (sct=0, sc=8) 00:08:08.044 Read completed with error (sct=0, sc=8) 00:08:08.044 Write completed with error (sct=0, sc=8) 00:08:08.044 starting I/O failed: -6 00:08:08.044 Read completed with error (sct=0, sc=8) 00:08:08.044 Read completed with error (sct=0, sc=8) 00:08:08.044 Read completed with error (sct=0, sc=8) 00:08:08.044 Read completed with error (sct=0, sc=8) 00:08:08.044 Read completed with error (sct=0, sc=8) 00:08:08.044 Read completed with error (sct=0, sc=8) 00:08:08.044 Read completed with error (sct=0, sc=8) 00:08:08.044 Read completed with error (sct=0, sc=8) 00:08:08.044 Write completed with error (sct=0, sc=8) 00:08:08.044 Write completed with error (sct=0, sc=8) 00:08:08.044 Read completed with error (sct=0, sc=8) 00:08:08.044 starting I/O failed: -6 00:08:08.044 Write completed with error (sct=0, sc=8) 00:08:08.044 Read completed with error (sct=0, sc=8) 00:08:08.044 Write completed with error (sct=0, sc=8) 00:08:08.044 Read completed with error (sct=0, sc=8) 00:08:08.044 Read completed with error (sct=0, sc=8) 00:08:08.044 Write completed with error (sct=0, sc=8) 00:08:08.044 Read completed with error (sct=0, sc=8) 00:08:08.044 Read completed with error (sct=0, sc=8) 00:08:08.044 Read completed with error (sct=0, sc=8) 00:08:08.044 Read completed with error (sct=0, sc=8) 00:08:08.044 Write completed with error (sct=0, sc=8) 00:08:08.044 starting I/O failed: -6 00:08:08.044 Read completed with error (sct=0, sc=8) 00:08:08.044 Write completed with error (sct=0, sc=8) 00:08:08.044 Read completed with error (sct=0, sc=8) 00:08:08.044 Read completed with error (sct=0, sc=8) 00:08:08.044 Read completed with error (sct=0, sc=8) 00:08:08.044 Read completed with error (sct=0, sc=8) 00:08:08.044 Write completed with error (sct=0, sc=8) 00:08:08.044 Write completed with error (sct=0, sc=8) 00:08:08.044 Read completed with error (sct=0, sc=8) 00:08:08.044 Read completed with error 
(sct=0, sc=8) 00:08:08.044 Read completed with error (sct=0, sc=8) 00:08:08.044 starting I/O failed: -6 00:08:08.044 Read completed with error (sct=0, sc=8) 00:08:08.044 Write completed with error (sct=0, sc=8) 00:08:08.044 Read completed with error (sct=0, sc=8) 00:08:08.044 Read completed with error (sct=0, sc=8) 00:08:08.044 Read completed with error (sct=0, sc=8) 00:08:08.044 Read completed with error (sct=0, sc=8) 00:08:08.044 Read completed with error (sct=0, sc=8) 00:08:08.044 starting I/O failed: -6 00:08:08.044 Read completed with error (sct=0, sc=8) 00:08:08.044 Write completed with error (sct=0, sc=8) 00:08:08.044 Read completed with error (sct=0, sc=8) 00:08:08.044 Write completed with error (sct=0, sc=8) 00:08:08.044 starting I/O failed: -6 00:08:08.044 Read completed with error (sct=0, sc=8) 00:08:08.044 Write completed with error (sct=0, sc=8) 00:08:08.044 Write completed with error (sct=0, sc=8) 00:08:08.044 Read completed with error (sct=0, sc=8) 00:08:08.044 starting I/O failed: -6 00:08:08.044 Read completed with error (sct=0, sc=8) 00:08:08.044 Read completed with error (sct=0, sc=8) 00:08:08.044 Read completed with error (sct=0, sc=8) 00:08:08.044 Write completed with error (sct=0, sc=8) 00:08:08.044 starting I/O failed: -6 00:08:08.044 [2024-11-17 02:28:16.469747] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016880 is same with the state(6) to be set 00:08:08.978 [2024-11-17 02:28:17.429549] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000015c00 is same with the state(6) to be set 00:08:09.236 Read completed with error (sct=0, sc=8) 00:08:09.236 Read completed with error (sct=0, sc=8) 00:08:09.236 Read completed with error (sct=0, sc=8) 00:08:09.236 Read completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Write completed with error (sct=0, sc=8) 00:08:09.237 
Read completed with error (sct=0, sc=8) 00:08:09.237 Write completed with error (sct=0, sc=8) 00:08:09.237 Write completed with error (sct=0, sc=8) 00:08:09.237 Write completed with error (sct=0, sc=8) 00:08:09.237 Write completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Write completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Write completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Write completed with error (sct=0, sc=8) 00:08:09.237 Write completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Write completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Write completed with error (sct=0, sc=8) 00:08:09.237 Write completed with error (sct=0, sc=8) 00:08:09.237 Write completed with error (sct=0, sc=8) 00:08:09.237 [2024-11-17 02:28:17.470524] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016380 is same with the state(6) to be set 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Write completed with error (sct=0, sc=8) 00:08:09.237 Write completed with error (sct=0, sc=8) 00:08:09.237 Write completed with error (sct=0, sc=8) 00:08:09.237 Write 
completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Write completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Write completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Write completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Write completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Write completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Write completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Write completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Write completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 [2024-11-17 02:28:17.471303] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016600 is same with the state(6) to be set 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Read completed with 
error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Write completed with error (sct=0, sc=8) 00:08:09.237 Write completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Write completed with error (sct=0, sc=8) 00:08:09.237 Write completed with error (sct=0, sc=8) 00:08:09.237 Write completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Write completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 [2024-11-17 02:28:17.471987] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016b00 is same with the state(6) to be set 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, 
sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Write completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Write completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Write completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Write completed with error (sct=0, sc=8) 00:08:09.237 Write completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 Read completed with error (sct=0, sc=8) 00:08:09.237 02:28:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.237 02:28:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:08:09.237 02:28:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2848883 00:08:09.237 02:28:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:08:09.237 [2024-11-17 02:28:17.477460] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020380 is same with the state(6) to be set 00:08:09.237 Initializing NVMe Controllers 00:08:09.237 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:09.237 Controller IO queue size 128, less than required. 00:08:09.237 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:09.237 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:09.237 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:09.237 Initialization complete. Launching workers. 00:08:09.237 ======================================================== 00:08:09.237 Latency(us) 00:08:09.237 Device Information : IOPS MiB/s Average min max 00:08:09.238 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 190.55 0.09 949457.26 1793.90 1017396.57 00:08:09.238 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 167.29 0.08 853766.82 1019.05 1015869.53 00:08:09.238 ======================================================== 00:08:09.238 Total : 357.83 0.17 904722.31 1019.05 1017396.57 00:08:09.238 00:08:09.238 [2024-11-17 02:28:17.479173] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000015c00 (9): Bad file descriptor 00:08:09.238 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:08:09.804 02:28:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:08:09.804 02:28:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2848883 00:08:09.804 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2848883) - No such process 00:08:09.804 02:28:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@45 -- # NOT wait 2848883 00:08:09.804 02:28:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:08:09.804 02:28:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2848883 00:08:09.804 02:28:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:08:09.804 02:28:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:09.804 02:28:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:08:09.804 02:28:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:09.804 02:28:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 2848883 00:08:09.804 02:28:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:08:09.804 02:28:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:09.804 02:28:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:09.804 02:28:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:09.804 02:28:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:09.804 02:28:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.804 02:28:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:09.804 02:28:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.804 02:28:17 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:09.805 02:28:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.805 02:28:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:09.805 [2024-11-17 02:28:17.995879] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:09.805 02:28:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.805 02:28:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:09.805 02:28:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.805 02:28:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:09.805 02:28:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.805 02:28:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2849409 00:08:09.805 02:28:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:09.805 02:28:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:08:09.805 02:28:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2849409 00:08:09.805 02:28:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:09.805 [2024-11-17 
02:28:18.122334] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:08:10.063 02:28:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:10.063 02:28:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2849409 00:08:10.063 02:28:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:10.629 02:28:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:10.629 02:28:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2849409 00:08:10.629 02:28:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:11.195 02:28:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:11.195 02:28:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2849409 00:08:11.195 02:28:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:11.762 02:28:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:11.762 02:28:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2849409 00:08:11.762 02:28:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:12.328 02:28:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:12.328 02:28:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # 
kill -0 2849409 00:08:12.328 02:28:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:12.586 02:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:12.586 02:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2849409 00:08:12.586 02:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:12.845 Initializing NVMe Controllers 00:08:12.845 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:12.845 Controller IO queue size 128, less than required. 00:08:12.845 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:12.845 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:12.845 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:12.845 Initialization complete. Launching workers. 
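The delete_subsystem.sh trace above implements a bounded wait: `kill -0 <pid>` probes whether the perf process still exists (signal 0 delivers nothing, it only checks existence), sleeping 0.5 s between probes until the process exits or the retry budget is spent. A minimal standalone sketch of the same pattern, assuming a stand-in child process instead of spdk_nvme_perf:

```shell
#!/usr/bin/env bash
# Bounded wait on a child process, mirroring the kill -0 / sleep 0.5
# loop traced in delete_subsystem.sh above. "sleep 1" is a stand-in
# for the watched spdk_nvme_perf process.
sleep 1 &
perf_pid=$!

delay=0
while kill -0 "$perf_pid" 2>/dev/null; do   # signal 0: existence probe only
    if (( delay++ > 20 )); then             # same retry budget as the trace
        echo "timed out waiting for $perf_pid"
        break
    fi
    sleep 0.5
done
wait "$perf_pid" 2>/dev/null   # reap the child so no zombie is left behind
echo "pid $perf_pid is gone"
```

Because `kill -0` fails once the PID no longer exists, the loop ends as soon as the process dies, which is why the harness above prints "No such process" and moves on when perf finishes early.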
00:08:12.845 ======================================================== 00:08:12.845 Latency(us) 00:08:12.845 Device Information : IOPS MiB/s Average min max 00:08:12.845 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1005156.21 1000227.47 1014630.35 00:08:12.845 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1006300.63 1000306.60 1016550.40 00:08:12.845 ======================================================== 00:08:12.845 Total : 256.00 0.12 1005728.42 1000227.47 1016550.40 00:08:12.845 00:08:13.103 02:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:13.103 02:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2849409 00:08:13.103 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2849409) - No such process 00:08:13.103 02:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2849409 00:08:13.103 02:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:13.103 02:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:08:13.103 02:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:13.103 02:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:08:13.103 02:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:13.103 02:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:08:13.103 02:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:13.103 02:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r 
nvme-tcp 00:08:13.103 rmmod nvme_tcp 00:08:13.103 rmmod nvme_fabrics 00:08:13.103 rmmod nvme_keyring 00:08:13.361 02:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:13.361 02:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:08:13.361 02:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:08:13.361 02:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2848727 ']' 00:08:13.361 02:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2848727 00:08:13.361 02:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 2848727 ']' 00:08:13.361 02:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 2848727 00:08:13.361 02:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:08:13.361 02:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:13.361 02:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2848727 00:08:13.361 02:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:13.361 02:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:13.361 02:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2848727' 00:08:13.361 killing process with pid 2848727 00:08:13.361 02:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 2848727 00:08:13.361 02:28:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 
2848727 00:08:14.297 02:28:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:14.297 02:28:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:14.297 02:28:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:14.297 02:28:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:08:14.297 02:28:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:08:14.297 02:28:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:14.297 02:28:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:08:14.556 02:28:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:14.556 02:28:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:14.556 02:28:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:14.556 02:28:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:14.556 02:28:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:16.463 02:28:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:16.463 00:08:16.463 real 0m14.238s 00:08:16.463 user 0m31.184s 00:08:16.463 sys 0m3.320s 00:08:16.463 02:28:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:16.463 02:28:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:16.463 ************************************ 00:08:16.463 END TEST 
nvmf_delete_subsystem 00:08:16.463 ************************************ 00:08:16.463 02:28:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:16.463 02:28:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:16.463 02:28:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:16.463 02:28:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:16.463 ************************************ 00:08:16.463 START TEST nvmf_host_management 00:08:16.463 ************************************ 00:08:16.463 02:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:16.463 * Looking for test storage... 00:08:16.463 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:16.463 02:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:16.463 02:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:08:16.463 02:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:16.723 02:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:16.723 02:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:16.723 02:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:16.723 02:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:16.723 02:28:24 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:08:16.723 02:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:08:16.723 02:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:08:16.723 02:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:08:16.723 02:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:08:16.723 02:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:08:16.723 02:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:08:16.723 02:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:16.723 02:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:08:16.723 02:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:08:16.723 02:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:16.723 02:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:16.723 02:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:08:16.723 02:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:08:16.723 02:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:16.723 02:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:08:16.723 02:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:08:16.723 02:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:08:16.723 02:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:08:16.723 02:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:16.723 02:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:08:16.723 02:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:08:16.723 02:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:16.723 02:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:16.723 02:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:08:16.724 02:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:16.724 02:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:16.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.724 --rc genhtml_branch_coverage=1 00:08:16.724 --rc genhtml_function_coverage=1 00:08:16.724 --rc genhtml_legend=1 00:08:16.724 --rc 
geninfo_all_blocks=1 00:08:16.724 --rc geninfo_unexecuted_blocks=1 00:08:16.724 00:08:16.724 ' 00:08:16.724 02:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:16.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.724 --rc genhtml_branch_coverage=1 00:08:16.724 --rc genhtml_function_coverage=1 00:08:16.724 --rc genhtml_legend=1 00:08:16.724 --rc geninfo_all_blocks=1 00:08:16.724 --rc geninfo_unexecuted_blocks=1 00:08:16.724 00:08:16.724 ' 00:08:16.724 02:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:16.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.724 --rc genhtml_branch_coverage=1 00:08:16.724 --rc genhtml_function_coverage=1 00:08:16.724 --rc genhtml_legend=1 00:08:16.724 --rc geninfo_all_blocks=1 00:08:16.724 --rc geninfo_unexecuted_blocks=1 00:08:16.724 00:08:16.724 ' 00:08:16.724 02:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:16.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.724 --rc genhtml_branch_coverage=1 00:08:16.724 --rc genhtml_function_coverage=1 00:08:16.724 --rc genhtml_legend=1 00:08:16.724 --rc geninfo_all_blocks=1 00:08:16.724 --rc geninfo_unexecuted_blocks=1 00:08:16.724 00:08:16.724 ' 00:08:16.724 02:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:16.724 02:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:16.724 02:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:16.724 02:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:16.724 02:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:08:16.724 02:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:16.724 02:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:16.724 02:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:16.724 02:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:16.724 02:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:16.724 02:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:16.724 02:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:16.724 02:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:16.724 02:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:16.724 02:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:16.724 02:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:16.724 02:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:16.724 02:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:16.724 02:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:16.724 02:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:08:16.724 
02:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:16.724 02:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:16.724 02:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:16.724 02:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.724 02:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.724 02:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.724 02:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:16.724 02:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.724 02:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:08:16.724 02:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:16.724 02:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:16.724 02:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:16.724 02:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
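The paths/export.sh trace above prepends the same three toolchain directories on every source, so PATH accumulates many duplicate entries (visible in the repeated `/opt/golangci/... :/opt/protoc/... :/opt/go/...` runs). A first-seen-order dedup pass is one common remedy; this sketch is illustrative only and not part of the SPDK scripts:

```shell
#!/usr/bin/env bash
# Remove duplicate entries from a PATH-like colon-separated list,
# keeping the first occurrence of each directory.
dedup_path() {
    local out='' dir
    local IFS=:                 # split the argument on colons
    for dir in $1; do
        case ":$out:" in
            *":$dir:"*) ;;                  # already present: skip
            *) out=${out:+$out:}$dir ;;     # append, adding ':' only if non-empty
        esac
    done
    printf '%s\n' "$out"
}

dedup_path "/a/bin:/b/bin:/a/bin:/c/bin:/b/bin"   # → /a/bin:/b/bin:/c/bin
```

Keeping first-seen order preserves lookup precedence: a command resolved from `/a/bin` before deduplication still resolves from `/a/bin` afterwards.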
00:08:16.724 02:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:16.724 02:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:16.724 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:16.724 02:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:16.724 02:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:16.724 02:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:16.724 02:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:16.724 02:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:16.724 02:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:16.724 02:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:16.724 02:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:16.724 02:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:16.724 02:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:16.724 02:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:16.724 02:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:16.724 02:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:16.724 02:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:16.724 02:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:16.724 02:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:16.724 02:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:08:16.724 02:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:18.627 02:28:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:18.627 02:28:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:08:18.627 02:28:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:18.627 02:28:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:18.627 02:28:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:18.627 02:28:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:18.627 02:28:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:18.627 02:28:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:08:18.627 02:28:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:18.627 02:28:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:08:18.627 02:28:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:08:18.627 02:28:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:08:18.627 02:28:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 
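The `[: : integer expression expected` message in the nvmf/common.sh trace above comes from an empty string reaching an arithmetic test (`'[' '' -eq 1 ']'`): `test`'s `-eq` requires both operands to be integers. A defensive form defaults the value before testing; `HUGE_FLAG` here is a hypothetical variable name used only for illustration:

```shell
#!/usr/bin/env bash
# Guarding an integer test against an empty/unset variable.
# HUGE_FLAG is a hypothetical stand-in for the empty operand seen
# in the trace above.
HUGE_FLAG=""

# ${HUGE_FLAG:-0} substitutes 0 when the variable is unset or empty,
# so test always receives a valid integer operand.
if [ "${HUGE_FLAG:-0}" -eq 1 ]; then
    result="enabled"
else
    result="disabled"
fi
echo "$result"   # → disabled
```

The unguarded `[ "$HUGE_FLAG" -eq 1 ]` would print the same "integer expression expected" diagnostic and return a non-zero status, which the harness tolerates here but which makes the log noisy.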
00:08:18.627 02:28:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:08:18.628 02:28:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:08:18.628 02:28:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:18.628 02:28:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:18.628 02:28:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:18.628 02:28:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:18.628 02:28:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:18.628 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:18.628 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:18.628 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:18.628 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:18.628 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:18.628 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:18.628 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:18.628 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:08:18.628 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:18.628 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:18.628 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:18.628 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:18.628 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:18.628 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:18.628 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:18.628 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:18.628 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:18.628 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:18.628 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:18.628 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:18.628 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:18.628 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:18.628 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:18.628 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:18.628 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:18.628 02:28:27 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:18.628 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:18.628 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:18.628 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:18.628 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:18.628 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:18.628 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:18.628 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:18.628 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:18.628 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:18.628 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:18.628 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:18.628 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:18.628 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:18.628 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:18.628 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:18.628 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:08:18.628 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:18.628 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:18.628 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:18.628 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:18.628 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:18.628 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:18.628 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:18.628 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:18.628 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:18.628 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:18.628 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:18.628 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:08:18.628 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:18.628 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:18.628 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:18.628 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:18.628 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:18.628 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:18.628 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:18.628 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:18.628 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:18.628 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:18.628 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:18.628 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:18.628 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:18.628 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:18.628 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:18.628 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:18.628 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:18.628 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:18.887 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:18.887 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:08:18.887 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:18.887 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:18.887 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:18.887 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:18.887 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:18.887 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:18.887 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:18.887 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.280 ms 00:08:18.887 00:08:18.887 --- 10.0.0.2 ping statistics --- 00:08:18.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:18.887 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:08:18.887 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:18.887 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:18.887 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:08:18.887 00:08:18.887 --- 10.0.0.1 ping statistics --- 00:08:18.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:18.887 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:08:18.887 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:18.887 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:08:18.887 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:18.887 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:18.887 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:18.887 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:18.887 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:18.887 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:18.887 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:18.887 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:18.887 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:18.887 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:18.887 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:18.887 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:18.887 02:28:27 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:18.887 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2851894 00:08:18.887 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2851894 00:08:18.887 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:18.887 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2851894 ']' 00:08:18.887 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:18.887 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:18.887 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:18.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:18.887 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:18.887 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:18.887 [2024-11-17 02:28:27.316061] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:08:18.887 [2024-11-17 02:28:27.316257] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:19.145 [2024-11-17 02:28:27.466883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:19.404 [2024-11-17 02:28:27.607234] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:19.404 [2024-11-17 02:28:27.607315] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:19.404 [2024-11-17 02:28:27.607341] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:19.404 [2024-11-17 02:28:27.607366] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:19.404 [2024-11-17 02:28:27.607386] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
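The startup handshake traced above (nvmf_tgt launched inside the test namespace, then "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...") can be sketched as below. This is a simplified reconstruction, not autotest_common.sh itself; the binary path and the retry budget are illustrative assumptions, while the namespace name and the `-i 0 -e 0xFFFF -m 0x1E` flags mirror the log:

```shell
# Minimal sketch of the target launch + waitforlisten handshake.
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")

start_target() {
  # -i 0: shm instance id, -e 0xFFFF: tracepoint group mask,
  # -m 0x1E: core mask for cores 1-4 (the four reactors started above).
  "${NVMF_TARGET_NS_CMD[@]}" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
  nvmfpid=$!
}

# waitforlisten PID [SOCK]: succeed once the app's RPC UNIX socket exists,
# fail early if the process dies during startup.
waitforlisten() {
  local pid=$1 sock=${2:-/var/tmp/spdk.sock} retries=100
  while (( retries-- > 0 )); do
    kill -0 "$pid" 2>/dev/null || return 1   # app died before listening
    [[ -S $sock ]] && return 0               # RPC socket is up
    sleep 0.1
  done
  return 1
}
```

Only after `waitforlisten` returns does the harness issue RPCs such as `nvmf_create_transport`, which is why the log's "return 0" precedes the transport-init notice.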
00:08:19.404 [2024-11-17 02:28:27.610585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:19.404 [2024-11-17 02:28:27.610699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:19.404 [2024-11-17 02:28:27.610745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:19.404 [2024-11-17 02:28:27.610749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:19.971 02:28:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:19.971 02:28:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:19.971 02:28:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:19.971 02:28:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:19.971 02:28:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:19.971 02:28:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:19.971 02:28:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:19.971 02:28:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.971 02:28:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:19.971 [2024-11-17 02:28:28.303278] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:19.971 02:28:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.971 02:28:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:19.971 02:28:28 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:19.971 02:28:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:19.971 02:28:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:19.971 02:28:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:19.971 02:28:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:19.971 02:28:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.971 02:28:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:19.971 Malloc0 00:08:19.971 [2024-11-17 02:28:28.426809] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:20.230 02:28:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.230 02:28:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:20.230 02:28:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:20.230 02:28:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:20.230 02:28:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2852073 00:08:20.230 02:28:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2852073 /var/tmp/bdevperf.sock 00:08:20.230 02:28:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2852073 ']' 00:08:20.230 02:28:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:20.230 02:28:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:20.230 02:28:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:20.230 02:28:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:20.230 02:28:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:20.230 02:28:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:20.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:20.230 02:28:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:20.230 02:28:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:20.230 02:28:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:20.230 02:28:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:20.230 02:28:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:20.230 { 00:08:20.230 "params": { 00:08:20.230 "name": "Nvme$subsystem", 00:08:20.230 "trtype": "$TEST_TRANSPORT", 00:08:20.230 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:20.230 "adrfam": "ipv4", 00:08:20.230 "trsvcid": "$NVMF_PORT", 00:08:20.230 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:20.230 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:20.230 "hdgst": ${hdgst:-false}, 
00:08:20.230 "ddgst": ${ddgst:-false} 00:08:20.230 }, 00:08:20.230 "method": "bdev_nvme_attach_controller" 00:08:20.230 } 00:08:20.230 EOF 00:08:20.230 )") 00:08:20.230 02:28:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:20.230 02:28:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:20.230 02:28:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:20.230 02:28:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:20.230 "params": { 00:08:20.230 "name": "Nvme0", 00:08:20.230 "trtype": "tcp", 00:08:20.230 "traddr": "10.0.0.2", 00:08:20.230 "adrfam": "ipv4", 00:08:20.230 "trsvcid": "4420", 00:08:20.230 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:20.230 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:20.230 "hdgst": false, 00:08:20.230 "ddgst": false 00:08:20.230 }, 00:08:20.230 "method": "bdev_nvme_attach_controller" 00:08:20.230 }' 00:08:20.230 [2024-11-17 02:28:28.545357] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:08:20.230 [2024-11-17 02:28:28.545498] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2852073 ] 00:08:20.230 [2024-11-17 02:28:28.681420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.488 [2024-11-17 02:28:28.810349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.055 Running I/O for 10 seconds... 
00:08:21.055 02:28:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:21.055 02:28:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:21.055 02:28:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:21.055 02:28:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.055 02:28:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:21.315 02:28:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.315 02:28:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:21.315 02:28:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:21.315 02:28:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:21.315 02:28:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:21.315 02:28:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:21.315 02:28:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:21.315 02:28:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:21.315 02:28:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:21.315 02:28:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:08:21.315 02:28:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:21.315 02:28:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.315 02:28:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:21.315 02:28:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.315 02:28:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=131 00:08:21.315 02:28:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 131 -ge 100 ']' 00:08:21.315 02:28:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:21.315 02:28:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:21.315 02:28:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:21.315 02:28:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:21.315 02:28:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.315 02:28:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:21.315 [2024-11-17 02:28:29.570525] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:21.315 [2024-11-17 02:28:29.570617] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:21.315 [2024-11-17 02:28:29.570639] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000002c80 is same with the state(6) to be set 00:08:21.315 [2024-11-17 02:28:29.570659 .. 02:28:29.571279] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: previous recv-state message repeated for tqpair=0x618000002c80 00:08:21.316 [2024-11-17 02:28:29.571296] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the
state(6) to be set 00:08:21.316 [2024-11-17 02:28:29.571313] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:21.316 [2024-11-17 02:28:29.571330] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:21.316 [2024-11-17 02:28:29.571348] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:21.316 [2024-11-17 02:28:29.571365] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:21.316 [2024-11-17 02:28:29.571383] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:21.316 [2024-11-17 02:28:29.574410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.316 [2024-11-17 02:28:29.574479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.316 [2024-11-17 02:28:29.574526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.316 [2024-11-17 02:28:29.574551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.316 [2024-11-17 02:28:29.574578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.316 [2024-11-17 02:28:29.574602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.316 [2024-11-17 02:28:29.574627] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.316 [2024-11-17 02:28:29.574649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.316 [2024-11-17 02:28:29.574673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.316 [2024-11-17 02:28:29.574702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.316 [2024-11-17 02:28:29.574729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.316 [2024-11-17 02:28:29.574751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.316 [2024-11-17 02:28:29.574775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.316 [2024-11-17 02:28:29.574797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.316 [2024-11-17 02:28:29.574842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.316 [2024-11-17 02:28:29.574866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.316 02:28:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.316 [2024-11-17 02:28:29.574890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:08:21.316 [2024-11-17 02:28:29.574913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.316 [2024-11-17 02:28:29.574937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.316 [2024-11-17 02:28:29.574959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.316 [2024-11-17 02:28:29.574984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.316 [2024-11-17 02:28:29.575006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.316 [2024-11-17 02:28:29.575031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.316 02:28:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:21.316 [2024-11-17 02:28:29.575053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.316 [2024-11-17 02:28:29.575077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.316 [2024-11-17 02:28:29.575107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.316 [2024-11-17 02:28:29.575135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.316 [2024-11-17 
02:28:29.575166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.316 [2024-11-17 02:28:29.575189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.316 02:28:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.316 [2024-11-17 02:28:29.575212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.316 [2024-11-17 02:28:29.575236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.316 [2024-11-17 02:28:29.575262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.316 [2024-11-17 02:28:29.575287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.316 [2024-11-17 02:28:29.575310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.316 02:28:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:21.316 [2024-11-17 02:28:29.575334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.316 [2024-11-17 02:28:29.575356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.316 [2024-11-17 02:28:29.575380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.316 
[2024-11-17 02:28:29.575412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.316 [2024-11-17 02:28:29.575436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.316 [2024-11-17 02:28:29.575458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.316 [2024-11-17 02:28:29.575492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.316 [2024-11-17 02:28:29.575514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.316 [2024-11-17 02:28:29.575538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.316 [2024-11-17 02:28:29.575560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.316 [2024-11-17 02:28:29.575584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.316 [2024-11-17 02:28:29.575606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.316 [2024-11-17 02:28:29.575630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.316 [2024-11-17 02:28:29.575653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.316 [2024-11-17 02:28:29.575677] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.316 [2024-11-17 02:28:29.575699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.316 [2024-11-17 02:28:29.575724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.316 [2024-11-17 02:28:29.575746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.316 [2024-11-17 02:28:29.575771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.316 [2024-11-17 02:28:29.575793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.316 [2024-11-17 02:28:29.575817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.316 [2024-11-17 02:28:29.575843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.316 [2024-11-17 02:28:29.575869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.316 [2024-11-17 02:28:29.575890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.316 [2024-11-17 02:28:29.575915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.316 [2024-11-17 02:28:29.575936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.316 [2024-11-17 02:28:29.575960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.316 [2024-11-17 02:28:29.575981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.316 [2024-11-17 02:28:29.576006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.316 [2024-11-17 02:28:29.576028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.316 [2024-11-17 02:28:29.576052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.316 [2024-11-17 02:28:29.576074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.317 [2024-11-17 02:28:29.576114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.317 [2024-11-17 02:28:29.576139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.317 [2024-11-17 02:28:29.576174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.317 [2024-11-17 02:28:29.576195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.317 [2024-11-17 02:28:29.576219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.317 [2024-11-17 02:28:29.576241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.317 [2024-11-17 02:28:29.576265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.317 [2024-11-17 02:28:29.576287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.317 [2024-11-17 02:28:29.576311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.317 [2024-11-17 02:28:29.576332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.317 [2024-11-17 02:28:29.576355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.317 [2024-11-17 02:28:29.576377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.317 [2024-11-17 02:28:29.576409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.317 [2024-11-17 02:28:29.576431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.317 [2024-11-17 02:28:29.576469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.317 [2024-11-17 02:28:29.576491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.317 
[2024-11-17 02:28:29.576515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.317 [2024-11-17 02:28:29.576537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.317 [2024-11-17 02:28:29.576562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.317 [2024-11-17 02:28:29.576583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.317 [2024-11-17 02:28:29.576607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.317 [2024-11-17 02:28:29.576628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.317 [2024-11-17 02:28:29.576652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.317 [2024-11-17 02:28:29.576674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.317 [2024-11-17 02:28:29.576698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.317 [2024-11-17 02:28:29.576720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.317 [2024-11-17 02:28:29.576743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.317 [2024-11-17 02:28:29.576765] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.317 [2024-11-17 02:28:29.576790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.317 [2024-11-17 02:28:29.576811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.317 [2024-11-17 02:28:29.576835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.317 [2024-11-17 02:28:29.576857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.317 [2024-11-17 02:28:29.576881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.317 [2024-11-17 02:28:29.576903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.317 [2024-11-17 02:28:29.576927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.317 [2024-11-17 02:28:29.576949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.317 [2024-11-17 02:28:29.576973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.317 [2024-11-17 02:28:29.576994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.317 [2024-11-17 02:28:29.577018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.317 [2024-11-17 02:28:29.577043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.317 [2024-11-17 02:28:29.577068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.317 [2024-11-17 02:28:29.577090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.317 [2024-11-17 02:28:29.577122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.317 [2024-11-17 02:28:29.577145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.317 [2024-11-17 02:28:29.577173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.317 [2024-11-17 02:28:29.577195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.317 [2024-11-17 02:28:29.577218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.317 [2024-11-17 02:28:29.577240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.317 [2024-11-17 02:28:29.577263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.317 [2024-11-17 02:28:29.577285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:08:21.317 [2024-11-17 02:28:29.577309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.317 [2024-11-17 02:28:29.577331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.317 [2024-11-17 02:28:29.577354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.317 [2024-11-17 02:28:29.577376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.317 [2024-11-17 02:28:29.577400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.317 [2024-11-17 02:28:29.577426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.317 [2024-11-17 02:28:29.577450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.317 [2024-11-17 02:28:29.577476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.317 [2024-11-17 02:28:29.577500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.317 [2024-11-17 02:28:29.577522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.317 [2024-11-17 02:28:29.577546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.317 [2024-11-17 
02:28:29.577567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.317 [2024-11-17 02:28:29.577630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:08:21.317 [2024-11-17 02:28:29.578020] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:21.317 [2024-11-17 02:28:29.578055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.317 [2024-11-17 02:28:29.578081] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:08:21.317 [2024-11-17 02:28:29.578109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.317 [2024-11-17 02:28:29.578134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:08:21.317 [2024-11-17 02:28:29.578157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.317 [2024-11-17 02:28:29.578178] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:08:21.317 [2024-11-17 02:28:29.578198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.317 [2024-11-17 02:28:29.578217] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:08:21.317 [2024-11-17 02:28:29.579431] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: 
*NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:08:21.317 task offset: 24576 on job bdev=Nvme0n1 fails 00:08:21.317 00:08:21.317 Latency(us) 00:08:21.317 [2024-11-17T01:28:29.777Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:21.317 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:21.317 Job: Nvme0n1 ended in about 0.17 seconds with error 00:08:21.317 Verification LBA range: start 0x0 length 0x400 00:08:21.317 Nvme0n1 : 0.17 1147.94 71.75 382.65 0.00 39148.33 4587.52 41360.50 00:08:21.317 [2024-11-17T01:28:29.777Z] =================================================================================================================== 00:08:21.317 [2024-11-17T01:28:29.778Z] Total : 1147.94 71.75 382.65 0.00 39148.33 4587.52 41360.50 00:08:21.318 02:28:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.318 02:28:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:21.318 [2024-11-17 02:28:29.584433] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:21.318 [2024-11-17 02:28:29.584501] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:08:21.318 [2024-11-17 02:28:29.717305] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:08:22.335 02:28:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2852073 00:08:22.336 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2852073) - No such process 00:08:22.336 02:28:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:22.336 02:28:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:22.336 02:28:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:22.336 02:28:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:22.336 02:28:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:22.336 02:28:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:22.336 02:28:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:22.336 02:28:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:22.336 { 00:08:22.336 "params": { 00:08:22.336 "name": "Nvme$subsystem", 00:08:22.336 "trtype": "$TEST_TRANSPORT", 00:08:22.336 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:22.336 "adrfam": "ipv4", 00:08:22.336 "trsvcid": "$NVMF_PORT", 00:08:22.336 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:22.336 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:22.336 "hdgst": ${hdgst:-false}, 00:08:22.336 "ddgst": ${ddgst:-false} 00:08:22.336 }, 00:08:22.336 "method": "bdev_nvme_attach_controller" 00:08:22.336 } 00:08:22.336 EOF 00:08:22.336 )") 00:08:22.336 
02:28:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:22.336 02:28:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:22.336 02:28:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:22.336 02:28:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:22.336 "params": { 00:08:22.336 "name": "Nvme0", 00:08:22.336 "trtype": "tcp", 00:08:22.336 "traddr": "10.0.0.2", 00:08:22.336 "adrfam": "ipv4", 00:08:22.336 "trsvcid": "4420", 00:08:22.336 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:22.336 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:22.336 "hdgst": false, 00:08:22.336 "ddgst": false 00:08:22.336 }, 00:08:22.336 "method": "bdev_nvme_attach_controller" 00:08:22.336 }' 00:08:22.336 [2024-11-17 02:28:30.671270] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:08:22.336 [2024-11-17 02:28:30.671419] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2852352 ] 00:08:22.594 [2024-11-17 02:28:30.810522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.594 [2024-11-17 02:28:30.940453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.161 Running I/O for 1 seconds... 
00:08:24.353 1197.00 IOPS, 74.81 MiB/s 00:08:24.353 Latency(us) 00:08:24.353 [2024-11-17T01:28:32.813Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:24.353 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:24.353 Verification LBA range: start 0x0 length 0x400 00:08:24.353 Nvme0n1 : 1.09 1174.99 73.44 0.00 0.00 51648.51 13107.20 47380.10 00:08:24.353 [2024-11-17T01:28:32.813Z] =================================================================================================================== 00:08:24.353 [2024-11-17T01:28:32.813Z] Total : 1174.99 73.44 0.00 0.00 51648.51 13107.20 47380.10 00:08:25.288 02:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:25.288 02:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:25.288 02:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:08:25.288 02:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:25.288 02:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:25.288 02:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:25.288 02:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:08:25.288 02:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:25.288 02:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:08:25.288 02:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:25.288 02:28:33 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:25.288 rmmod nvme_tcp 00:08:25.288 rmmod nvme_fabrics 00:08:25.288 rmmod nvme_keyring 00:08:25.288 02:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:25.288 02:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:08:25.288 02:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:08:25.288 02:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2851894 ']' 00:08:25.288 02:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2851894 00:08:25.288 02:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 2851894 ']' 00:08:25.288 02:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 2851894 00:08:25.288 02:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:08:25.289 02:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:25.289 02:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2851894 00:08:25.289 02:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:25.289 02:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:25.289 02:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2851894' 00:08:25.289 killing process with pid 2851894 00:08:25.289 02:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 2851894 00:08:25.289 02:28:33 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 2851894 00:08:26.664 [2024-11-17 02:28:34.689867] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:26.665 02:28:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:26.665 02:28:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:26.665 02:28:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:26.665 02:28:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:08:26.665 02:28:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:08:26.665 02:28:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:26.665 02:28:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:08:26.665 02:28:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:26.665 02:28:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:26.665 02:28:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:26.665 02:28:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:26.665 02:28:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:28.569 02:28:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:28.569 02:28:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:28.569 00:08:28.569 real 0m11.974s 00:08:28.569 user 0m32.555s 
00:08:28.569 sys 0m3.151s 00:08:28.569 02:28:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:28.569 02:28:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:28.569 ************************************ 00:08:28.569 END TEST nvmf_host_management 00:08:28.569 ************************************ 00:08:28.569 02:28:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:28.569 02:28:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:28.569 02:28:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:28.569 02:28:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:28.569 ************************************ 00:08:28.569 START TEST nvmf_lvol 00:08:28.569 ************************************ 00:08:28.569 02:28:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:28.569 * Looking for test storage... 
00:08:28.569 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:28.569 02:28:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:28.569 02:28:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:08:28.569 02:28:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:28.569 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:28.569 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:28.569 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:28.569 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:28.569 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:08:28.569 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:08:28.569 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:08:28.569 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:08:28.569 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:08:28.569 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:08:28.569 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:08:28.569 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:28.569 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:08:28.569 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:08:28.569 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:28.569 02:28:37 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:28.569 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:08:28.569 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:08:28.569 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:28.569 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:08:28.569 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:08:28.569 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:08:28.569 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:08:28.569 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:28.569 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:08:28.569 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:08:28.569 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:28.569 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:28.569 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:08:28.569 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:28.569 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:28.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.570 --rc genhtml_branch_coverage=1 00:08:28.570 --rc genhtml_function_coverage=1 00:08:28.570 --rc genhtml_legend=1 00:08:28.570 --rc geninfo_all_blocks=1 00:08:28.570 --rc geninfo_unexecuted_blocks=1 
00:08:28.570 00:08:28.570 ' 00:08:28.570 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:28.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.570 --rc genhtml_branch_coverage=1 00:08:28.570 --rc genhtml_function_coverage=1 00:08:28.570 --rc genhtml_legend=1 00:08:28.570 --rc geninfo_all_blocks=1 00:08:28.570 --rc geninfo_unexecuted_blocks=1 00:08:28.570 00:08:28.570 ' 00:08:28.570 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:28.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.570 --rc genhtml_branch_coverage=1 00:08:28.570 --rc genhtml_function_coverage=1 00:08:28.570 --rc genhtml_legend=1 00:08:28.570 --rc geninfo_all_blocks=1 00:08:28.570 --rc geninfo_unexecuted_blocks=1 00:08:28.570 00:08:28.570 ' 00:08:28.570 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:28.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.570 --rc genhtml_branch_coverage=1 00:08:28.570 --rc genhtml_function_coverage=1 00:08:28.570 --rc genhtml_legend=1 00:08:28.570 --rc geninfo_all_blocks=1 00:08:28.570 --rc geninfo_unexecuted_blocks=1 00:08:28.570 00:08:28.570 ' 00:08:28.570 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:28.570 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:28.570 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:28.570 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:28.829 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:28.829 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:28.829 02:28:37 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:28.829 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:28.829 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:28.829 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:28.829 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:28.829 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:28.829 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:28.829 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:28.829 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:28.829 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:28.829 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:28.829 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:28.829 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:28.829 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:08:28.829 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:28.829 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:28.829 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:28.829 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.829 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.829 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.829 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:28.829 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.829 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:08:28.829 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:28.829 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:28.829 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:28.829 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:28.829 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:28.829 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:28.829 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:28.829 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:28.829 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:28.829 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:28.829 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:28.829 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:28.829 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:08:28.829 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:28.829 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:28.829 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:28.829 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:28.829 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:28.829 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:28.829 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:28.829 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:28.829 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:28.829 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:28.829 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:28.829 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:28.829 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:28.829 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:08:28.829 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:30.733 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:30.733 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:08:30.733 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:30.733 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:30.733 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:30.733 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:30.733 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:30.733 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:08:30.733 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:30.733 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:08:30.733 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:08:30.733 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:08:30.733 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:08:30.733 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:08:30.733 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:08:30.733 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:30.733 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:30.733 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:30.733 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:30.733 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:30.734 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:30.734 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:30.734 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:30.734 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:30.734 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:30.734 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:30.734 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:30.734 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:30.734 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:30.734 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:08:30.734 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:30.734 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:30.734 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:30.734 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:30.734 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:30.734 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:30.734 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:30.734 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:30.734 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:30.734 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:30.734 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:30.734 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:30.734 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:30.734 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:30.734 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:30.734 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:30.734 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:30.734 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:30.734 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:30.734 
02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:30.734 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:30.734 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:30.734 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:30.734 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:30.734 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:30.734 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:30.734 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:30.734 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:30.734 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:30.734 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:30.734 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:30.734 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:30.734 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:30.734 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:30.734 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:30.734 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:30.734 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:30.734 02:28:39 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:30.734 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:30.734 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:30.734 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:30.734 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:30.734 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:30.734 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:08:30.734 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:30.734 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:30.734 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:30.734 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:30.734 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:30.734 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:30.734 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:30.734 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:30.734 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:30.734 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:30.734 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:30.734 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:08:30.734 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:30.734 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:30.734 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:30.734 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:30.734 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:30.734 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:30.734 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:30.734 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:30.734 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:30.734 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:30.734 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:30.992 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:30.992 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:30.992 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:30.992 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:30.992 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.285 ms 00:08:30.992 00:08:30.992 --- 10.0.0.2 ping statistics --- 00:08:30.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:30.992 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:08:30.992 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:30.992 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:30.992 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.093 ms 00:08:30.992 00:08:30.992 --- 10.0.0.1 ping statistics --- 00:08:30.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:30.992 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:08:30.992 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:30.992 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:08:30.992 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:30.992 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:30.992 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:30.992 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:30.992 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:30.992 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:30.992 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:30.992 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:30.992 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:30.992 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:08:30.992 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:30.992 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2854794 00:08:30.992 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:30.992 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2854794 00:08:30.992 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 2854794 ']' 00:08:30.992 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:30.992 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:30.992 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:30.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:30.993 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:30.993 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:30.993 [2024-11-17 02:28:39.340518] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:08:30.993 [2024-11-17 02:28:39.340686] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:31.250 [2024-11-17 02:28:39.491180] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:31.250 [2024-11-17 02:28:39.628390] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:31.250 [2024-11-17 02:28:39.628484] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:31.250 [2024-11-17 02:28:39.628510] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:31.251 [2024-11-17 02:28:39.628534] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:31.251 [2024-11-17 02:28:39.628555] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
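The network plumbing traced earlier (nvmf/common.sh @265 through @291) isolates one port of the NIC pair in a network namespace so that initiator and target can exchange real TCP traffic on a single host. A condensed sketch of those steps, using the interface names and addresses this log shows (cvl_0_0/cvl_0_1 are this testbed's devices; all commands need root, so this is a recipe rather than something to run blindly):

```shell
#!/usr/bin/env bash
# Sketch of the namespace setup performed by nvmf/common.sh in this run.
# Interface names (cvl_0_0, cvl_0_1) and 10.0.0.x addresses are the ones the
# log shows; adjust for your hardware. Requires root.
NS=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"            # target-side port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1        # initiator-side port stays in the root ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up

# The ipts wrapper tags its rule with an SPDK_NVMF comment so the later iptr
# cleanup can drop every test rule at once via:
#   iptables-save | grep -v SPDK_NVMF | iptables-restore
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

ping -c 1 10.0.0.2                         # root ns -> namespace
ip netns exec "$NS" ping -c 1 10.0.0.1     # namespace -> root ns
```

Once both pings succeed, the target app is launched inside the namespace (ip netns exec ... nvmf_tgt) while perf and the RPC client run from the root namespace.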
00:08:31.251 [2024-11-17 02:28:39.631249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:31.251 [2024-11-17 02:28:39.631319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.251 [2024-11-17 02:28:39.631326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:32.183 02:28:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:32.183 02:28:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:08:32.183 02:28:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:32.183 02:28:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:32.183 02:28:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:32.183 02:28:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:32.183 02:28:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:32.183 [2024-11-17 02:28:40.633543] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:32.440 02:28:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:32.698 02:28:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:32.698 02:28:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:32.956 02:28:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:32.956 02:28:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:33.213 02:28:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:33.778 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=045cc848-0a8d-4134-9b38-85cce58a6cd5 00:08:33.778 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 045cc848-0a8d-4134-9b38-85cce58a6cd5 lvol 20 00:08:34.035 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=86b95ebf-bd2c-4757-804c-8037cd2018c1 00:08:34.035 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:34.292 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 86b95ebf-bd2c-4757-804c-8037cd2018c1 00:08:34.549 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:34.807 [2024-11-17 02:28:43.133433] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:34.807 02:28:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:35.064 02:28:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2855278 00:08:35.064 02:28:43 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:35.064 02:28:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:35.999 02:28:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 86b95ebf-bd2c-4757-804c-8037cd2018c1 MY_SNAPSHOT 00:08:36.566 02:28:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=3afadd22-3396-4777-a175-49979e425a6b 00:08:36.566 02:28:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 86b95ebf-bd2c-4757-804c-8037cd2018c1 30 00:08:36.823 02:28:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 3afadd22-3396-4777-a175-49979e425a6b MY_CLONE 00:08:37.081 02:28:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=c5bfeda3-d7f6-476f-953b-14aa3a401598 00:08:37.081 02:28:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate c5bfeda3-d7f6-476f-953b-14aa3a401598 00:08:38.016 02:28:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2855278 00:08:46.125 Initializing NVMe Controllers 00:08:46.125 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:46.125 Controller IO queue size 128, less than required. 00:08:46.125 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
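The per-core rows and the Total row that spdk_nvme_perf prints below are related by a simple identity: for 4 KiB I/O, MiB/s is IOPS * 4096 / 2^20 (i.e. IOPS / 256), and the Total average latency is the IOPS-weighted mean of the per-core averages. A quick check of this run's numbers with awk:

```shell
# Sanity-check of the perf summary (values copied from this run's table).
awk 'BEGIN {
  iops3 = 8172.10; avg3 = 15667.74   # "from core 3" row
  iops4 = 8069.20; avg4 = 15867.08   # "from core 4" row
  printf "core3 MiB/s: %.2f\n", iops3 / 256          # 4096 B per I/O => IOPS/256
  printf "core4 MiB/s: %.2f\n", iops4 / 256
  printf "total IOPS:  %.2f\n", iops3 + iops4
  # Total average latency is the IOPS-weighted mean of the per-core averages:
  printf "total avg:   %.2f\n", (iops3 * avg3 + iops4 * avg4) / (iops3 + iops4)
}'
```

The printed values (31.92, 31.52, 16241.30, 15766.78) match the MiB/s columns and the Total row in the table below.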
00:08:46.125 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:46.125 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:46.125 Initialization complete. Launching workers. 00:08:46.125 ======================================================== 00:08:46.125 Latency(us) 00:08:46.125 Device Information : IOPS MiB/s Average min max 00:08:46.125 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 8172.10 31.92 15667.74 500.82 142488.06 00:08:46.125 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 8069.20 31.52 15867.08 3452.25 161970.21 00:08:46.125 ======================================================== 00:08:46.125 Total : 16241.30 63.44 15766.78 500.82 161970.21 00:08:46.125 00:08:46.125 02:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:46.125 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 86b95ebf-bd2c-4757-804c-8037cd2018c1 00:08:46.125 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 045cc848-0a8d-4134-9b38-85cce58a6cd5 00:08:46.383 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:46.383 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:46.383 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:46.383 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:46.383 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:08:46.383 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:46.384 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:08:46.384 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:46.384 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:46.384 rmmod nvme_tcp 00:08:46.384 rmmod nvme_fabrics 00:08:46.642 rmmod nvme_keyring 00:08:46.642 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:46.642 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:08:46.642 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:08:46.642 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2854794 ']' 00:08:46.642 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2854794 00:08:46.642 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 2854794 ']' 00:08:46.642 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 2854794 00:08:46.642 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:08:46.642 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:46.642 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2854794 00:08:46.642 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:46.642 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:46.642 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2854794' 00:08:46.642 killing process with pid 2854794 00:08:46.642 02:28:54 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 2854794 00:08:46.642 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 2854794 00:08:48.016 02:28:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:48.016 02:28:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:48.016 02:28:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:48.016 02:28:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:08:48.016 02:28:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:08:48.016 02:28:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:48.016 02:28:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:08:48.016 02:28:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:48.016 02:28:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:48.016 02:28:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:48.016 02:28:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:48.016 02:28:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:49.919 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:49.919 00:08:49.919 real 0m21.475s 00:08:49.919 user 1m11.891s 00:08:49.919 sys 0m5.393s 00:08:49.919 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:49.919 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:49.919 ************************************ 00:08:49.919 END TEST 
nvmf_lvol 00:08:49.919 ************************************ 00:08:50.178 02:28:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:50.178 02:28:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:50.178 02:28:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:50.178 02:28:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:50.178 ************************************ 00:08:50.178 START TEST nvmf_lvs_grow 00:08:50.178 ************************************ 00:08:50.178 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:50.178 * Looking for test storage... 00:08:50.178 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:50.178 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:50.178 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:08:50.178 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:50.178 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:50.178 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:50.178 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:50.178 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:50.178 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:50.178 02:28:58 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:08:50.178 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:50.178 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:50.178 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:50.178 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:50.178 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:50.178 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:50.178 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:50.178 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:50.178 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:50.178 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:50.178 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:50.178 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:50.178 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:50.178 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:50.178 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:50.178 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:50.178 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:50.178 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:50.178 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:50.178 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:50.178 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:50.178 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:50.178 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:50.178 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:50.178 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:50.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.178 --rc genhtml_branch_coverage=1 00:08:50.178 --rc genhtml_function_coverage=1 00:08:50.178 --rc genhtml_legend=1 00:08:50.178 --rc geninfo_all_blocks=1 00:08:50.178 --rc geninfo_unexecuted_blocks=1 00:08:50.178 00:08:50.178 ' 
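The cmp_versions trace above (scripts/common.sh @333 through @368) splits each version string on '.' and '-' (IFS=.-) into arrays and walks them field by field, which is how 'lt 1.15 2' decides that the installed lcov predates 2.x. A minimal re-creation of the less-than case (the function name is ours, and treating missing fields as zero is our simplification; it assumes plain decimal fields):

```shell
# Field-wise version comparison in the style of scripts/common.sh:
# split on '.' and '-', compare numerically, missing fields count as 0.
version_lt() {
  local -a v1 v2
  IFS=.- read -ra v1 <<<"$1"
  IFS=.- read -ra v2 <<<"$2"
  local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
  for ((i = 0; i < n; i++)); do
    local a=${v1[i]:-0} b=${v2[i]:-0}   # pad the shorter version with zeros
    (( a < b )) && return 0
    (( a > b )) && return 1
  done
  return 1   # equal is not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"    # first fields already differ: 1 < 2
```

Because the comparison stops at the first differing field, '1.15 < 2' holds even though 15 > 0 in the second position.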
00:08:50.178 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:50.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.178 --rc genhtml_branch_coverage=1 00:08:50.178 --rc genhtml_function_coverage=1 00:08:50.178 --rc genhtml_legend=1 00:08:50.178 --rc geninfo_all_blocks=1 00:08:50.178 --rc geninfo_unexecuted_blocks=1 00:08:50.178 00:08:50.178 ' 00:08:50.178 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:50.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.178 --rc genhtml_branch_coverage=1 00:08:50.178 --rc genhtml_function_coverage=1 00:08:50.178 --rc genhtml_legend=1 00:08:50.178 --rc geninfo_all_blocks=1 00:08:50.178 --rc geninfo_unexecuted_blocks=1 00:08:50.178 00:08:50.178 ' 00:08:50.178 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:50.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.178 --rc genhtml_branch_coverage=1 00:08:50.178 --rc genhtml_function_coverage=1 00:08:50.178 --rc genhtml_legend=1 00:08:50.178 --rc geninfo_all_blocks=1 00:08:50.178 --rc geninfo_unexecuted_blocks=1 00:08:50.178 00:08:50.178 ' 00:08:50.178 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:50.179 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:50.179 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:50.179 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:50.179 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:50.179 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:50.179 02:28:58 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:50.179 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:50.179 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:50.179 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:50.179 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:50.179 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:50.179 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:50.179 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:50.179 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:50.179 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:50.179 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:50.179 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:50.179 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:50.179 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:50.179 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:50.179 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:50.179 
02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:50.179 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.179 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.179 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.179 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:50.179 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.179 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:50.179 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:50.179 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:50.179 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:50.179 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:50.179 02:28:58 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:50.179 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:50.179 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:50.179 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:50.179 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:50.179 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:50.179 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:50.179 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:50.179 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:50.179 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:50.179 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:50.179 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:50.179 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:50.179 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:50.179 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:50.179 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:50.179 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:50.179 
02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:50.179 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:50.179 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:08:50.179 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:52.707 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:52.707 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:52.708 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:52.708 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:52.708 
02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:52.708 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:52.708 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:52.708 02:29:00 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:52.708 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:52.708 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.297 ms 00:08:52.708 00:08:52.708 --- 10.0.0.2 ping statistics --- 00:08:52.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:52.708 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:08:52.708 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:52.708 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:52.708 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.071 ms 00:08:52.708 00:08:52.708 --- 10.0.0.1 ping statistics --- 00:08:52.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:52.708 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:08:52.709 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:52.709 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:08:52.709 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:52.709 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:52.709 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:52.709 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:52.709 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:52.709 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:52.709 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:52.709 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
nvmfappstart -m 0x1 00:08:52.709 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:52.709 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:52.709 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:52.709 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2858805 00:08:52.709 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2858805 00:08:52.709 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 2858805 ']' 00:08:52.709 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:52.709 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:52.709 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:52.709 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:52.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:52.709 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:52.709 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:52.709 [2024-11-17 02:29:00.887163] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:08:52.709 [2024-11-17 02:29:00.887330] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:52.709 [2024-11-17 02:29:01.034121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.968 [2024-11-17 02:29:01.167220] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:52.968 [2024-11-17 02:29:01.167293] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:52.968 [2024-11-17 02:29:01.167324] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:52.968 [2024-11-17 02:29:01.167348] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:52.968 [2024-11-17 02:29:01.167368] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:52.968 [2024-11-17 02:29:01.169012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.535 02:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:53.535 02:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:08:53.535 02:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:53.535 02:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:53.535 02:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:53.535 02:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:53.535 02:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:53.793 [2024-11-17 02:29:02.116945] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:53.793 02:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:53.793 02:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:53.793 02:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:53.793 02:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:53.793 ************************************ 00:08:53.793 START TEST lvs_grow_clean 00:08:53.793 ************************************ 00:08:53.793 02:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:08:53.793 02:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:08:53.793 02:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:53.793 02:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:53.793 02:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:53.793 02:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:53.793 02:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:53.793 02:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:53.793 02:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:53.793 02:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:54.051 02:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:54.051 02:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:54.618 02:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=475baa06-08f8-404e-9589-20b38b116d41 00:08:54.618 02:29:02 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 475baa06-08f8-404e-9589-20b38b116d41 00:08:54.618 02:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:54.618 02:29:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:54.618 02:29:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:54.618 02:29:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 475baa06-08f8-404e-9589-20b38b116d41 lvol 150 00:08:54.876 02:29:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=f09674ac-7759-4c8a-86ea-f5434c96e72b 00:08:54.876 02:29:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:55.133 02:29:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:55.391 [2024-11-17 02:29:03.635303] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:55.391 [2024-11-17 02:29:03.635406] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:55.391 true 00:08:55.391 02:29:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 475baa06-08f8-404e-9589-20b38b116d41 00:08:55.391 02:29:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:55.689 02:29:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:55.689 02:29:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:55.973 02:29:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f09674ac-7759-4c8a-86ea-f5434c96e72b 00:08:56.231 02:29:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:56.488 [2024-11-17 02:29:04.795172] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:56.488 02:29:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:56.746 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2859264 00:08:56.746 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:56.746 02:29:05 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:56.746 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2859264 /var/tmp/bdevperf.sock 00:08:56.746 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 2859264 ']' 00:08:56.746 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:56.747 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:56.747 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:56.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:56.747 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:56.747 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:56.747 [2024-11-17 02:29:05.198000] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:08:56.747 [2024-11-17 02:29:05.198167] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2859264 ] 00:08:57.005 [2024-11-17 02:29:05.342769] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.263 [2024-11-17 02:29:05.482353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:57.828 02:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:57.828 02:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:08:57.828 02:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:58.393 Nvme0n1 00:08:58.393 02:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:58.651 [ 00:08:58.651 { 00:08:58.651 "name": "Nvme0n1", 00:08:58.651 "aliases": [ 00:08:58.651 "f09674ac-7759-4c8a-86ea-f5434c96e72b" 00:08:58.651 ], 00:08:58.651 "product_name": "NVMe disk", 00:08:58.651 "block_size": 4096, 00:08:58.651 "num_blocks": 38912, 00:08:58.651 "uuid": "f09674ac-7759-4c8a-86ea-f5434c96e72b", 00:08:58.651 "numa_id": 0, 00:08:58.651 "assigned_rate_limits": { 00:08:58.651 "rw_ios_per_sec": 0, 00:08:58.651 "rw_mbytes_per_sec": 0, 00:08:58.651 "r_mbytes_per_sec": 0, 00:08:58.651 "w_mbytes_per_sec": 0 00:08:58.651 }, 00:08:58.651 "claimed": false, 00:08:58.651 "zoned": false, 00:08:58.651 "supported_io_types": { 00:08:58.651 "read": true, 
00:08:58.651 "write": true, 00:08:58.651 "unmap": true, 00:08:58.651 "flush": true, 00:08:58.651 "reset": true, 00:08:58.651 "nvme_admin": true, 00:08:58.651 "nvme_io": true, 00:08:58.651 "nvme_io_md": false, 00:08:58.651 "write_zeroes": true, 00:08:58.651 "zcopy": false, 00:08:58.651 "get_zone_info": false, 00:08:58.651 "zone_management": false, 00:08:58.651 "zone_append": false, 00:08:58.651 "compare": true, 00:08:58.651 "compare_and_write": true, 00:08:58.651 "abort": true, 00:08:58.651 "seek_hole": false, 00:08:58.651 "seek_data": false, 00:08:58.651 "copy": true, 00:08:58.651 "nvme_iov_md": false 00:08:58.651 }, 00:08:58.651 "memory_domains": [ 00:08:58.651 { 00:08:58.651 "dma_device_id": "system", 00:08:58.651 "dma_device_type": 1 00:08:58.651 } 00:08:58.651 ], 00:08:58.651 "driver_specific": { 00:08:58.651 "nvme": [ 00:08:58.651 { 00:08:58.651 "trid": { 00:08:58.651 "trtype": "TCP", 00:08:58.651 "adrfam": "IPv4", 00:08:58.651 "traddr": "10.0.0.2", 00:08:58.651 "trsvcid": "4420", 00:08:58.651 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:58.651 }, 00:08:58.651 "ctrlr_data": { 00:08:58.651 "cntlid": 1, 00:08:58.651 "vendor_id": "0x8086", 00:08:58.651 "model_number": "SPDK bdev Controller", 00:08:58.651 "serial_number": "SPDK0", 00:08:58.651 "firmware_revision": "25.01", 00:08:58.651 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:58.651 "oacs": { 00:08:58.651 "security": 0, 00:08:58.651 "format": 0, 00:08:58.651 "firmware": 0, 00:08:58.651 "ns_manage": 0 00:08:58.651 }, 00:08:58.651 "multi_ctrlr": true, 00:08:58.651 "ana_reporting": false 00:08:58.651 }, 00:08:58.651 "vs": { 00:08:58.651 "nvme_version": "1.3" 00:08:58.651 }, 00:08:58.651 "ns_data": { 00:08:58.651 "id": 1, 00:08:58.651 "can_share": true 00:08:58.651 } 00:08:58.651 } 00:08:58.651 ], 00:08:58.651 "mp_policy": "active_passive" 00:08:58.651 } 00:08:58.651 } 00:08:58.651 ] 00:08:58.651 02:29:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=2859529 00:08:58.651 02:29:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:58.651 02:29:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:58.909 Running I/O for 10 seconds... 00:08:59.844 Latency(us) 00:08:59.844 [2024-11-17T01:29:08.304Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:59.844 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:59.844 Nvme0n1 : 1.00 10559.00 41.25 0.00 0.00 0.00 0.00 0.00 00:08:59.844 [2024-11-17T01:29:08.304Z] =================================================================================================================== 00:08:59.844 [2024-11-17T01:29:08.304Z] Total : 10559.00 41.25 0.00 0.00 0.00 0.00 0.00 00:08:59.844 00:09:00.779 02:29:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 475baa06-08f8-404e-9589-20b38b116d41 00:09:00.779 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:00.779 Nvme0n1 : 2.00 10677.00 41.71 0.00 0.00 0.00 0.00 0.00 00:09:00.779 [2024-11-17T01:29:09.239Z] =================================================================================================================== 00:09:00.779 [2024-11-17T01:29:09.239Z] Total : 10677.00 41.71 0.00 0.00 0.00 0.00 0.00 00:09:00.779 00:09:01.036 true 00:09:01.036 02:29:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 475baa06-08f8-404e-9589-20b38b116d41 00:09:01.036 02:29:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:09:01.294 02:29:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:01.294 02:29:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:01.294 02:29:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2859529 00:09:01.860 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:01.860 Nvme0n1 : 3.00 10675.00 41.70 0.00 0.00 0.00 0.00 0.00 00:09:01.860 [2024-11-17T01:29:10.320Z] =================================================================================================================== 00:09:01.860 [2024-11-17T01:29:10.320Z] Total : 10675.00 41.70 0.00 0.00 0.00 0.00 0.00 00:09:01.860 00:09:02.794 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:02.795 Nvme0n1 : 4.00 10705.00 41.82 0.00 0.00 0.00 0.00 0.00 00:09:02.795 [2024-11-17T01:29:11.255Z] =================================================================================================================== 00:09:02.795 [2024-11-17T01:29:11.255Z] Total : 10705.00 41.82 0.00 0.00 0.00 0.00 0.00 00:09:02.795 00:09:03.729 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:03.729 Nvme0n1 : 5.00 10773.80 42.09 0.00 0.00 0.00 0.00 0.00 00:09:03.729 [2024-11-17T01:29:12.189Z] =================================================================================================================== 00:09:03.729 [2024-11-17T01:29:12.189Z] Total : 10773.80 42.09 0.00 0.00 0.00 0.00 0.00 00:09:03.729 00:09:05.103 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:05.103 Nvme0n1 : 6.00 10809.33 42.22 0.00 0.00 0.00 0.00 0.00 00:09:05.103 [2024-11-17T01:29:13.563Z] =================================================================================================================== 00:09:05.103 
[2024-11-17T01:29:13.563Z] Total : 10809.33 42.22 0.00 0.00 0.00 0.00 0.00 00:09:05.103 00:09:06.037 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:06.037 Nvme0n1 : 7.00 10825.43 42.29 0.00 0.00 0.00 0.00 0.00 00:09:06.037 [2024-11-17T01:29:14.497Z] =================================================================================================================== 00:09:06.037 [2024-11-17T01:29:14.497Z] Total : 10825.43 42.29 0.00 0.00 0.00 0.00 0.00 00:09:06.037 00:09:06.972 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:06.972 Nvme0n1 : 8.00 10837.50 42.33 0.00 0.00 0.00 0.00 0.00 00:09:06.972 [2024-11-17T01:29:15.432Z] =================================================================================================================== 00:09:06.972 [2024-11-17T01:29:15.432Z] Total : 10837.50 42.33 0.00 0.00 0.00 0.00 0.00 00:09:06.972 00:09:07.908 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:07.908 Nvme0n1 : 9.00 10864.78 42.44 0.00 0.00 0.00 0.00 0.00 00:09:07.908 [2024-11-17T01:29:16.368Z] =================================================================================================================== 00:09:07.908 [2024-11-17T01:29:16.368Z] Total : 10864.78 42.44 0.00 0.00 0.00 0.00 0.00 00:09:07.908 00:09:08.842 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:08.842 Nvme0n1 : 10.00 10870.50 42.46 0.00 0.00 0.00 0.00 0.00 00:09:08.842 [2024-11-17T01:29:17.302Z] =================================================================================================================== 00:09:08.842 [2024-11-17T01:29:17.302Z] Total : 10870.50 42.46 0.00 0.00 0.00 0.00 0.00 00:09:08.842 00:09:08.842 00:09:08.842 Latency(us) 00:09:08.842 [2024-11-17T01:29:17.302Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:08.842 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:09:08.842 Nvme0n1 : 10.01 10876.00 42.48 0.00 0.00 11762.36 5315.70 23010.42 00:09:08.842 [2024-11-17T01:29:17.302Z] =================================================================================================================== 00:09:08.842 [2024-11-17T01:29:17.302Z] Total : 10876.00 42.48 0.00 0.00 11762.36 5315.70 23010.42 00:09:08.842 { 00:09:08.842 "results": [ 00:09:08.842 { 00:09:08.842 "job": "Nvme0n1", 00:09:08.842 "core_mask": "0x2", 00:09:08.842 "workload": "randwrite", 00:09:08.842 "status": "finished", 00:09:08.842 "queue_depth": 128, 00:09:08.842 "io_size": 4096, 00:09:08.842 "runtime": 10.006715, 00:09:08.842 "iops": 10875.996768170175, 00:09:08.842 "mibps": 42.484362375664745, 00:09:08.842 "io_failed": 0, 00:09:08.842 "io_timeout": 0, 00:09:08.842 "avg_latency_us": 11762.364933892939, 00:09:08.842 "min_latency_us": 5315.697777777777, 00:09:08.842 "max_latency_us": 23010.417777777777 00:09:08.842 } 00:09:08.842 ], 00:09:08.842 "core_count": 1 00:09:08.842 } 00:09:08.842 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2859264 00:09:08.842 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 2859264 ']' 00:09:08.842 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 2859264 00:09:08.842 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:09:08.842 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:08.842 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2859264 00:09:08.842 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:08.842 02:29:17 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:08.842 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2859264' 00:09:08.842 killing process with pid 2859264 00:09:08.842 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 2859264 00:09:08.842 Received shutdown signal, test time was about 10.000000 seconds 00:09:08.842 00:09:08.842 Latency(us) 00:09:08.842 [2024-11-17T01:29:17.302Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:08.842 [2024-11-17T01:29:17.302Z] =================================================================================================================== 00:09:08.842 [2024-11-17T01:29:17.302Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:08.842 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 2859264 00:09:09.776 02:29:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:10.033 02:29:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:10.600 02:29:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 475baa06-08f8-404e-9589-20b38b116d41 00:09:10.600 02:29:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:10.859 02:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:09:10.859 02:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:10.859 02:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:11.116 [2024-11-17 02:29:19.434602] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:11.117 02:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 475baa06-08f8-404e-9589-20b38b116d41 00:09:11.117 02:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:09:11.117 02:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 475baa06-08f8-404e-9589-20b38b116d41 00:09:11.117 02:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:11.117 02:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:11.117 02:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:11.117 02:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:11.117 02:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:11.117 
02:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:11.117 02:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:11.117 02:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:11.117 02:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 475baa06-08f8-404e-9589-20b38b116d41 00:09:11.374 request: 00:09:11.374 { 00:09:11.374 "uuid": "475baa06-08f8-404e-9589-20b38b116d41", 00:09:11.374 "method": "bdev_lvol_get_lvstores", 00:09:11.374 "req_id": 1 00:09:11.374 } 00:09:11.374 Got JSON-RPC error response 00:09:11.374 response: 00:09:11.374 { 00:09:11.374 "code": -19, 00:09:11.374 "message": "No such device" 00:09:11.374 } 00:09:11.374 02:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:09:11.374 02:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:11.374 02:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:11.374 02:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:11.374 02:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:11.940 aio_bdev 00:09:11.940 02:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev f09674ac-7759-4c8a-86ea-f5434c96e72b 00:09:11.940 02:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=f09674ac-7759-4c8a-86ea-f5434c96e72b 00:09:11.940 02:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:11.940 02:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:09:11.940 02:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:11.940 02:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:11.940 02:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:12.197 02:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f09674ac-7759-4c8a-86ea-f5434c96e72b -t 2000 00:09:12.455 [ 00:09:12.455 { 00:09:12.455 "name": "f09674ac-7759-4c8a-86ea-f5434c96e72b", 00:09:12.455 "aliases": [ 00:09:12.455 "lvs/lvol" 00:09:12.455 ], 00:09:12.455 "product_name": "Logical Volume", 00:09:12.455 "block_size": 4096, 00:09:12.455 "num_blocks": 38912, 00:09:12.455 "uuid": "f09674ac-7759-4c8a-86ea-f5434c96e72b", 00:09:12.455 "assigned_rate_limits": { 00:09:12.455 "rw_ios_per_sec": 0, 00:09:12.455 "rw_mbytes_per_sec": 0, 00:09:12.455 "r_mbytes_per_sec": 0, 00:09:12.455 "w_mbytes_per_sec": 0 00:09:12.455 }, 00:09:12.455 "claimed": false, 00:09:12.455 "zoned": false, 00:09:12.455 "supported_io_types": { 00:09:12.455 "read": true, 00:09:12.455 "write": true, 00:09:12.455 "unmap": true, 00:09:12.455 "flush": false, 00:09:12.455 "reset": true, 00:09:12.455 
"nvme_admin": false, 00:09:12.455 "nvme_io": false, 00:09:12.455 "nvme_io_md": false, 00:09:12.455 "write_zeroes": true, 00:09:12.455 "zcopy": false, 00:09:12.455 "get_zone_info": false, 00:09:12.455 "zone_management": false, 00:09:12.455 "zone_append": false, 00:09:12.455 "compare": false, 00:09:12.455 "compare_and_write": false, 00:09:12.455 "abort": false, 00:09:12.455 "seek_hole": true, 00:09:12.455 "seek_data": true, 00:09:12.455 "copy": false, 00:09:12.455 "nvme_iov_md": false 00:09:12.455 }, 00:09:12.455 "driver_specific": { 00:09:12.455 "lvol": { 00:09:12.455 "lvol_store_uuid": "475baa06-08f8-404e-9589-20b38b116d41", 00:09:12.455 "base_bdev": "aio_bdev", 00:09:12.455 "thin_provision": false, 00:09:12.455 "num_allocated_clusters": 38, 00:09:12.455 "snapshot": false, 00:09:12.455 "clone": false, 00:09:12.455 "esnap_clone": false 00:09:12.455 } 00:09:12.455 } 00:09:12.455 } 00:09:12.455 ] 00:09:12.455 02:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:09:12.455 02:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 475baa06-08f8-404e-9589-20b38b116d41 00:09:12.455 02:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:12.714 02:29:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:12.714 02:29:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 475baa06-08f8-404e-9589-20b38b116d41 00:09:12.714 02:29:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:12.972 02:29:21 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:12.972 02:29:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f09674ac-7759-4c8a-86ea-f5434c96e72b 00:09:13.231 02:29:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 475baa06-08f8-404e-9589-20b38b116d41 00:09:13.490 02:29:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:14.056 02:29:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:14.056 00:09:14.056 real 0m20.094s 00:09:14.056 user 0m19.939s 00:09:14.056 sys 0m1.913s 00:09:14.056 02:29:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:14.056 02:29:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:14.056 ************************************ 00:09:14.056 END TEST lvs_grow_clean 00:09:14.056 ************************************ 00:09:14.056 02:29:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:14.056 02:29:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:14.056 02:29:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:14.056 02:29:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:14.056 ************************************ 
00:09:14.056 START TEST lvs_grow_dirty 00:09:14.056 ************************************ 00:09:14.056 02:29:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:09:14.056 02:29:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:14.056 02:29:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:14.056 02:29:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:14.056 02:29:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:14.056 02:29:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:14.056 02:29:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:14.056 02:29:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:14.056 02:29:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:14.056 02:29:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:14.315 02:29:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:14.315 02:29:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:14.573 02:29:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=30ae076e-4bb2-431f-942e-7b7cb8935d8e 00:09:14.573 02:29:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 30ae076e-4bb2-431f-942e-7b7cb8935d8e 00:09:14.573 02:29:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:14.831 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:14.831 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:14.831 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 30ae076e-4bb2-431f-942e-7b7cb8935d8e lvol 150 00:09:15.089 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=39f5da8b-4444-47f9-8620-32b49dbf7c54 00:09:15.089 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:15.089 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:15.348 [2024-11-17 02:29:23.720934] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:09:15.348 [2024-11-17 02:29:23.721063] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:15.348 true 00:09:15.348 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 30ae076e-4bb2-431f-942e-7b7cb8935d8e 00:09:15.348 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:15.606 02:29:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:15.606 02:29:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:16.172 02:29:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 39f5da8b-4444-47f9-8620-32b49dbf7c54 00:09:16.172 02:29:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:16.430 [2024-11-17 02:29:24.868748] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:16.430 02:29:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:16.996 02:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2861714 00:09:16.996 02:29:25 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:16.996 02:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2861714 /var/tmp/bdevperf.sock 00:09:16.996 02:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2861714 ']' 00:09:16.996 02:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:16.996 02:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:16.996 02:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:16.996 02:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:16.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:16.996 02:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:16.996 02:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:16.996 [2024-11-17 02:29:25.251577] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:09:16.996 [2024-11-17 02:29:25.251738] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2861714 ] 00:09:16.996 [2024-11-17 02:29:25.402573] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:17.254 [2024-11-17 02:29:25.537842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:17.819 02:29:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:17.819 02:29:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:17.819 02:29:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:18.384 Nvme0n1 00:09:18.384 02:29:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:18.642 [ 00:09:18.642 { 00:09:18.642 "name": "Nvme0n1", 00:09:18.642 "aliases": [ 00:09:18.642 "39f5da8b-4444-47f9-8620-32b49dbf7c54" 00:09:18.642 ], 00:09:18.642 "product_name": "NVMe disk", 00:09:18.642 "block_size": 4096, 00:09:18.642 "num_blocks": 38912, 00:09:18.642 "uuid": "39f5da8b-4444-47f9-8620-32b49dbf7c54", 00:09:18.642 "numa_id": 0, 00:09:18.642 "assigned_rate_limits": { 00:09:18.642 "rw_ios_per_sec": 0, 00:09:18.642 "rw_mbytes_per_sec": 0, 00:09:18.642 "r_mbytes_per_sec": 0, 00:09:18.642 "w_mbytes_per_sec": 0 00:09:18.642 }, 00:09:18.642 "claimed": false, 00:09:18.642 "zoned": false, 00:09:18.642 "supported_io_types": { 00:09:18.642 "read": true, 
00:09:18.642 "write": true, 00:09:18.642 "unmap": true, 00:09:18.642 "flush": true, 00:09:18.642 "reset": true, 00:09:18.642 "nvme_admin": true, 00:09:18.642 "nvme_io": true, 00:09:18.642 "nvme_io_md": false, 00:09:18.642 "write_zeroes": true, 00:09:18.642 "zcopy": false, 00:09:18.642 "get_zone_info": false, 00:09:18.642 "zone_management": false, 00:09:18.642 "zone_append": false, 00:09:18.642 "compare": true, 00:09:18.642 "compare_and_write": true, 00:09:18.642 "abort": true, 00:09:18.642 "seek_hole": false, 00:09:18.642 "seek_data": false, 00:09:18.642 "copy": true, 00:09:18.642 "nvme_iov_md": false 00:09:18.642 }, 00:09:18.642 "memory_domains": [ 00:09:18.642 { 00:09:18.642 "dma_device_id": "system", 00:09:18.642 "dma_device_type": 1 00:09:18.642 } 00:09:18.642 ], 00:09:18.642 "driver_specific": { 00:09:18.642 "nvme": [ 00:09:18.642 { 00:09:18.642 "trid": { 00:09:18.642 "trtype": "TCP", 00:09:18.642 "adrfam": "IPv4", 00:09:18.642 "traddr": "10.0.0.2", 00:09:18.642 "trsvcid": "4420", 00:09:18.642 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:18.642 }, 00:09:18.642 "ctrlr_data": { 00:09:18.642 "cntlid": 1, 00:09:18.642 "vendor_id": "0x8086", 00:09:18.642 "model_number": "SPDK bdev Controller", 00:09:18.642 "serial_number": "SPDK0", 00:09:18.642 "firmware_revision": "25.01", 00:09:18.642 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:18.642 "oacs": { 00:09:18.642 "security": 0, 00:09:18.642 "format": 0, 00:09:18.642 "firmware": 0, 00:09:18.642 "ns_manage": 0 00:09:18.642 }, 00:09:18.642 "multi_ctrlr": true, 00:09:18.642 "ana_reporting": false 00:09:18.642 }, 00:09:18.642 "vs": { 00:09:18.642 "nvme_version": "1.3" 00:09:18.642 }, 00:09:18.642 "ns_data": { 00:09:18.642 "id": 1, 00:09:18.642 "can_share": true 00:09:18.642 } 00:09:18.642 } 00:09:18.642 ], 00:09:18.642 "mp_policy": "active_passive" 00:09:18.642 } 00:09:18.642 } 00:09:18.642 ] 00:09:18.642 02:29:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=2861983 00:09:18.642 02:29:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:18.642 02:29:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:18.642 Running I/O for 10 seconds... 00:09:20.016 Latency(us) 00:09:20.016 [2024-11-17T01:29:28.476Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:20.016 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:20.016 Nvme0n1 : 1.00 10480.00 40.94 0.00 0.00 0.00 0.00 0.00 00:09:20.016 [2024-11-17T01:29:28.476Z] =================================================================================================================== 00:09:20.016 [2024-11-17T01:29:28.476Z] Total : 10480.00 40.94 0.00 0.00 0.00 0.00 0.00 00:09:20.016 00:09:20.582 02:29:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 30ae076e-4bb2-431f-942e-7b7cb8935d8e 00:09:20.930 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:20.930 Nvme0n1 : 2.00 10574.00 41.30 0.00 0.00 0.00 0.00 0.00 00:09:20.930 [2024-11-17T01:29:29.390Z] =================================================================================================================== 00:09:20.930 [2024-11-17T01:29:29.390Z] Total : 10574.00 41.30 0.00 0.00 0.00 0.00 0.00 00:09:20.930 00:09:20.930 true 00:09:21.211 02:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 30ae076e-4bb2-431f-942e-7b7cb8935d8e 00:09:21.211 02:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:09:21.211 02:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:21.211 02:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:21.211 02:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2861983 00:09:21.777 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:21.777 Nvme0n1 : 3.00 10575.67 41.31 0.00 0.00 0.00 0.00 0.00 00:09:21.777 [2024-11-17T01:29:30.237Z] =================================================================================================================== 00:09:21.777 [2024-11-17T01:29:30.237Z] Total : 10575.67 41.31 0.00 0.00 0.00 0.00 0.00 00:09:21.777 00:09:22.711 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:22.711 Nvme0n1 : 4.00 10614.75 41.46 0.00 0.00 0.00 0.00 0.00 00:09:22.711 [2024-11-17T01:29:31.171Z] =================================================================================================================== 00:09:22.711 [2024-11-17T01:29:31.171Z] Total : 10614.75 41.46 0.00 0.00 0.00 0.00 0.00 00:09:22.711 00:09:24.085 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:24.085 Nvme0n1 : 5.00 10663.40 41.65 0.00 0.00 0.00 0.00 0.00 00:09:24.085 [2024-11-17T01:29:32.545Z] =================================================================================================================== 00:09:24.085 [2024-11-17T01:29:32.545Z] Total : 10663.40 41.65 0.00 0.00 0.00 0.00 0.00 00:09:24.085 00:09:24.651 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:24.651 Nvme0n1 : 6.00 10706.50 41.82 0.00 0.00 0.00 0.00 0.00 00:09:24.651 [2024-11-17T01:29:33.111Z] =================================================================================================================== 00:09:24.651 
[2024-11-17T01:29:33.111Z] Total : 10706.50 41.82 0.00 0.00 0.00 0.00 0.00 00:09:24.651 00:09:26.025 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:26.025 Nvme0n1 : 7.00 10737.29 41.94 0.00 0.00 0.00 0.00 0.00 00:09:26.025 [2024-11-17T01:29:34.485Z] =================================================================================================================== 00:09:26.025 [2024-11-17T01:29:34.485Z] Total : 10737.29 41.94 0.00 0.00 0.00 0.00 0.00 00:09:26.025 00:09:26.959 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:26.959 Nvme0n1 : 8.00 10756.88 42.02 0.00 0.00 0.00 0.00 0.00 00:09:26.959 [2024-11-17T01:29:35.419Z] =================================================================================================================== 00:09:26.959 [2024-11-17T01:29:35.419Z] Total : 10756.88 42.02 0.00 0.00 0.00 0.00 0.00 00:09:26.959 00:09:27.894 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:27.894 Nvme0n1 : 9.00 10783.11 42.12 0.00 0.00 0.00 0.00 0.00 00:09:27.894 [2024-11-17T01:29:36.354Z] =================================================================================================================== 00:09:27.894 [2024-11-17T01:29:36.354Z] Total : 10783.11 42.12 0.00 0.00 0.00 0.00 0.00 00:09:27.894 00:09:28.829 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:28.829 Nvme0n1 : 10.00 10797.00 42.18 0.00 0.00 0.00 0.00 0.00 00:09:28.829 [2024-11-17T01:29:37.289Z] =================================================================================================================== 00:09:28.829 [2024-11-17T01:29:37.289Z] Total : 10797.00 42.18 0.00 0.00 0.00 0.00 0.00 00:09:28.829 00:09:28.829 00:09:28.829 Latency(us) 00:09:28.829 [2024-11-17T01:29:37.289Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:28.829 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:09:28.829 Nvme0n1 : 10.01 10801.63 42.19 0.00 0.00 11842.85 4490.43 22913.33 00:09:28.829 [2024-11-17T01:29:37.289Z] =================================================================================================================== 00:09:28.829 [2024-11-17T01:29:37.289Z] Total : 10801.63 42.19 0.00 0.00 11842.85 4490.43 22913.33 00:09:28.829 { 00:09:28.829 "results": [ 00:09:28.829 { 00:09:28.829 "job": "Nvme0n1", 00:09:28.829 "core_mask": "0x2", 00:09:28.829 "workload": "randwrite", 00:09:28.829 "status": "finished", 00:09:28.829 "queue_depth": 128, 00:09:28.829 "io_size": 4096, 00:09:28.829 "runtime": 10.007564, 00:09:28.829 "iops": 10801.629647334756, 00:09:28.829 "mibps": 42.19386580990139, 00:09:28.829 "io_failed": 0, 00:09:28.829 "io_timeout": 0, 00:09:28.829 "avg_latency_us": 11842.851838558017, 00:09:28.829 "min_latency_us": 4490.42962962963, 00:09:28.829 "max_latency_us": 22913.327407407407 00:09:28.829 } 00:09:28.829 ], 00:09:28.829 "core_count": 1 00:09:28.829 } 00:09:28.829 02:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2861714 00:09:28.829 02:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 2861714 ']' 00:09:28.829 02:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 2861714 00:09:28.829 02:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:09:28.829 02:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:28.829 02:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2861714 00:09:28.829 02:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:28.829 02:29:37 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:28.829 02:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2861714' 00:09:28.829 killing process with pid 2861714 00:09:28.829 02:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 2861714 00:09:28.829 Received shutdown signal, test time was about 10.000000 seconds 00:09:28.829 00:09:28.829 Latency(us) 00:09:28.829 [2024-11-17T01:29:37.289Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:28.829 [2024-11-17T01:29:37.289Z] =================================================================================================================== 00:09:28.829 [2024-11-17T01:29:37.289Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:28.829 02:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 2861714 00:09:29.763 02:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:30.021 02:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:30.278 02:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 30ae076e-4bb2-431f-942e-7b7cb8935d8e 00:09:30.278 02:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:30.537 02:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:09:30.537 02:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:30.537 02:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2858805 00:09:30.537 02:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2858805 00:09:30.537 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2858805 Killed "${NVMF_APP[@]}" "$@" 00:09:30.537 02:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:30.537 02:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:30.537 02:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:30.537 02:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:30.537 02:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:30.537 02:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2863326 00:09:30.537 02:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:30.537 02:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2863326 00:09:30.537 02:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2863326 ']' 00:09:30.537 02:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:30.537 02:29:38 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:30.537 02:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:30.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:30.537 02:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:30.537 02:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:30.797 [2024-11-17 02:29:39.040524] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:09:30.797 [2024-11-17 02:29:39.040668] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:30.797 [2024-11-17 02:29:39.201414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.056 [2024-11-17 02:29:39.337330] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:31.056 [2024-11-17 02:29:39.337416] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:31.056 [2024-11-17 02:29:39.337441] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:31.056 [2024-11-17 02:29:39.337465] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:31.056 [2024-11-17 02:29:39.337484] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:31.056 [2024-11-17 02:29:39.339133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.990 02:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:31.990 02:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:31.990 02:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:31.990 02:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:31.990 02:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:31.990 02:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:31.990 02:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:31.990 [2024-11-17 02:29:40.387809] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:31.990 [2024-11-17 02:29:40.388050] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:31.990 [2024-11-17 02:29:40.388142] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:31.990 02:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:31.990 02:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 39f5da8b-4444-47f9-8620-32b49dbf7c54 00:09:31.990 02:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=39f5da8b-4444-47f9-8620-32b49dbf7c54 
00:09:31.990 02:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:31.990 02:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:31.990 02:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:31.990 02:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:31.990 02:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:32.248 02:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 39f5da8b-4444-47f9-8620-32b49dbf7c54 -t 2000 00:09:32.814 [ 00:09:32.814 { 00:09:32.814 "name": "39f5da8b-4444-47f9-8620-32b49dbf7c54", 00:09:32.814 "aliases": [ 00:09:32.814 "lvs/lvol" 00:09:32.814 ], 00:09:32.814 "product_name": "Logical Volume", 00:09:32.814 "block_size": 4096, 00:09:32.814 "num_blocks": 38912, 00:09:32.814 "uuid": "39f5da8b-4444-47f9-8620-32b49dbf7c54", 00:09:32.814 "assigned_rate_limits": { 00:09:32.814 "rw_ios_per_sec": 0, 00:09:32.814 "rw_mbytes_per_sec": 0, 00:09:32.814 "r_mbytes_per_sec": 0, 00:09:32.814 "w_mbytes_per_sec": 0 00:09:32.814 }, 00:09:32.814 "claimed": false, 00:09:32.814 "zoned": false, 00:09:32.814 "supported_io_types": { 00:09:32.814 "read": true, 00:09:32.814 "write": true, 00:09:32.814 "unmap": true, 00:09:32.814 "flush": false, 00:09:32.814 "reset": true, 00:09:32.814 "nvme_admin": false, 00:09:32.814 "nvme_io": false, 00:09:32.814 "nvme_io_md": false, 00:09:32.814 "write_zeroes": true, 00:09:32.814 "zcopy": false, 00:09:32.814 "get_zone_info": false, 00:09:32.814 "zone_management": false, 00:09:32.814 "zone_append": 
false, 00:09:32.814 "compare": false, 00:09:32.814 "compare_and_write": false, 00:09:32.814 "abort": false, 00:09:32.814 "seek_hole": true, 00:09:32.814 "seek_data": true, 00:09:32.814 "copy": false, 00:09:32.814 "nvme_iov_md": false 00:09:32.814 }, 00:09:32.814 "driver_specific": { 00:09:32.814 "lvol": { 00:09:32.814 "lvol_store_uuid": "30ae076e-4bb2-431f-942e-7b7cb8935d8e", 00:09:32.814 "base_bdev": "aio_bdev", 00:09:32.814 "thin_provision": false, 00:09:32.814 "num_allocated_clusters": 38, 00:09:32.814 "snapshot": false, 00:09:32.814 "clone": false, 00:09:32.814 "esnap_clone": false 00:09:32.814 } 00:09:32.814 } 00:09:32.814 } 00:09:32.814 ] 00:09:32.814 02:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:32.814 02:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 30ae076e-4bb2-431f-942e-7b7cb8935d8e 00:09:32.814 02:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:33.072 02:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:33.072 02:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 30ae076e-4bb2-431f-942e-7b7cb8935d8e 00:09:33.072 02:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:33.329 02:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:33.329 02:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:09:33.587 [2024-11-17 02:29:41.836956] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:33.587 02:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 30ae076e-4bb2-431f-942e-7b7cb8935d8e 00:09:33.587 02:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:09:33.587 02:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 30ae076e-4bb2-431f-942e-7b7cb8935d8e 00:09:33.587 02:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:33.587 02:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:33.587 02:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:33.587 02:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:33.587 02:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:33.587 02:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:33.587 02:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:33.587 02:29:41 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:33.587 02:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 30ae076e-4bb2-431f-942e-7b7cb8935d8e 00:09:33.845 request: 00:09:33.845 { 00:09:33.845 "uuid": "30ae076e-4bb2-431f-942e-7b7cb8935d8e", 00:09:33.845 "method": "bdev_lvol_get_lvstores", 00:09:33.845 "req_id": 1 00:09:33.845 } 00:09:33.845 Got JSON-RPC error response 00:09:33.845 response: 00:09:33.845 { 00:09:33.845 "code": -19, 00:09:33.845 "message": "No such device" 00:09:33.845 } 00:09:33.845 02:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:09:33.845 02:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:33.845 02:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:33.845 02:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:33.845 02:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:34.103 aio_bdev 00:09:34.103 02:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 39f5da8b-4444-47f9-8620-32b49dbf7c54 00:09:34.103 02:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=39f5da8b-4444-47f9-8620-32b49dbf7c54 00:09:34.103 02:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:34.103 02:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:34.103 02:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:34.103 02:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:34.103 02:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:34.361 02:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 39f5da8b-4444-47f9-8620-32b49dbf7c54 -t 2000 00:09:34.619 [ 00:09:34.619 { 00:09:34.619 "name": "39f5da8b-4444-47f9-8620-32b49dbf7c54", 00:09:34.619 "aliases": [ 00:09:34.619 "lvs/lvol" 00:09:34.619 ], 00:09:34.619 "product_name": "Logical Volume", 00:09:34.619 "block_size": 4096, 00:09:34.619 "num_blocks": 38912, 00:09:34.619 "uuid": "39f5da8b-4444-47f9-8620-32b49dbf7c54", 00:09:34.619 "assigned_rate_limits": { 00:09:34.619 "rw_ios_per_sec": 0, 00:09:34.619 "rw_mbytes_per_sec": 0, 00:09:34.619 "r_mbytes_per_sec": 0, 00:09:34.619 "w_mbytes_per_sec": 0 00:09:34.619 }, 00:09:34.619 "claimed": false, 00:09:34.619 "zoned": false, 00:09:34.619 "supported_io_types": { 00:09:34.619 "read": true, 00:09:34.619 "write": true, 00:09:34.619 "unmap": true, 00:09:34.619 "flush": false, 00:09:34.619 "reset": true, 00:09:34.619 "nvme_admin": false, 00:09:34.619 "nvme_io": false, 00:09:34.619 "nvme_io_md": false, 00:09:34.619 "write_zeroes": true, 00:09:34.619 "zcopy": false, 00:09:34.619 "get_zone_info": false, 00:09:34.619 "zone_management": false, 00:09:34.619 "zone_append": false, 00:09:34.619 "compare": false, 00:09:34.619 "compare_and_write": false, 
00:09:34.619 "abort": false, 00:09:34.619 "seek_hole": true, 00:09:34.619 "seek_data": true, 00:09:34.619 "copy": false, 00:09:34.619 "nvme_iov_md": false 00:09:34.619 }, 00:09:34.619 "driver_specific": { 00:09:34.619 "lvol": { 00:09:34.619 "lvol_store_uuid": "30ae076e-4bb2-431f-942e-7b7cb8935d8e", 00:09:34.619 "base_bdev": "aio_bdev", 00:09:34.619 "thin_provision": false, 00:09:34.619 "num_allocated_clusters": 38, 00:09:34.619 "snapshot": false, 00:09:34.619 "clone": false, 00:09:34.619 "esnap_clone": false 00:09:34.619 } 00:09:34.619 } 00:09:34.619 } 00:09:34.619 ] 00:09:34.619 02:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:34.619 02:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 30ae076e-4bb2-431f-942e-7b7cb8935d8e 00:09:34.620 02:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:34.877 02:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:34.877 02:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 30ae076e-4bb2-431f-942e-7b7cb8935d8e 00:09:34.877 02:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:35.135 02:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:35.135 02:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 39f5da8b-4444-47f9-8620-32b49dbf7c54 00:09:35.392 02:29:43 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 30ae076e-4bb2-431f-942e-7b7cb8935d8e 00:09:35.649 02:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:35.910 02:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:36.167 00:09:36.167 real 0m22.065s 00:09:36.167 user 0m56.313s 00:09:36.167 sys 0m4.603s 00:09:36.167 02:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:36.167 02:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:36.167 ************************************ 00:09:36.167 END TEST lvs_grow_dirty 00:09:36.167 ************************************ 00:09:36.167 02:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:36.167 02:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:09:36.167 02:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:09:36.167 02:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:09:36.167 02:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:36.167 02:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:09:36.167 02:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:09:36.168 02:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@824 -- # for n in $shm_files 00:09:36.168 02:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:36.168 nvmf_trace.0 00:09:36.168 02:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:09:36.168 02:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:36.168 02:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:36.168 02:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:09:36.168 02:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:36.168 02:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:09:36.168 02:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:36.168 02:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:36.168 rmmod nvme_tcp 00:09:36.168 rmmod nvme_fabrics 00:09:36.168 rmmod nvme_keyring 00:09:36.168 02:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:36.168 02:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:09:36.168 02:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:09:36.168 02:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2863326 ']' 00:09:36.168 02:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2863326 00:09:36.168 02:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 2863326 ']' 00:09:36.168 02:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 2863326 
00:09:36.168 02:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:09:36.168 02:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:36.168 02:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2863326 00:09:36.168 02:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:36.168 02:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:36.168 02:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2863326' 00:09:36.168 killing process with pid 2863326 00:09:36.168 02:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 2863326 00:09:36.168 02:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 2863326 00:09:37.542 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:37.542 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:37.542 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:37.542 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:09:37.542 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:09:37.542 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:37.542 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:09:37.542 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:37.542 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:09:37.542 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:37.542 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:37.542 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:39.446 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:39.446 00:09:39.446 real 0m49.337s 00:09:39.446 user 1m24.263s 00:09:39.446 sys 0m8.664s 00:09:39.446 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:39.446 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:39.446 ************************************ 00:09:39.446 END TEST nvmf_lvs_grow 00:09:39.446 ************************************ 00:09:39.446 02:29:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:39.446 02:29:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:39.446 02:29:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:39.446 02:29:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:39.446 ************************************ 00:09:39.446 START TEST nvmf_bdev_io_wait 00:09:39.446 ************************************ 00:09:39.446 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:39.446 * Looking for test storage... 
00:09:39.446 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:39.446 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:39.446 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:09:39.446 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:39.705 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:39.705 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:39.705 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:39.705 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:39.705 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:39.705 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:39.705 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:39.705 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:39.705 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:39.705 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:39.705 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:39.705 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:39.705 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:39.705 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:09:39.705 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:39.705 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:39.705 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:39.705 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:39.705 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:39.705 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:39.705 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:39.705 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:39.705 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:39.705 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:39.705 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:39.705 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:39.705 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:39.705 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:39.705 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:39.705 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:39.705 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:39.705 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.705 --rc genhtml_branch_coverage=1 00:09:39.705 --rc genhtml_function_coverage=1 00:09:39.705 --rc genhtml_legend=1 00:09:39.705 --rc geninfo_all_blocks=1 00:09:39.705 --rc geninfo_unexecuted_blocks=1 00:09:39.705 00:09:39.705 ' 00:09:39.705 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:39.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.705 --rc genhtml_branch_coverage=1 00:09:39.705 --rc genhtml_function_coverage=1 00:09:39.705 --rc genhtml_legend=1 00:09:39.705 --rc geninfo_all_blocks=1 00:09:39.705 --rc geninfo_unexecuted_blocks=1 00:09:39.705 00:09:39.705 ' 00:09:39.705 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:39.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.705 --rc genhtml_branch_coverage=1 00:09:39.705 --rc genhtml_function_coverage=1 00:09:39.705 --rc genhtml_legend=1 00:09:39.705 --rc geninfo_all_blocks=1 00:09:39.705 --rc geninfo_unexecuted_blocks=1 00:09:39.705 00:09:39.705 ' 00:09:39.705 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:39.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.705 --rc genhtml_branch_coverage=1 00:09:39.705 --rc genhtml_function_coverage=1 00:09:39.705 --rc genhtml_legend=1 00:09:39.705 --rc geninfo_all_blocks=1 00:09:39.705 --rc geninfo_unexecuted_blocks=1 00:09:39.705 00:09:39.705 ' 00:09:39.705 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:39.705 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:39.705 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:39.705 02:29:47 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:39.705 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:39.705 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:39.705 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:39.705 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:39.706 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:39.706 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:39.706 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:39.706 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:39.706 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:39.706 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:39.706 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:39.706 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:39.706 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:39.706 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:39.706 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:39.706 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:39.706 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:39.706 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:39.706 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:39.706 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.706 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.706 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.706 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:39.706 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.706 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:39.706 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:39.706 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:39.706 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:39.706 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:09:39.706 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:39.706 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:39.706 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:39.706 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:39.706 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:39.706 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:39.706 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:39.706 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:39.706 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:39.706 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:39.706 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:39.706 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:39.706 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:39.706 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:39.706 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:39.706 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:39.706 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
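The `[: : integer expression expected` message captured above (from `'[' '' -eq 1 ']'` at nvmf/common.sh line 33) is a standard test(1) failure: `-eq` requires integer operands, so an unset or empty variable makes the test exit with status 2 instead of simply evaluating false. A minimal reproduction (standalone sketch, not from the test scripts themselves):

```shell
#!/usr/bin/env bash
# An empty string is not a valid integer operand for -eq, so test(1)
# reports "integer expression expected" and exits with status 2.
var=""
if [ "$var" -eq 1 ] 2>/dev/null; then
  echo "match"
else
  echo "no match (test errored or was false)"
fi

# A common guard: default the variable to 0 so the numeric comparison
# always sees an integer.
if [ "${var:-0}" -eq 1 ]; then
  echo "match"
else
  echo "no match"
fi
```

Because the script is not running under `set -e` at that point, the error is logged and execution continues, which is why the test proceeds normally afterwards.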
00:09:39.706 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:39.706 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:39.706 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:09:39.706 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:41.608 02:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:41.608 02:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:09:41.608 02:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:41.608 02:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:41.608 02:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:41.608 02:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:41.608 02:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:41.608 02:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:09:41.608 02:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:41.608 02:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:09:41.608 02:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:09:41.608 02:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:09:41.608 02:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:09:41.608 02:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:09:41.608 02:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:09:41.608 02:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:41.608 02:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:41.608 02:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:41.608 02:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:41.608 02:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:41.608 02:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:41.608 02:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:41.608 02:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:41.608 02:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:41.608 02:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:41.608 02:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:41.608 02:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:41.608 02:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:41.608 02:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:41.608 02:29:49 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:41.608 02:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:41.608 02:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:41.608 02:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:41.608 02:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:41.608 02:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:41.608 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:41.608 02:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:41.608 02:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:41.608 02:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:41.608 02:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:41.608 02:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:41.608 02:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:41.608 02:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:41.608 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:41.608 02:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:41.608 02:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:41.608 02:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:41.608 02:29:49 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:41.608 02:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:41.608 02:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:41.608 02:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:41.608 02:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:41.608 02:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:41.608 02:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:41.608 02:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:41.608 02:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:41.608 02:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:41.608 02:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:41.608 02:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:41.608 02:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:41.608 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:41.608 02:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:41.608 02:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:41.608 02:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:41.608 
02:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:41.608 02:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:41.608 02:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:41.608 02:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:41.608 02:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:41.608 02:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:41.608 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:41.608 02:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:41.608 02:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:41.608 02:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:09:41.608 02:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:41.608 02:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:41.608 02:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:41.609 02:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:41.609 02:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:41.609 02:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:41.609 02:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:41.609 02:29:49 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:41.609 02:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:41.609 02:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:41.609 02:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:41.609 02:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:41.609 02:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:41.609 02:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:41.609 02:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:41.609 02:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:41.609 02:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:41.609 02:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:41.868 02:29:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:41.868 02:29:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:41.868 02:29:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:41.868 02:29:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:41.868 02:29:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:09:41.868 02:29:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:41.868 02:29:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:41.868 02:29:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:41.868 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:41.868 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.363 ms 00:09:41.868 00:09:41.868 --- 10.0.0.2 ping statistics --- 00:09:41.868 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.868 rtt min/avg/max/mdev = 0.363/0.363/0.363/0.000 ms 00:09:41.868 02:29:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:41.868 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:41.868 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:09:41.868 00:09:41.868 --- 10.0.0.1 ping statistics --- 00:09:41.868 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.868 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:09:41.868 02:29:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:41.868 02:29:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:09:41.868 02:29:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:41.868 02:29:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:41.868 02:29:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:41.868 02:29:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:41.868 02:29:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:41.868 02:29:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:41.868 02:29:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:41.868 02:29:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:41.868 02:29:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:41.868 02:29:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:41.868 02:29:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:41.868 02:29:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2866142 00:09:41.868 02:29:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@510 -- # waitforlisten 2866142 00:09:41.868 02:29:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 2866142 ']' 00:09:41.868 02:29:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:41.868 02:29:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:41.868 02:29:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:41.868 02:29:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:41.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:41.868 02:29:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:41.868 02:29:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:41.868 [2024-11-17 02:29:50.322560] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:09:41.868 [2024-11-17 02:29:50.322702] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:42.127 [2024-11-17 02:29:50.475074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:42.385 [2024-11-17 02:29:50.621803] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:42.385 [2024-11-17 02:29:50.621877] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:42.385 [2024-11-17 02:29:50.621903] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:42.385 [2024-11-17 02:29:50.621927] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:42.385 [2024-11-17 02:29:50.621946] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:42.385 [2024-11-17 02:29:50.624759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:42.385 [2024-11-17 02:29:50.624834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:42.385 [2024-11-17 02:29:50.627137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:42.385 [2024-11-17 02:29:50.627142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.951 02:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:42.951 02:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:09:42.951 02:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:42.951 02:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:42.951 02:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:42.951 02:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:42.951 02:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:42.951 02:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.951 02:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:42.951 02:29:51 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.951 02:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:42.951 02:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.951 02:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:43.209 02:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.209 02:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:43.209 02:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.209 02:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:43.209 [2024-11-17 02:29:51.537563] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:43.209 02:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.209 02:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:43.209 02:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.209 02:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:43.209 Malloc0 00:09:43.209 02:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.209 02:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:43.209 02:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.209 
02:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:43.209 02:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.209 02:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:43.209 02:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.209 02:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:43.209 02:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.209 02:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:43.209 02:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.209 02:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:43.210 [2024-11-17 02:29:51.644016] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:43.210 02:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.210 02:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2866309 00:09:43.210 02:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2866311 00:09:43.210 02:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:43.210 02:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 
00:09:43.210 02:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:43.210 02:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:43.210 02:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:43.210 02:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:43.210 { 00:09:43.210 "params": { 00:09:43.210 "name": "Nvme$subsystem", 00:09:43.210 "trtype": "$TEST_TRANSPORT", 00:09:43.210 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:43.210 "adrfam": "ipv4", 00:09:43.210 "trsvcid": "$NVMF_PORT", 00:09:43.210 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:43.210 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:43.210 "hdgst": ${hdgst:-false}, 00:09:43.210 "ddgst": ${ddgst:-false} 00:09:43.210 }, 00:09:43.210 "method": "bdev_nvme_attach_controller" 00:09:43.210 } 00:09:43.210 EOF 00:09:43.210 )") 00:09:43.210 02:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2866314 00:09:43.210 02:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:43.210 02:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:43.210 02:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:43.210 02:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:43.210 02:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:43.210 02:29:51 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2866317 00:09:43.210 02:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:43.210 02:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:43.210 02:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:43.210 02:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:43.210 { 00:09:43.210 "params": { 00:09:43.210 "name": "Nvme$subsystem", 00:09:43.210 "trtype": "$TEST_TRANSPORT", 00:09:43.210 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:43.210 "adrfam": "ipv4", 00:09:43.210 "trsvcid": "$NVMF_PORT", 00:09:43.210 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:43.210 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:43.210 "hdgst": ${hdgst:-false}, 00:09:43.210 "ddgst": ${ddgst:-false} 00:09:43.210 }, 00:09:43.210 "method": "bdev_nvme_attach_controller" 00:09:43.210 } 00:09:43.210 EOF 00:09:43.210 )") 00:09:43.210 02:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:43.210 02:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:43.210 02:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:43.210 02:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:43.210 02:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:43.210 02:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:43.210 02:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:43.210 { 00:09:43.210 "params": { 00:09:43.210 "name": "Nvme$subsystem", 00:09:43.210 "trtype": "$TEST_TRANSPORT", 00:09:43.210 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:43.210 "adrfam": "ipv4", 00:09:43.210 "trsvcid": "$NVMF_PORT", 00:09:43.210 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:43.210 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:43.210 "hdgst": ${hdgst:-false}, 00:09:43.210 "ddgst": ${ddgst:-false} 00:09:43.210 }, 00:09:43.210 "method": "bdev_nvme_attach_controller" 00:09:43.210 } 00:09:43.210 EOF 00:09:43.210 )") 00:09:43.210 02:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:43.210 02:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:43.210 02:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:43.210 02:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:43.210 02:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:43.210 { 00:09:43.210 "params": { 00:09:43.210 "name": "Nvme$subsystem", 00:09:43.210 "trtype": "$TEST_TRANSPORT", 00:09:43.210 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:43.210 "adrfam": "ipv4", 00:09:43.210 "trsvcid": "$NVMF_PORT", 00:09:43.210 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:43.210 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:43.210 "hdgst": ${hdgst:-false}, 00:09:43.210 "ddgst": ${ddgst:-false} 00:09:43.210 }, 00:09:43.210 "method": "bdev_nvme_attach_controller" 00:09:43.210 } 00:09:43.210 EOF 00:09:43.210 )") 00:09:43.210 02:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:43.210 02:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2866309 00:09:43.210 02:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@582 -- # cat 00:09:43.210 02:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:43.210 02:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:43.210 02:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:43.210 02:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:43.210 02:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:43.210 "params": { 00:09:43.210 "name": "Nvme1", 00:09:43.210 "trtype": "tcp", 00:09:43.210 "traddr": "10.0.0.2", 00:09:43.210 "adrfam": "ipv4", 00:09:43.210 "trsvcid": "4420", 00:09:43.210 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:43.210 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:43.210 "hdgst": false, 00:09:43.210 "ddgst": false 00:09:43.210 }, 00:09:43.210 "method": "bdev_nvme_attach_controller" 00:09:43.210 }' 00:09:43.210 02:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
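The gen_nvmf_target_json expansion traced above builds one JSON fragment per subsystem with a here-document, comma-joins the fragments via IFS, and pipes the result through `jq .` before handing it to bdevperf over `--json /dev/fd/63`. A minimal standalone sketch of that pattern follows; the fallback transport values and the plain `printf` (instead of the real helper's jq validation step) are simplifications for a self-contained run:

```shell
#!/usr/bin/env bash
# Sketch of the gen_nvmf_target_json pattern seen in the trace: one heredoc
# fragment per subsystem, comma-joined via IFS. The real helper in
# nvmf/common.sh additionally pretty-prints/validates through `jq .`; the
# default values for TEST_TRANSPORT/NVMF_FIRST_TARGET_IP/NVMF_PORT below are
# assumptions so the sketch runs standalone.
gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        # Unquoted EOF so the shell expands $subsystem and the NVMF_* vars.
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "${TEST_TRANSPORT:-tcp}",
    "traddr": "${NVMF_FIRST_TARGET_IP:-10.0.0.2}",
    "adrfam": "ipv4",
    "trsvcid": "${NVMF_PORT:-4420}",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # Comma-join the fragments: one bdev_nvme_attach_controller entry per
    # subsystem, exactly as printed by the four traced expansions.
    local IFS=,
    printf '%s\n' "${config[*]}"
}

gen_nvmf_target_json 1
```

In the trace, four concurrent bdevperf instances (write/read/flush/unmap) each consume one such document through process substitution.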
00:09:43.210 02:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:43.210 02:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:43.210 "params": { 00:09:43.210 "name": "Nvme1", 00:09:43.210 "trtype": "tcp", 00:09:43.210 "traddr": "10.0.0.2", 00:09:43.210 "adrfam": "ipv4", 00:09:43.210 "trsvcid": "4420", 00:09:43.210 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:43.210 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:43.210 "hdgst": false, 00:09:43.210 "ddgst": false 00:09:43.210 }, 00:09:43.210 "method": "bdev_nvme_attach_controller" 00:09:43.210 }' 00:09:43.210 02:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:43.210 02:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:43.210 "params": { 00:09:43.210 "name": "Nvme1", 00:09:43.210 "trtype": "tcp", 00:09:43.210 "traddr": "10.0.0.2", 00:09:43.210 "adrfam": "ipv4", 00:09:43.210 "trsvcid": "4420", 00:09:43.210 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:43.210 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:43.210 "hdgst": false, 00:09:43.210 "ddgst": false 00:09:43.210 }, 00:09:43.210 "method": "bdev_nvme_attach_controller" 00:09:43.210 }' 00:09:43.210 02:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:43.210 02:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:43.210 "params": { 00:09:43.210 "name": "Nvme1", 00:09:43.210 "trtype": "tcp", 00:09:43.210 "traddr": "10.0.0.2", 00:09:43.210 "adrfam": "ipv4", 00:09:43.210 "trsvcid": "4420", 00:09:43.210 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:43.210 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:43.210 "hdgst": false, 00:09:43.210 "ddgst": false 00:09:43.210 }, 00:09:43.210 "method": "bdev_nvme_attach_controller" 00:09:43.210 }' 00:09:43.469 [2024-11-17 02:29:51.735659] Starting SPDK v25.01-pre git sha1 
83e8405e4 / DPDK 24.03.0 initialization... 00:09:43.469 [2024-11-17 02:29:51.735659] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:09:43.469 [2024-11-17 02:29:51.735791] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:43.469 [2024-11-17 02:29:51.735794] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:43.469 [2024-11-17 02:29:51.736136] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:09:43.469 [2024-11-17 02:29:51.736202] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:09:43.469 [2024-11-17 02:29:51.736269] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:43.469 [2024-11-17 02:29:51.736308] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:43.728 [2024-11-17 02:29:51.990334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.728 [2024-11-17 02:29:52.089339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.728 [2024-11-17 02:29:52.113711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:43.728 [2024-11-17 02:29:52.170307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.986 [2024-11-17
02:29:52.211431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:43.986 [2024-11-17 02:29:52.243699] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.986 [2024-11-17 02:29:52.287565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:09:43.986 [2024-11-17 02:29:52.360555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:44.244 Running I/O for 1 seconds... 00:09:44.502 Running I/O for 1 seconds... 00:09:44.502 Running I/O for 1 seconds... 00:09:44.502 Running I/O for 1 seconds... 00:09:45.068 8138.00 IOPS, 31.79 MiB/s 00:09:45.068 Latency(us) 00:09:45.068 [2024-11-17T01:29:53.528Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:45.068 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:45.068 Nvme1n1 : 1.01 8195.33 32.01 0.00 0.00 15541.32 3907.89 23010.42 00:09:45.068 [2024-11-17T01:29:53.528Z] =================================================================================================================== 00:09:45.068 [2024-11-17T01:29:53.528Z] Total : 8195.33 32.01 0.00 0.00 15541.32 3907.89 23010.42 00:09:45.326 6314.00 IOPS, 24.66 MiB/s [2024-11-17T01:29:53.786Z] 7239.00 IOPS, 28.28 MiB/s 00:09:45.326 Latency(us) 00:09:45.326 [2024-11-17T01:29:53.786Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:45.326 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:45.326 Nvme1n1 : 1.01 6361.07 24.85 0.00 0.00 19985.61 6359.42 25826.04 00:09:45.326 [2024-11-17T01:29:53.786Z] =================================================================================================================== 00:09:45.326 [2024-11-17T01:29:53.786Z] Total : 6361.07 24.85 0.00 0.00 19985.61 6359.42 25826.04 00:09:45.326 00:09:45.326 Latency(us) 00:09:45.326 [2024-11-17T01:29:53.786Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:45.326 Job: Nvme1n1 (Core Mask 0x80, 
workload: unmap, depth: 128, IO size: 4096) 00:09:45.326 Nvme1n1 : 1.01 7308.06 28.55 0.00 0.00 17427.42 7136.14 32428.18 00:09:45.326 [2024-11-17T01:29:53.786Z] =================================================================================================================== 00:09:45.326 [2024-11-17T01:29:53.786Z] Total : 7308.06 28.55 0.00 0.00 17427.42 7136.14 32428.18 00:09:45.584 152072.00 IOPS, 594.03 MiB/s 00:09:45.584 Latency(us) 00:09:45.584 [2024-11-17T01:29:54.044Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:45.584 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:45.584 Nvme1n1 : 1.00 151765.09 592.83 0.00 0.00 839.14 362.57 2002.49 00:09:45.584 [2024-11-17T01:29:54.044Z] =================================================================================================================== 00:09:45.584 [2024-11-17T01:29:54.044Z] Total : 151765.09 592.83 0.00 0.00 839.14 362.57 2002.49 00:09:45.841 02:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2866311 00:09:46.099 02:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2866314 00:09:46.357 02:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2866317 00:09:46.357 02:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:46.357 02:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.357 02:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:46.358 02:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.358 02:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:46.358 02:29:54 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:46.358 02:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:46.358 02:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:09:46.358 02:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:46.358 02:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:09:46.358 02:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:46.358 02:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:46.358 rmmod nvme_tcp 00:09:46.358 rmmod nvme_fabrics 00:09:46.358 rmmod nvme_keyring 00:09:46.358 02:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:46.358 02:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:09:46.358 02:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:09:46.358 02:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2866142 ']' 00:09:46.358 02:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2866142 00:09:46.358 02:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 2866142 ']' 00:09:46.358 02:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 2866142 00:09:46.358 02:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:09:46.358 02:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:46.358 02:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2866142 
00:09:46.358 02:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:46.358 02:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:46.358 02:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2866142' 00:09:46.358 killing process with pid 2866142 00:09:46.358 02:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 2866142 00:09:46.358 02:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 2866142 00:09:47.322 02:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:47.322 02:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:47.322 02:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:47.322 02:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:09:47.322 02:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:09:47.322 02:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:47.322 02:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:09:47.322 02:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:47.322 02:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:47.322 02:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:47.322 02:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:47.322 02:29:55 
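The teardown traced above (killprocess from common/autotest_common.sh) checks that the recorded PID still maps to a live, expected process before signalling it, then waits for it to exit so the next test does not race the dying target. A simplified reconstruction of that pattern; the real helper also branches on `uname` and handles sudo-wrapped processes, as the trace shows:

```shell
#!/usr/bin/env bash
# Sketch of the killprocess teardown pattern from the trace: confirm the PID,
# refuse to signal a privileged wrapper, SIGTERM, then reap. Simplified
# reconstruction, not the full helper.
killprocess() {
    local pid=$1 process_name
    [ -n "$pid" ] || return 1
    # `ps -o comm=` prints only the command name, or nothing if the PID died.
    process_name=$(ps --no-headers -o comm= "$pid") || return 0
    # Never SIGTERM a sudo wrapper directly, mirroring the traced check.
    [ "$process_name" = sudo ] && return 1
    echo "killing process with pid $pid"
    kill "$pid"
    # wait reaps the child so a later `kill -0` cannot see a zombie.
    wait "$pid" 2>/dev/null || true
}

sleep 30 &
killprocess $!
```

`wait "$pid"` only works because the target was started as a child of the test shell, which is also true of the nvmf target app in the trace.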
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:49.874 02:29:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:49.874 00:09:49.874 real 0m10.001s 00:09:49.874 user 0m28.067s 00:09:49.874 sys 0m4.213s 00:09:49.874 02:29:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:49.874 02:29:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:49.874 ************************************ 00:09:49.874 END TEST nvmf_bdev_io_wait 00:09:49.874 ************************************ 00:09:49.874 02:29:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:49.874 02:29:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:49.874 02:29:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:49.874 02:29:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:49.874 ************************************ 00:09:49.874 START TEST nvmf_queue_depth 00:09:49.874 ************************************ 00:09:49.874 02:29:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:49.874 * Looking for test storage... 
00:09:49.874 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:49.874 02:29:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:49.874 02:29:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:09:49.874 02:29:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:49.874 02:29:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:49.874 02:29:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:49.874 02:29:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:49.874 02:29:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:49.874 02:29:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:09:49.874 02:29:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:49.874 02:29:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:49.874 02:29:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:49.874 02:29:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:49.874 02:29:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:49.874 02:29:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:49.874 02:29:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:49.874 02:29:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:49.874 02:29:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:49.874 
02:29:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:49.874 02:29:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:49.875 02:29:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:49.875 02:29:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:49.875 02:29:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:49.875 02:29:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:49.875 02:29:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:49.875 02:29:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:49.875 02:29:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:49.875 02:29:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:49.875 02:29:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:49.875 02:29:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:49.875 02:29:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:49.875 02:29:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:49.875 02:29:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:49.875 02:29:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:49.875 02:29:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:49.875 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:09:49.875 --rc genhtml_branch_coverage=1 00:09:49.875 --rc genhtml_function_coverage=1 00:09:49.875 --rc genhtml_legend=1 00:09:49.875 --rc geninfo_all_blocks=1 00:09:49.875 --rc geninfo_unexecuted_blocks=1 00:09:49.875 00:09:49.875 ' 00:09:49.875 02:29:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:49.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.875 --rc genhtml_branch_coverage=1 00:09:49.875 --rc genhtml_function_coverage=1 00:09:49.875 --rc genhtml_legend=1 00:09:49.875 --rc geninfo_all_blocks=1 00:09:49.875 --rc geninfo_unexecuted_blocks=1 00:09:49.875 00:09:49.875 ' 00:09:49.875 02:29:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:49.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.875 --rc genhtml_branch_coverage=1 00:09:49.875 --rc genhtml_function_coverage=1 00:09:49.875 --rc genhtml_legend=1 00:09:49.875 --rc geninfo_all_blocks=1 00:09:49.875 --rc geninfo_unexecuted_blocks=1 00:09:49.875 00:09:49.875 ' 00:09:49.875 02:29:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:49.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.875 --rc genhtml_branch_coverage=1 00:09:49.875 --rc genhtml_function_coverage=1 00:09:49.875 --rc genhtml_legend=1 00:09:49.875 --rc geninfo_all_blocks=1 00:09:49.875 --rc geninfo_unexecuted_blocks=1 00:09:49.875 00:09:49.875 ' 00:09:49.875 02:29:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:49.875 02:29:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:09:49.875 02:29:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:49.875 02:29:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
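The cmp_versions/lt trace above (scripts/common.sh) splits both version strings on `.`, `-`, and `:` and compares them component by component, treating missing components as 0; `lt 1.15 2` is how the script decides whether the installed lcov predates 2.0. A simplified reconstruction (numeric components assumed; the real helper's digit normalization via its `decimal` function is omitted):

```shell
#!/usr/bin/env bash
# Sketch of the cmp_versions logic traced from scripts/common.sh: split on
# .-: and compare element-wise, padding the shorter array with zeros.
cmp_versions() {
    local op=$2 v ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        local d1=${ver1[v]:-0} d2=${ver2[v]:-0}
        # First differing component decides the comparison.
        ((d1 > d2)) && { [ "$op" = '>' ]; return; }
        ((d1 < d2)) && { [ "$op" = '<' ]; return; }
    done
    # All components equal: strict < and > are both false.
    [ "$op" = '=' ]
}

lt() { cmp_versions "$1" '<' "$2"; }

lt 1.15 2 && echo "lcov 1.15 predates 2"
```

Splitting on `.-:` rather than just `.` lets the same helper order strings like `2.39.2` and pre-release-style versions in one pass.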
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:49.875 02:29:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:49.875 02:29:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:49.875 02:29:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:49.875 02:29:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:49.875 02:29:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:49.875 02:29:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:49.875 02:29:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:49.875 02:29:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:49.875 02:29:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:49.875 02:29:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:49.875 02:29:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:49.875 02:29:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:49.875 02:29:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:49.875 02:29:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:49.875 02:29:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:49.875 02:29:58 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:49.875 02:29:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:49.875 02:29:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:49.875 02:29:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:49.875 02:29:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.875 02:29:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.875 02:29:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.875 02:29:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:49.875 02:29:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.875 02:29:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:49.875 02:29:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:49.875 02:29:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:49.875 02:29:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:49.875 02:29:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:49.875 02:29:58 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:49.875 02:29:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:49.875 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:49.875 02:29:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:49.875 02:29:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:49.875 02:29:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:49.875 02:29:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:49.875 02:29:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:49.875 02:29:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:49.875 02:29:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:49.875 02:29:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:49.875 02:29:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:49.875 02:29:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:49.875 02:29:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:49.875 02:29:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:49.875 02:29:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:49.875 02:29:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:49.875 02:29:58 
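The `[: : integer expression expected` message logged above comes from nvmf/common.sh line 33 evaluating `'[' '' -eq 1 ']'`: an empty string is not a valid operand for the arithmetic `-eq` test. A minimal reproduction with the usual guard; `FLAG` is a hypothetical stand-in for whichever variable was empty in the trace:

```shell
#!/usr/bin/env bash
# Reproduce the "[: : integer expression expected" error from the log and
# show the usual guard. FLAG is a hypothetical stand-in for the empty
# variable tested at nvmf/common.sh line 33.
FLAG=''

# Unguarded: '' is not an integer, so the test builtin prints the error
# (suppressed here) and returns status 2, which reads as false.
if [ "$FLAG" -eq 1 ] 2>/dev/null; then
    echo "flag set"
fi

# Guarded: defaulting the expansion makes the operand always numeric.
if [ "${FLAG:-0}" -eq 1 ]; then
    echo "flag set"
else
    echo "flag unset"
fi
```

Because the unguarded test merely returns non-zero, the traced script continues past the error, which is why the log shows the complaint and then keeps going.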
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:49.876 02:29:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:49.876 02:29:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:49.876 02:29:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:09:49.876 02:29:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:51.777 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:51.777 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:09:51.777 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:51.777 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:51.777 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:51.777 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:51.777 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:51.777 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:09:51.777 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:51.777 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:09:51.777 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:09:51.777 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:09:51.777 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:09:51.777 02:29:59 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:09:51.777 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:09:51.777 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:51.777 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:51.777 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:51.777 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:51.777 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:51.777 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:51.777 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:51.777 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:51.777 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:51.777 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:51.777 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:51.777 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:51.777 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:51.777 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:51.777 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:51.777 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:51.777 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:51.777 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:51.777 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:51.777 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:51.777 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:51.777 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:51.777 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:51.777 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:51.777 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:51.777 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:51.777 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:51.777 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:51.777 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:51.777 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:51.777 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:51.777 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:09:51.777 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:51.777 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:51.777 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:51.777 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:51.777 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:51.777 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:51.777 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:51.777 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:51.777 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:51.777 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:51.777 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:51.777 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:51.777 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:51.777 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:51.777 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:51.777 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:51.777 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:51.777 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:51.777 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:51.777 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:51.777 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:51.777 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:51.777 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:51.777 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:51.777 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:51.777 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:51.777 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:09:51.777 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:51.777 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:51.777 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:51.777 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:51.777 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:51.777 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:51.777 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:51.777 
02:29:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:51.777 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:51.777 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:51.777 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:51.777 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:51.778 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:51.778 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:51.778 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:51.778 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:51.778 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:51.778 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:51.778 02:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:51.778 02:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:51.778 02:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:51.778 02:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:51.778 02:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:09:51.778 02:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:51.778 02:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:51.778 02:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:51.778 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:51.778 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.328 ms 00:09:51.778 00:09:51.778 --- 10.0.0.2 ping statistics --- 00:09:51.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:51.778 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:09:51.778 02:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:51.778 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:51.778 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:09:51.778 00:09:51.778 --- 10.0.0.1 ping statistics --- 00:09:51.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:51.778 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:09:51.778 02:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:51.778 02:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:09:51.778 02:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:51.778 02:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:51.778 02:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:51.778 02:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:51.778 02:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:51.778 02:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:51.778 02:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:51.778 02:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:51.778 02:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:51.778 02:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:51.778 02:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:52.036 02:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2868832 00:09:52.037 02:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:52.037 02:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 2868832 00:09:52.037 02:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2868832 ']' 00:09:52.037 02:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:52.037 02:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:52.037 02:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:52.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:52.037 02:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:52.037 02:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:52.037 [2024-11-17 02:30:00.329138] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:09:52.037 [2024-11-17 02:30:00.329278] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:52.037 [2024-11-17 02:30:00.478990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:52.294 [2024-11-17 02:30:00.599929] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:52.294 [2024-11-17 02:30:00.600025] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:52.294 [2024-11-17 02:30:00.600045] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:52.294 [2024-11-17 02:30:00.600064] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:52.294 [2024-11-17 02:30:00.600103] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:52.294 [2024-11-17 02:30:00.601511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:52.860 02:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:52.860 02:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:52.860 02:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:52.860 02:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:52.860 02:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:53.118 02:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:53.118 02:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:53.118 02:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.118 02:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:53.118 [2024-11-17 02:30:01.331254] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:53.118 02:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.118 02:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:09:53.118 02:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.118 02:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:53.118 Malloc0 00:09:53.118 02:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.118 02:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:53.118 02:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.118 02:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:53.118 02:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.118 02:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:53.118 02:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.118 02:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:53.118 02:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.118 02:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:53.118 02:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.118 02:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:53.118 [2024-11-17 02:30:01.449621] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:53.118 02:30:01 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.118 02:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2869060 00:09:53.118 02:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:53.118 02:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:53.118 02:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2869060 /var/tmp/bdevperf.sock 00:09:53.118 02:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2869060 ']' 00:09:53.118 02:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:53.118 02:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:53.118 02:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:53.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:53.118 02:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:53.118 02:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:53.118 [2024-11-17 02:30:01.535628] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:09:53.118 [2024-11-17 02:30:01.535779] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2869060 ] 00:09:53.377 [2024-11-17 02:30:01.679671] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.377 [2024-11-17 02:30:01.817049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:54.311 02:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:54.311 02:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:54.311 02:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:54.311 02:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.311 02:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:54.311 NVMe0n1 00:09:54.311 02:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.311 02:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:54.570 Running I/O for 10 seconds... 
00:09:56.440 6068.00 IOPS, 23.70 MiB/s [2024-11-17T01:30:06.276Z] 6135.50 IOPS, 23.97 MiB/s [2024-11-17T01:30:07.211Z] 6137.33 IOPS, 23.97 MiB/s [2024-11-17T01:30:08.147Z] 6073.25 IOPS, 23.72 MiB/s [2024-11-17T01:30:09.087Z] 6042.20 IOPS, 23.60 MiB/s [2024-11-17T01:30:10.022Z] 6056.83 IOPS, 23.66 MiB/s [2024-11-17T01:30:10.957Z] 6043.00 IOPS, 23.61 MiB/s [2024-11-17T01:30:11.892Z] 6035.75 IOPS, 23.58 MiB/s [2024-11-17T01:30:13.267Z] 6061.78 IOPS, 23.68 MiB/s [2024-11-17T01:30:13.267Z] 6085.00 IOPS, 23.77 MiB/s 00:10:04.807 Latency(us) 00:10:04.807 [2024-11-17T01:30:13.267Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:04.807 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:10:04.807 Verification LBA range: start 0x0 length 0x4000 00:10:04.807 NVMe0n1 : 10.14 6089.48 23.79 0.00 0.00 166377.06 27379.48 100197.26 00:10:04.807 [2024-11-17T01:30:13.267Z] =================================================================================================================== 00:10:04.807 [2024-11-17T01:30:13.267Z] Total : 6089.48 23.79 0.00 0.00 166377.06 27379.48 100197.26 00:10:04.807 { 00:10:04.807 "results": [ 00:10:04.807 { 00:10:04.807 "job": "NVMe0n1", 00:10:04.807 "core_mask": "0x1", 00:10:04.807 "workload": "verify", 00:10:04.807 "status": "finished", 00:10:04.807 "verify_range": { 00:10:04.807 "start": 0, 00:10:04.807 "length": 16384 00:10:04.807 }, 00:10:04.807 "queue_depth": 1024, 00:10:04.807 "io_size": 4096, 00:10:04.807 "runtime": 10.142733, 00:10:04.807 "iops": 6089.482982545237, 00:10:04.807 "mibps": 23.787042900567332, 00:10:04.807 "io_failed": 0, 00:10:04.807 "io_timeout": 0, 00:10:04.807 "avg_latency_us": 166377.06362023184, 00:10:04.807 "min_latency_us": 27379.484444444446, 00:10:04.807 "max_latency_us": 100197.26222222223 00:10:04.807 } 00:10:04.807 ], 00:10:04.807 "core_count": 1 00:10:04.807 } 00:10:04.807 02:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # 
killprocess 2869060 00:10:04.807 02:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2869060 ']' 00:10:04.807 02:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2869060 00:10:04.807 02:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:10:04.807 02:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:04.807 02:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2869060 00:10:04.807 02:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:04.808 02:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:04.808 02:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2869060' 00:10:04.808 killing process with pid 2869060 00:10:04.808 02:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2869060 00:10:04.808 Received shutdown signal, test time was about 10.000000 seconds 00:10:04.808 00:10:04.808 Latency(us) 00:10:04.808 [2024-11-17T01:30:13.268Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:04.808 [2024-11-17T01:30:13.268Z] =================================================================================================================== 00:10:04.808 [2024-11-17T01:30:13.268Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:04.808 02:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2869060 00:10:05.742 02:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:05.742 02:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # 
nvmftestfini 00:10:05.742 02:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:05.742 02:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:10:05.742 02:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:05.742 02:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:10:05.742 02:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:05.742 02:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:05.742 rmmod nvme_tcp 00:10:05.742 rmmod nvme_fabrics 00:10:05.742 rmmod nvme_keyring 00:10:05.742 02:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:05.742 02:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:10:05.742 02:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:10:05.742 02:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2868832 ']' 00:10:05.742 02:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2868832 00:10:05.742 02:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2868832 ']' 00:10:05.742 02:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2868832 00:10:05.742 02:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:10:05.742 02:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:05.742 02:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2868832 00:10:05.742 02:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:10:05.742 02:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:05.742 02:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2868832' 00:10:05.742 killing process with pid 2868832 00:10:05.742 02:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2868832 00:10:05.742 02:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2868832 00:10:07.118 02:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:07.118 02:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:07.118 02:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:07.118 02:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:10:07.118 02:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:10:07.118 02:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:07.118 02:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:10:07.118 02:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:07.118 02:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:07.118 02:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:07.118 02:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:07.118 02:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:09.024 02:30:17 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:09.024 00:10:09.024 real 0m19.549s 00:10:09.024 user 0m27.858s 00:10:09.024 sys 0m3.262s 00:10:09.024 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:09.024 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:09.024 ************************************ 00:10:09.024 END TEST nvmf_queue_depth 00:10:09.024 ************************************ 00:10:09.024 02:30:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:09.024 02:30:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:09.024 02:30:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:09.024 02:30:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:09.024 ************************************ 00:10:09.024 START TEST nvmf_target_multipath 00:10:09.024 ************************************ 00:10:09.024 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:09.284 * Looking for test storage... 
00:10:09.284 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:09.284 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:09.284 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:10:09.284 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:09.284 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:09.284 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:09.284 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:09.284 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:09.284 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:10:09.284 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:10:09.284 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:10:09.284 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:10:09.284 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:10:09.284 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:10:09.284 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:10:09.284 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:09.284 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:10:09.284 02:30:17 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:10:09.284 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:09.284 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:09.284 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:10:09.284 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:10:09.284 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:09.284 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:10:09.284 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:10:09.284 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:10:09.284 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:10:09.284 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:09.284 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:10:09.284 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:10:09.284 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:09.284 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:09.284 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:10:09.284 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:10:09.284 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:09.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.284 --rc genhtml_branch_coverage=1 00:10:09.284 --rc genhtml_function_coverage=1 00:10:09.284 --rc genhtml_legend=1 00:10:09.284 --rc geninfo_all_blocks=1 00:10:09.284 --rc geninfo_unexecuted_blocks=1 00:10:09.284 00:10:09.284 ' 00:10:09.284 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:09.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.284 --rc genhtml_branch_coverage=1 00:10:09.284 --rc genhtml_function_coverage=1 00:10:09.284 --rc genhtml_legend=1 00:10:09.284 --rc geninfo_all_blocks=1 00:10:09.284 --rc geninfo_unexecuted_blocks=1 00:10:09.284 00:10:09.284 ' 00:10:09.284 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:09.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.284 --rc genhtml_branch_coverage=1 00:10:09.284 --rc genhtml_function_coverage=1 00:10:09.284 --rc genhtml_legend=1 00:10:09.284 --rc geninfo_all_blocks=1 00:10:09.284 --rc geninfo_unexecuted_blocks=1 00:10:09.284 00:10:09.284 ' 00:10:09.284 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:09.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.284 --rc genhtml_branch_coverage=1 00:10:09.284 --rc genhtml_function_coverage=1 00:10:09.285 --rc genhtml_legend=1 00:10:09.285 --rc geninfo_all_blocks=1 00:10:09.285 --rc geninfo_unexecuted_blocks=1 00:10:09.285 00:10:09.285 ' 00:10:09.285 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:09.285 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:10:09.285 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:09.285 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:09.285 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:09.285 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:09.285 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:09.285 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:09.285 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:09.285 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:09.285 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:09.285 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:09.285 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:09.285 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:09.285 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:09.285 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:09.285 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:09.285 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:09.285 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:09.285 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:10:09.285 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:09.285 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:09.285 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:09.285 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.285 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.285 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.285 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:10:09.285 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.285 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:10:09.285 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:09.285 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:09.285 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:09.285 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:09.285 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:09.285 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:09.285 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:09.285 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:09.285 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:09.285 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:09.285 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:10:09.285 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:09.285 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:10:09.285 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:09.285 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:10:09.285 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:09.285 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:09.285 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:09.285 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:09.285 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:09.285 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:09.285 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:09.285 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:09.285 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:09.285 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:09.285 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:10:09.285 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:10:11.188 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:11.188 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:10:11.188 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:11.188 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:11.188 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:11.188 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:11.188 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:11.188 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:10:11.188 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:11.188 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:10:11.188 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:10:11.188 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:10:11.188 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:10:11.188 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:10:11.188 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:10:11.188 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:11.188 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:11.188 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:11.188 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:11.188 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:11.188 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:11.188 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:11.188 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:11.188 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:11.188 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:11.188 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:11.188 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:11.188 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:11.188 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:11.188 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:11.188 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:11.188 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:11.188 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:11.188 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:11.188 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:11.188 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:11.188 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:11.188 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:11.188 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:11.188 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:11.188 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:11.188 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:11.188 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:11.188 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:11.188 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:11.188 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:11.188 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:11.188 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:11.188 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:10:11.188 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:11.188 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:11.188 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:11.188 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:11.188 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:11.188 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:11.188 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:11.188 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:11.188 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:11.188 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:11.188 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:11.188 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:11.188 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:11.188 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:11.188 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:11.188 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:11.188 02:30:19 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:11.188 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:11.189 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:11.189 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:11.189 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:11.189 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:11.189 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:11.189 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:11.189 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:10:11.189 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:11.189 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:11.189 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:11.189 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:11.189 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:11.189 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:11.189 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:11.189 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
00:10:11.189 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:11.189 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:11.189 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:11.189 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:11.189 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:11.189 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:11.189 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:11.189 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:11.189 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:11.189 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:11.189 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:11.189 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:11.189 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:11.189 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:11.446 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:10:11.446 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:11.446 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:11.446 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:11.446 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:11.446 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.377 ms 00:10:11.446 00:10:11.446 --- 10.0.0.2 ping statistics --- 00:10:11.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:11.446 rtt min/avg/max/mdev = 0.377/0.377/0.377/0.000 ms 00:10:11.446 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:11.446 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:11.446 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:10:11.446 00:10:11.446 --- 10.0.0.1 ping statistics --- 00:10:11.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:11.446 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:10:11.446 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:11.446 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:10:11.446 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:11.446 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:11.446 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:11.446 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:11.447 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:11.447 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:11.447 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:11.447 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:10:11.447 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:10:11.447 only one NIC for nvmf test 00:10:11.447 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:10:11.447 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:11.447 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:11.447 02:30:19 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:11.447 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:11.447 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:11.447 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:11.447 rmmod nvme_tcp 00:10:11.447 rmmod nvme_fabrics 00:10:11.447 rmmod nvme_keyring 00:10:11.447 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:11.447 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:11.447 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:11.447 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:11.447 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:11.447 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:11.447 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:11.447 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:11.447 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:10:11.447 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:11.447 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:10:11.447 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:11.447 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:10:11.447 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:11.447 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:11.447 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:13.348 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:13.348 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:10:13.348 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:10:13.348 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:13.348 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:13.348 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:13.348 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:13.348 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:13.348 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:13.348 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:13.348 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:13.348 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:13.348 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:13.348 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' 
'' == iso ']' 00:10:13.348 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:13.348 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:13.348 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:13.348 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:10:13.348 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:13.606 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:10:13.606 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:13.606 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:13.606 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:13.606 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:13.606 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:13.606 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:13.606 00:10:13.606 real 0m4.367s 00:10:13.606 user 0m0.866s 00:10:13.606 sys 0m1.491s 00:10:13.606 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:13.606 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:13.606 ************************************ 00:10:13.606 END TEST nvmf_target_multipath 00:10:13.606 ************************************ 00:10:13.606 02:30:21 nvmf_tcp.nvmf_target_core 
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:13.606 02:30:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:13.606 02:30:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:13.606 02:30:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:13.606 ************************************ 00:10:13.606 START TEST nvmf_zcopy 00:10:13.606 ************************************ 00:10:13.606 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:13.606 * Looking for test storage... 00:10:13.606 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:13.606 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:13.606 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:10:13.606 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:13.606 02:30:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:13.606 02:30:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:13.606 02:30:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:13.606 02:30:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:13.606 02:30:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:10:13.606 02:30:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:10:13.606 02:30:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
00:10:13.606 02:30:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:10:13.606 02:30:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:10:13.606 02:30:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:10:13.606 02:30:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:10:13.606 02:30:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:13.606 02:30:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:10:13.606 02:30:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:10:13.606 02:30:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:13.606 02:30:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:13.606 02:30:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:10:13.606 02:30:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:10:13.606 02:30:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:13.607 02:30:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:10:13.607 02:30:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:10:13.607 02:30:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:10:13.607 02:30:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:10:13.607 02:30:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:13.607 02:30:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:10:13.607 02:30:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:10:13.607 02:30:22 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:13.607 02:30:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:13.607 02:30:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:10:13.607 02:30:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:13.607 02:30:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:13.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.607 --rc genhtml_branch_coverage=1 00:10:13.607 --rc genhtml_function_coverage=1 00:10:13.607 --rc genhtml_legend=1 00:10:13.607 --rc geninfo_all_blocks=1 00:10:13.607 --rc geninfo_unexecuted_blocks=1 00:10:13.607 00:10:13.607 ' 00:10:13.607 02:30:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:13.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.607 --rc genhtml_branch_coverage=1 00:10:13.607 --rc genhtml_function_coverage=1 00:10:13.607 --rc genhtml_legend=1 00:10:13.607 --rc geninfo_all_blocks=1 00:10:13.607 --rc geninfo_unexecuted_blocks=1 00:10:13.607 00:10:13.607 ' 00:10:13.607 02:30:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:13.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.607 --rc genhtml_branch_coverage=1 00:10:13.607 --rc genhtml_function_coverage=1 00:10:13.607 --rc genhtml_legend=1 00:10:13.607 --rc geninfo_all_blocks=1 00:10:13.607 --rc geninfo_unexecuted_blocks=1 00:10:13.607 00:10:13.607 ' 00:10:13.607 02:30:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:13.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.607 --rc genhtml_branch_coverage=1 00:10:13.607 --rc 
genhtml_function_coverage=1 00:10:13.607 --rc genhtml_legend=1 00:10:13.607 --rc geninfo_all_blocks=1 00:10:13.607 --rc geninfo_unexecuted_blocks=1 00:10:13.607 00:10:13.607 ' 00:10:13.607 02:30:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:13.607 02:30:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:13.607 02:30:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:13.607 02:30:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:13.607 02:30:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:13.607 02:30:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:13.607 02:30:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:13.607 02:30:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:13.607 02:30:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:13.607 02:30:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:13.607 02:30:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:13.607 02:30:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:13.607 02:30:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:13.607 02:30:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:13.607 02:30:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:13.607 02:30:22 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:13.607 02:30:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:13.607 02:30:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:13.607 02:30:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:13.607 02:30:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:10:13.607 02:30:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:13.607 02:30:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:13.607 02:30:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:13.607 02:30:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.607 02:30:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.607 02:30:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.607 02:30:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:13.607 02:30:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.607 02:30:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:10:13.607 02:30:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:13.607 02:30:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:13.607 02:30:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:13.607 02:30:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:13.607 02:30:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:13.607 02:30:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:13.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:13.607 02:30:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:13.607 02:30:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:13.607 02:30:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:13.607 02:30:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:13.607 02:30:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:13.607 02:30:22 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:13.607 02:30:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:13.607 02:30:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:13.607 02:30:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:13.607 02:30:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:13.607 02:30:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:13.607 02:30:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:13.607 02:30:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:13.607 02:30:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:13.607 02:30:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:10:13.607 02:30:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:16.139 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:16.139 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:10:16.139 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:16.139 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:16.139 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:16.139 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:16.139 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:16.139 02:30:24 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:10:16.139 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:16.139 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:10:16.139 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:10:16.139 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:10:16.139 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:10:16.139 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:10:16.139 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:10:16.139 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:16.139 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:16.139 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:16.139 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:16.139 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:16.139 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:16.139 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:16.139 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:16.139 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:16.139 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:16.139 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:16.139 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:16.139 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:16.139 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:16.139 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:16.139 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:16.139 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:16.139 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:16.139 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:16.139 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:16.139 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:16.139 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:16.139 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:16.139 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:16.139 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:16.139 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:16.139 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:16.139 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:16.139 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:16.139 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:16.139 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:16.139 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:16.139 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:16.139 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:16.139 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:16.139 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:16.139 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:16.139 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:16.139 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:16.139 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:16.139 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:16.139 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:16.139 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:16.139 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:16.139 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:16.139 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:16.139 02:30:24 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:16.139 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:16.139 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:16.139 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:16.139 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:16.139 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:16.139 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:16.139 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:16.139 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:16.139 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:16.139 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:16.139 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:16.139 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:10:16.139 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:16.139 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:16.139 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:16.139 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:16.139 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:16.139 02:30:24 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:16.139 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:16.139 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:16.139 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:16.139 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:16.139 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:16.139 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:16.139 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:16.139 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:16.139 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:16.139 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:16.139 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:16.139 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:16.139 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:16.139 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:16.140 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:16.140 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:16.140 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:16.140 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:16.140 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:16.140 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:16.140 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:16.140 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.358 ms 00:10:16.140 00:10:16.140 --- 10.0.0.2 ping statistics --- 00:10:16.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:16.140 rtt min/avg/max/mdev = 0.358/0.358/0.358/0.000 ms 00:10:16.140 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:16.140 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:16.140 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:10:16.140 00:10:16.140 --- 10.0.0.1 ping statistics --- 00:10:16.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:16.140 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:10:16.140 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:16.140 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:10:16.140 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:16.140 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:16.140 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:16.140 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:16.140 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:16.140 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:16.140 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:16.140 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:16.140 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:16.140 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:16.140 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:16.140 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=2875056 00:10:16.140 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x2 00:10:16.140 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2875056 00:10:16.140 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 2875056 ']' 00:10:16.140 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:16.140 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:16.140 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:16.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:16.140 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:16.140 02:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:16.140 [2024-11-17 02:30:24.344141] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:10:16.140 [2024-11-17 02:30:24.344265] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:16.140 [2024-11-17 02:30:24.492530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.399 [2024-11-17 02:30:24.634581] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:16.399 [2024-11-17 02:30:24.634672] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:16.399 [2024-11-17 02:30:24.634698] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:16.399 [2024-11-17 02:30:24.634723] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:16.399 [2024-11-17 02:30:24.634743] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:16.399 [2024-11-17 02:30:24.636435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:16.965 02:30:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:16.965 02:30:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:10:16.965 02:30:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:16.965 02:30:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:16.965 02:30:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:16.965 02:30:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:16.965 02:30:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:16.965 02:30:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:16.965 02:30:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.965 02:30:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:17.235 [2024-11-17 02:30:25.428204] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:17.235 02:30:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.235 02:30:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:17.235 02:30:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.235 02:30:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:17.235 02:30:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.235 02:30:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:17.235 02:30:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.235 02:30:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:17.235 [2024-11-17 02:30:25.444537] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:17.235 02:30:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.235 02:30:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:17.235 02:30:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.235 02:30:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:17.235 02:30:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.235 02:30:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:17.235 02:30:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.235 02:30:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:17.235 malloc0 00:10:17.235 02:30:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:10:17.235 02:30:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:17.235 02:30:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.235 02:30:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:17.235 02:30:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.235 02:30:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:17.235 02:30:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:17.235 02:30:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:17.235 02:30:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:17.235 02:30:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:17.236 02:30:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:17.236 { 00:10:17.236 "params": { 00:10:17.236 "name": "Nvme$subsystem", 00:10:17.236 "trtype": "$TEST_TRANSPORT", 00:10:17.236 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:17.236 "adrfam": "ipv4", 00:10:17.236 "trsvcid": "$NVMF_PORT", 00:10:17.236 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:17.236 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:17.236 "hdgst": ${hdgst:-false}, 00:10:17.236 "ddgst": ${ddgst:-false} 00:10:17.236 }, 00:10:17.236 "method": "bdev_nvme_attach_controller" 00:10:17.236 } 00:10:17.236 EOF 00:10:17.236 )") 00:10:17.236 02:30:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:17.236 02:30:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:10:17.236 02:30:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:17.236 02:30:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:17.236 "params": { 00:10:17.236 "name": "Nvme1", 00:10:17.236 "trtype": "tcp", 00:10:17.236 "traddr": "10.0.0.2", 00:10:17.236 "adrfam": "ipv4", 00:10:17.236 "trsvcid": "4420", 00:10:17.236 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:17.236 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:17.236 "hdgst": false, 00:10:17.236 "ddgst": false 00:10:17.236 }, 00:10:17.236 "method": "bdev_nvme_attach_controller" 00:10:17.236 }' 00:10:17.236 [2024-11-17 02:30:25.593898] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:10:17.236 [2024-11-17 02:30:25.594031] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2875212 ] 00:10:17.555 [2024-11-17 02:30:25.727844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:17.555 [2024-11-17 02:30:25.860059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.132 Running I/O for 10 seconds... 
00:10:20.001 4176.00 IOPS, 32.62 MiB/s [2024-11-17T01:30:29.833Z] 4215.00 IOPS, 32.93 MiB/s [2024-11-17T01:30:30.769Z] 4225.67 IOPS, 33.01 MiB/s [2024-11-17T01:30:31.704Z] 4239.50 IOPS, 33.12 MiB/s [2024-11-17T01:30:32.638Z] 4248.20 IOPS, 33.19 MiB/s [2024-11-17T01:30:33.573Z] 4262.50 IOPS, 33.30 MiB/s [2024-11-17T01:30:34.508Z] 4264.00 IOPS, 33.31 MiB/s [2024-11-17T01:30:35.883Z] 4272.50 IOPS, 33.38 MiB/s [2024-11-17T01:30:36.818Z] 4279.11 IOPS, 33.43 MiB/s [2024-11-17T01:30:36.818Z] 4284.30 IOPS, 33.47 MiB/s 00:10:28.358 Latency(us) 00:10:28.358 [2024-11-17T01:30:36.818Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:28.358 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:28.358 Verification LBA range: start 0x0 length 0x1000 00:10:28.358 Nvme1n1 : 10.01 4285.45 33.48 0.00 0.00 29788.83 983.04 40777.96 00:10:28.358 [2024-11-17T01:30:36.818Z] =================================================================================================================== 00:10:28.358 [2024-11-17T01:30:36.818Z] Total : 4285.45 33.48 0.00 0.00 29788.83 983.04 40777.96 00:10:28.925 02:30:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2876662 00:10:28.925 02:30:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:28.925 02:30:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:28.925 02:30:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:28.925 02:30:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:28.925 02:30:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:28.925 02:30:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:28.925 02:30:37 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:28.925 02:30:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:28.925 { 00:10:28.925 "params": { 00:10:28.925 "name": "Nvme$subsystem", 00:10:28.925 "trtype": "$TEST_TRANSPORT", 00:10:28.925 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:28.925 "adrfam": "ipv4", 00:10:28.925 "trsvcid": "$NVMF_PORT", 00:10:28.925 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:28.925 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:28.925 "hdgst": ${hdgst:-false}, 00:10:28.925 "ddgst": ${ddgst:-false} 00:10:28.925 }, 00:10:28.925 "method": "bdev_nvme_attach_controller" 00:10:28.925 } 00:10:28.925 EOF 00:10:28.925 )") 00:10:28.925 02:30:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:28.925 [2024-11-17 02:30:37.363421] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.925 [2024-11-17 02:30:37.363495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.925 02:30:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:10:28.925 02:30:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:28.925 02:30:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:28.925 "params": { 00:10:28.925 "name": "Nvme1", 00:10:28.925 "trtype": "tcp", 00:10:28.925 "traddr": "10.0.0.2", 00:10:28.925 "adrfam": "ipv4", 00:10:28.925 "trsvcid": "4420", 00:10:28.925 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:28.925 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:28.925 "hdgst": false, 00:10:28.925 "ddgst": false 00:10:28.925 }, 00:10:28.925 "method": "bdev_nvme_attach_controller" 00:10:28.925 }' 00:10:28.925 [2024-11-17 02:30:37.371302] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.925 [2024-11-17 02:30:37.371344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.925 [2024-11-17 02:30:37.379291] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.925 [2024-11-17 02:30:37.379321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.184 [2024-11-17 02:30:37.387374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.184 [2024-11-17 02:30:37.387414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.184 [2024-11-17 02:30:37.395391] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.184 [2024-11-17 02:30:37.395424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.184 [2024-11-17 02:30:37.403378] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.184 [2024-11-17 02:30:37.403430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.184 [2024-11-17 02:30:37.411422] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.184 [2024-11-17 
02:30:37.411457] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.184 [2024-11-17 02:30:37.419432] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.184 [2024-11-17 02:30:37.419465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.184 [2024-11-17 02:30:37.427443] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.184 [2024-11-17 02:30:37.427476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.184 [2024-11-17 02:30:37.435478] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.184 [2024-11-17 02:30:37.435511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.184 [2024-11-17 02:30:37.443487] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.184 [2024-11-17 02:30:37.443521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.184 [2024-11-17 02:30:37.451533] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.184 [2024-11-17 02:30:37.451566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.184 [2024-11-17 02:30:37.451726] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:10:29.184 [2024-11-17 02:30:37.451850] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2876662 ] 00:10:29.184 [2024-11-17 02:30:37.459548] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.184 [2024-11-17 02:30:37.459581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.184 [2024-11-17 02:30:37.467546] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.184 [2024-11-17 02:30:37.467578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.184 [2024-11-17 02:30:37.475603] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.184 [2024-11-17 02:30:37.475637] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.184 [2024-11-17 02:30:37.483617] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.184 [2024-11-17 02:30:37.483650] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.184 [2024-11-17 02:30:37.491648] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.184 [2024-11-17 02:30:37.491683] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.184 [2024-11-17 02:30:37.499659] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.184 [2024-11-17 02:30:37.499692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.184 [2024-11-17 02:30:37.507672] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.184 [2024-11-17 02:30:37.507703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:10:29.184 [2024-11-17 02:30:37.515713] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.184 [2024-11-17 02:30:37.515746] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.184 [2024-11-17 02:30:37.523744] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.184 [2024-11-17 02:30:37.523777] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.184 [2024-11-17 02:30:37.531748] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.184 [2024-11-17 02:30:37.531780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.184 [2024-11-17 02:30:37.539780] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.184 [2024-11-17 02:30:37.539812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.184 [2024-11-17 02:30:37.547814] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.184 [2024-11-17 02:30:37.547847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.184 [2024-11-17 02:30:37.555816] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.184 [2024-11-17 02:30:37.555848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.184 [2024-11-17 02:30:37.563852] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.184 [2024-11-17 02:30:37.563885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.184 [2024-11-17 02:30:37.571866] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.184 [2024-11-17 02:30:37.571899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.184 [2024-11-17 02:30:37.579895] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.184 [2024-11-17 02:30:37.579928] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.184 [2024-11-17 02:30:37.587939] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.184 [2024-11-17 02:30:37.587974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.184 [2024-11-17 02:30:37.595904] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.184 [2024-11-17 02:30:37.595932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.184 [2024-11-17 02:30:37.603935] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.184 [2024-11-17 02:30:37.603962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.184 [2024-11-17 02:30:37.609282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:29.184 [2024-11-17 02:30:37.611961] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.184 [2024-11-17 02:30:37.611988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.184 [2024-11-17 02:30:37.619965] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.184 [2024-11-17 02:30:37.619992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.184 [2024-11-17 02:30:37.628139] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.184 [2024-11-17 02:30:37.628193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.184 [2024-11-17 02:30:37.636072] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.184 [2024-11-17 02:30:37.636146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:10:29.443 [2024-11-17 02:30:37.644116] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.443 [2024-11-17 02:30:37.644169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.443 [2024-11-17 02:30:37.652093] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.443 [2024-11-17 02:30:37.652132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.443 [2024-11-17 02:30:37.660113] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.443 [2024-11-17 02:30:37.660157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.443 [2024-11-17 02:30:37.668172] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.443 [2024-11-17 02:30:37.668202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.443 [2024-11-17 02:30:37.676187] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.443 [2024-11-17 02:30:37.676216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.443 [2024-11-17 02:30:37.684213] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.443 [2024-11-17 02:30:37.684243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.443 [2024-11-17 02:30:37.692215] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.443 [2024-11-17 02:30:37.692244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.443 [2024-11-17 02:30:37.700221] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.443 [2024-11-17 02:30:37.700261] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.443 [2024-11-17 02:30:37.708269] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.443 [2024-11-17 02:30:37.708298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.443 [2024-11-17 02:30:37.716277] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.443 [2024-11-17 02:30:37.716305] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.443 [2024-11-17 02:30:37.724282] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.443 [2024-11-17 02:30:37.724310] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.443 [2024-11-17 02:30:37.732327] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.443 [2024-11-17 02:30:37.732356] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.443 [2024-11-17 02:30:37.739493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.443 [2024-11-17 02:30:37.740364] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.443 [2024-11-17 02:30:37.740407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.443 [2024-11-17 02:30:37.748350] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.443 [2024-11-17 02:30:37.748394] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.443 [2024-11-17 02:30:37.756535] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.443 [2024-11-17 02:30:37.756582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.443 [2024-11-17 02:30:37.764532] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.443 [2024-11-17 02:30:37.764588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:10:29.443 [2024-11-17 02:30:37.772464] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.443 [2024-11-17 02:30:37.772494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.443 [2024-11-17 02:30:37.780501] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.443 [2024-11-17 02:30:37.780529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.443 [2024-11-17 02:30:37.788493] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.443 [2024-11-17 02:30:37.788520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.443 [2024-11-17 02:30:37.796507] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.443 [2024-11-17 02:30:37.796534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.443 [2024-11-17 02:30:37.804538] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.443 [2024-11-17 02:30:37.804565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.443 [2024-11-17 02:30:37.812533] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.443 [2024-11-17 02:30:37.812558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.443 [2024-11-17 02:30:37.820602] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.443 [2024-11-17 02:30:37.820631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.443 [2024-11-17 02:30:37.828667] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.443 [2024-11-17 02:30:37.828722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.443 [2024-11-17 02:30:37.836722] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.443 [2024-11-17 02:30:37.836778] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.443 [2024-11-17 02:30:37.844745] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.443 [2024-11-17 02:30:37.844800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.443 [2024-11-17 02:30:37.852751] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.443 [2024-11-17 02:30:37.852805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.443 [2024-11-17 02:30:37.860705] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.443 [2024-11-17 02:30:37.860732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.443 [2024-11-17 02:30:37.868715] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.443 [2024-11-17 02:30:37.868741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.443 [2024-11-17 02:30:37.876761] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.443 [2024-11-17 02:30:37.876791] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.443 [2024-11-17 02:30:37.884767] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.443 [2024-11-17 02:30:37.884795] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.443 [2024-11-17 02:30:37.892774] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.443 [2024-11-17 02:30:37.892802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.443 [2024-11-17 02:30:37.900839] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:29.443 [2024-11-17 02:30:37.900878] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.701 [... same error pair (subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace) repeated from 02:30:37.908858 through 02:30:38.230000 ...] 00:10:29.960 Running I/O for 5 seconds... 00:10:29.960 [... same error pair repeated from 02:30:38.247198 through 02:30:39.231928 ...] 00:10:30.995 8240.00 IOPS, 64.38 MiB/s [2024-11-17T01:30:39.455Z] [... same error pair repeated from 02:30:39.247393 through 02:30:40.078786 ...]
add namespace 00:10:31.772 [2024-11-17 02:30:40.091939] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.772 [2024-11-17 02:30:40.091979] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.772 [2024-11-17 02:30:40.106920] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.772 [2024-11-17 02:30:40.106960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.773 [2024-11-17 02:30:40.122387] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.773 [2024-11-17 02:30:40.122422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.773 [2024-11-17 02:30:40.138279] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.773 [2024-11-17 02:30:40.138316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.773 [2024-11-17 02:30:40.154085] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.773 [2024-11-17 02:30:40.154152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.773 [2024-11-17 02:30:40.168785] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.773 [2024-11-17 02:30:40.168825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.773 [2024-11-17 02:30:40.184648] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.773 [2024-11-17 02:30:40.184687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.773 [2024-11-17 02:30:40.199656] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.773 [2024-11-17 02:30:40.199697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.773 [2024-11-17 02:30:40.214711] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.773 [2024-11-17 02:30:40.214751] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.773 [2024-11-17 02:30:40.230815] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.773 [2024-11-17 02:30:40.230856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.031 8252.00 IOPS, 64.47 MiB/s [2024-11-17T01:30:40.491Z] [2024-11-17 02:30:40.246685] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.031 [2024-11-17 02:30:40.246726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.031 [2024-11-17 02:30:40.261980] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.031 [2024-11-17 02:30:40.262019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.031 [2024-11-17 02:30:40.277635] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.031 [2024-11-17 02:30:40.277675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.031 [2024-11-17 02:30:40.293684] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.031 [2024-11-17 02:30:40.293724] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.031 [2024-11-17 02:30:40.309138] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.031 [2024-11-17 02:30:40.309191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.031 [2024-11-17 02:30:40.324556] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.031 [2024-11-17 02:30:40.324598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.031 [2024-11-17 02:30:40.340470] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.031 [2024-11-17 02:30:40.340511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.031 [2024-11-17 02:30:40.356223] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.031 [2024-11-17 02:30:40.356260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.031 [2024-11-17 02:30:40.371460] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.031 [2024-11-17 02:30:40.371510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.031 [2024-11-17 02:30:40.385918] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.031 [2024-11-17 02:30:40.385955] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.031 [2024-11-17 02:30:40.400276] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.031 [2024-11-17 02:30:40.400313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.031 [2024-11-17 02:30:40.414740] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.031 [2024-11-17 02:30:40.414792] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.031 [2024-11-17 02:30:40.429599] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.031 [2024-11-17 02:30:40.429635] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.031 [2024-11-17 02:30:40.443353] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.031 [2024-11-17 02:30:40.443388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.031 [2024-11-17 02:30:40.457443] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:32.031 [2024-11-17 02:30:40.457506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.031 [2024-11-17 02:30:40.471469] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.031 [2024-11-17 02:30:40.471520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.031 [2024-11-17 02:30:40.485624] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.031 [2024-11-17 02:30:40.485674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.290 [2024-11-17 02:30:40.499848] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.290 [2024-11-17 02:30:40.499884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.290 [2024-11-17 02:30:40.514085] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.290 [2024-11-17 02:30:40.514133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.290 [2024-11-17 02:30:40.528433] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.290 [2024-11-17 02:30:40.528470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.290 [2024-11-17 02:30:40.543138] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.290 [2024-11-17 02:30:40.543174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.290 [2024-11-17 02:30:40.557344] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.290 [2024-11-17 02:30:40.557381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.290 [2024-11-17 02:30:40.571548] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.290 
[2024-11-17 02:30:40.571585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.290 [2024-11-17 02:30:40.585385] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.290 [2024-11-17 02:30:40.585421] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.290 [2024-11-17 02:30:40.599832] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.290 [2024-11-17 02:30:40.599868] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.290 [2024-11-17 02:30:40.613935] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.291 [2024-11-17 02:30:40.613972] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.291 [2024-11-17 02:30:40.627892] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.291 [2024-11-17 02:30:40.627929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.291 [2024-11-17 02:30:40.641947] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.291 [2024-11-17 02:30:40.641984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.291 [2024-11-17 02:30:40.655981] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.291 [2024-11-17 02:30:40.656042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.291 [2024-11-17 02:30:40.670345] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.291 [2024-11-17 02:30:40.670382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.291 [2024-11-17 02:30:40.685512] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.291 [2024-11-17 02:30:40.685554] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.291 [2024-11-17 02:30:40.701046] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.291 [2024-11-17 02:30:40.701086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.291 [2024-11-17 02:30:40.715750] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.291 [2024-11-17 02:30:40.715790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.291 [2024-11-17 02:30:40.730835] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.291 [2024-11-17 02:30:40.730875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.291 [2024-11-17 02:30:40.746318] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.291 [2024-11-17 02:30:40.746355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.549 [2024-11-17 02:30:40.761914] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.549 [2024-11-17 02:30:40.761955] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.549 [2024-11-17 02:30:40.776797] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.549 [2024-11-17 02:30:40.776836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.549 [2024-11-17 02:30:40.792568] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.549 [2024-11-17 02:30:40.792608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.549 [2024-11-17 02:30:40.807659] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.549 [2024-11-17 02:30:40.807699] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:32.549 [2024-11-17 02:30:40.823554] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.549 [2024-11-17 02:30:40.823595] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.549 [2024-11-17 02:30:40.839179] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.549 [2024-11-17 02:30:40.839215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.549 [2024-11-17 02:30:40.855416] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.549 [2024-11-17 02:30:40.855455] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.549 [2024-11-17 02:30:40.871374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.549 [2024-11-17 02:30:40.871429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.549 [2024-11-17 02:30:40.886267] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.549 [2024-11-17 02:30:40.886304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.549 [2024-11-17 02:30:40.901617] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.549 [2024-11-17 02:30:40.901657] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.549 [2024-11-17 02:30:40.917328] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.549 [2024-11-17 02:30:40.917365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.549 [2024-11-17 02:30:40.932351] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.549 [2024-11-17 02:30:40.932405] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.549 [2024-11-17 02:30:40.947601] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.549 [2024-11-17 02:30:40.947641] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.549 [2024-11-17 02:30:40.963530] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.549 [2024-11-17 02:30:40.963569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.549 [2024-11-17 02:30:40.978256] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.549 [2024-11-17 02:30:40.978290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.550 [2024-11-17 02:30:40.993500] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.550 [2024-11-17 02:30:40.993540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.808 [2024-11-17 02:30:41.009391] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.808 [2024-11-17 02:30:41.009445] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.808 [2024-11-17 02:30:41.025272] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.808 [2024-11-17 02:30:41.025322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.808 [2024-11-17 02:30:41.040785] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.808 [2024-11-17 02:30:41.040826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.808 [2024-11-17 02:30:41.056191] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.808 [2024-11-17 02:30:41.056237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.808 [2024-11-17 02:30:41.071616] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:32.808 [2024-11-17 02:30:41.071656] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.808 [2024-11-17 02:30:41.087387] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.808 [2024-11-17 02:30:41.087427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.808 [2024-11-17 02:30:41.102073] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.808 [2024-11-17 02:30:41.102121] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.808 [2024-11-17 02:30:41.117946] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.808 [2024-11-17 02:30:41.117985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.808 [2024-11-17 02:30:41.133628] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.808 [2024-11-17 02:30:41.133668] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.808 [2024-11-17 02:30:41.149003] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.808 [2024-11-17 02:30:41.149042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.808 [2024-11-17 02:30:41.164465] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.808 [2024-11-17 02:30:41.164505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.808 [2024-11-17 02:30:41.179778] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.808 [2024-11-17 02:30:41.179816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.808 [2024-11-17 02:30:41.194619] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.808 
[2024-11-17 02:30:41.194659] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.808 [2024-11-17 02:30:41.210108] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.808 [2024-11-17 02:30:41.210147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.808 [2024-11-17 02:30:41.224940] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.808 [2024-11-17 02:30:41.224979] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.808 [2024-11-17 02:30:41.240365] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.808 [2024-11-17 02:30:41.240417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.808 8320.00 IOPS, 65.00 MiB/s [2024-11-17T01:30:41.268Z] [2024-11-17 02:30:41.255251] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.809 [2024-11-17 02:30:41.255287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.067 [2024-11-17 02:30:41.271187] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.067 [2024-11-17 02:30:41.271224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.067 [2024-11-17 02:30:41.284090] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.067 [2024-11-17 02:30:41.284156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.067 [2024-11-17 02:30:41.299283] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.067 [2024-11-17 02:30:41.299319] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.067 [2024-11-17 02:30:41.314083] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.067 
[2024-11-17 02:30:41.314131] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.067 [2024-11-17 02:30:41.328716] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.067 [2024-11-17 02:30:41.328756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.067 [2024-11-17 02:30:41.343792] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.067 [2024-11-17 02:30:41.343832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.067 [2024-11-17 02:30:41.359267] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.067 [2024-11-17 02:30:41.359304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.067 [2024-11-17 02:30:41.374012] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.067 [2024-11-17 02:30:41.374052] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.067 [2024-11-17 02:30:41.388527] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.067 [2024-11-17 02:30:41.388566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.067 [2024-11-17 02:30:41.403651] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.067 [2024-11-17 02:30:41.403691] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.067 [2024-11-17 02:30:41.418737] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.067 [2024-11-17 02:30:41.418792] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.067 [2024-11-17 02:30:41.430431] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.067 [2024-11-17 02:30:41.430471] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.068 [2024-11-17 02:30:41.444560] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.068 [2024-11-17 02:30:41.444601] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.068 [2024-11-17 02:30:41.459300] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.068 [2024-11-17 02:30:41.459336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.068 [2024-11-17 02:30:41.474461] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.068 [2024-11-17 02:30:41.474495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.068 [2024-11-17 02:30:41.489412] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.068 [2024-11-17 02:30:41.489463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.068 [2024-11-17 02:30:41.503919] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.068 [2024-11-17 02:30:41.503958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.068 [2024-11-17 02:30:41.518983] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.068 [2024-11-17 02:30:41.519021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.326 [2024-11-17 02:30:41.534230] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.326 [2024-11-17 02:30:41.534281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.326 [2024-11-17 02:30:41.549331] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.326 [2024-11-17 02:30:41.549381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:33.326 [2024-11-17 02:30:41.564534] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.326 [2024-11-17 02:30:41.564573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.326 [2024-11-17 02:30:41.580252] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.326 [2024-11-17 02:30:41.580288] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.326 [2024-11-17 02:30:41.594936] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.326 [2024-11-17 02:30:41.594975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.326 [2024-11-17 02:30:41.609971] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.326 [2024-11-17 02:30:41.610010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.326 [2024-11-17 02:30:41.624558] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.326 [2024-11-17 02:30:41.624597] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.326 [2024-11-17 02:30:41.639690] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.326 [2024-11-17 02:30:41.639729] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.326 [2024-11-17 02:30:41.654816] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.326 [2024-11-17 02:30:41.654856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.326 [2024-11-17 02:30:41.670105] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.326 [2024-11-17 02:30:41.670144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.326 [2024-11-17 02:30:41.684777] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.326 [2024-11-17 02:30:41.684817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.326 [2024-11-17 02:30:41.700461] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.326 [2024-11-17 02:30:41.700502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.326 [2024-11-17 02:30:41.715612] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.326 [2024-11-17 02:30:41.715653] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.326 [2024-11-17 02:30:41.730756] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.326 [2024-11-17 02:30:41.730796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.326 [2024-11-17 02:30:41.745957] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.326 [2024-11-17 02:30:41.745997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.327 [2024-11-17 02:30:41.760663] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.327 [2024-11-17 02:30:41.760703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.327 [2024-11-17 02:30:41.775674] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.327 [2024-11-17 02:30:41.775725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.585 [2024-11-17 02:30:41.791757] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.585 [2024-11-17 02:30:41.791797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.585 [2024-11-17 02:30:41.806919] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:33.585 [2024-11-17 02:30:41.806959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.585
[... the same two-line error pair (subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: "Requested NSID 1 already in use", then nvmf_rpc.c:1517:nvmf_rpc_ns_paused: "Unable to add namespace") repeats at roughly 15 ms intervals from [2024-11-17 02:30:41.822417] through [2024-11-17 02:30:43.257989]; only the periodic I/O samples and the final summary are kept below ...]
8319.75 IOPS, 65.00 MiB/s [2024-11-17T01:30:42.305Z]
8317.40 IOPS, 64.98 MiB/s [2024-11-17T01:30:43.341Z]
00:10:34.881 Latency(us)
00:10:34.881 Device Information                                                                         : runtime(s)    IOPS    MiB/s  Fail/s  TO/s   Average      min       max
00:10:34.881 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:10:34.881 Nvme1n1                                                                                    :       5.01  8319.80   65.00    0.00  0.00  15358.46  4951.61  24855.13
00:10:34.881 ===================================================================================================================
00:10:34.881 Total                                                                                      :             8319.80   65.00    0.00  0.00  15358.46  4951.61  24855.13
[... immediately after the summary the same error pair resumes at [2024-11-17 02:30:43.265956] and repeats at roughly 8 ms intervals through [2024-11-17 02:30:43.839689]; the captured log ends mid-record at the timestamp below ...]
[2024-11-17 02:30:43.847682]
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.401 [2024-11-17 02:30:43.855724] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.401 [2024-11-17 02:30:43.855761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.660 [2024-11-17 02:30:43.863739] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.660 [2024-11-17 02:30:43.863780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.660 [2024-11-17 02:30:43.871759] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.660 [2024-11-17 02:30:43.871795] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.660 [2024-11-17 02:30:43.879768] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.660 [2024-11-17 02:30:43.879802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.660 [2024-11-17 02:30:43.887789] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.660 [2024-11-17 02:30:43.887823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.660 [2024-11-17 02:30:43.895792] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.660 [2024-11-17 02:30:43.895824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.660 [2024-11-17 02:30:43.903840] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.660 [2024-11-17 02:30:43.903874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.660 [2024-11-17 02:30:43.911842] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.660 [2024-11-17 02:30:43.911875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:35.660 [2024-11-17 02:30:43.919893] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.660 [2024-11-17 02:30:43.919928] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.660 [2024-11-17 02:30:43.927903] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.660 [2024-11-17 02:30:43.927935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.660 [2024-11-17 02:30:43.935995] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.660 [2024-11-17 02:30:43.936058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.660 [2024-11-17 02:30:43.943998] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.660 [2024-11-17 02:30:43.944035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.660 [2024-11-17 02:30:43.951980] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.660 [2024-11-17 02:30:43.952013] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.660 [2024-11-17 02:30:43.959975] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.660 [2024-11-17 02:30:43.960008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.660 [2024-11-17 02:30:43.968045] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.660 [2024-11-17 02:30:43.968079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.660 [2024-11-17 02:30:43.976026] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.660 [2024-11-17 02:30:43.976060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.660 [2024-11-17 02:30:43.984064] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.660 [2024-11-17 02:30:43.984107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.660 [2024-11-17 02:30:43.992094] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.660 [2024-11-17 02:30:43.992153] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.660 [2024-11-17 02:30:44.000121] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.660 [2024-11-17 02:30:44.000169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.660 [2024-11-17 02:30:44.008161] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.660 [2024-11-17 02:30:44.008189] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.660 [2024-11-17 02:30:44.016182] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.660 [2024-11-17 02:30:44.016210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.660 [2024-11-17 02:30:44.024169] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.660 [2024-11-17 02:30:44.024200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.660 [2024-11-17 02:30:44.032374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.660 [2024-11-17 02:30:44.032439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.660 [2024-11-17 02:30:44.040230] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.660 [2024-11-17 02:30:44.040258] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.660 [2024-11-17 02:30:44.048259] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:35.660 [2024-11-17 02:30:44.048288] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.660 [2024-11-17 02:30:44.056278] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.660 [2024-11-17 02:30:44.056305] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.660 [2024-11-17 02:30:44.064311] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.660 [2024-11-17 02:30:44.064340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.660 [2024-11-17 02:30:44.072323] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.660 [2024-11-17 02:30:44.072351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.660 [2024-11-17 02:30:44.080351] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.661 [2024-11-17 02:30:44.080398] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.661 [2024-11-17 02:30:44.088366] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.661 [2024-11-17 02:30:44.088415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.661 [2024-11-17 02:30:44.096422] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.661 [2024-11-17 02:30:44.096463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.661 [2024-11-17 02:30:44.104422] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.661 [2024-11-17 02:30:44.104455] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.661 [2024-11-17 02:30:44.112467] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.661 
[2024-11-17 02:30:44.112508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.919 [2024-11-17 02:30:44.120522] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.919 [2024-11-17 02:30:44.120560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.919 [2024-11-17 02:30:44.128529] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.919 [2024-11-17 02:30:44.128571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.919 [2024-11-17 02:30:44.136568] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.919 [2024-11-17 02:30:44.136602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.919 [2024-11-17 02:30:44.144614] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.919 [2024-11-17 02:30:44.144648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.919 [2024-11-17 02:30:44.152595] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.919 [2024-11-17 02:30:44.152629] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.919 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2876662) - No such process 00:10:35.919 02:30:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2876662 00:10:35.919 02:30:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:35.919 02:30:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.919 02:30:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:35.919 02:30:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:10:35.919 02:30:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:35.919 02:30:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.919 02:30:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:35.919 delay0 00:10:35.919 02:30:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.919 02:30:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:35.919 02:30:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.919 02:30:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:35.919 02:30:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.919 02:30:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:35.919 [2024-11-17 02:30:44.291579] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:42.475 Initializing NVMe Controllers 00:10:42.475 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:42.475 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:42.475 Initialization complete. Launching workers. 
00:10:42.475 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 135 00:10:42.475 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 422, failed to submit 33 00:10:42.475 success 268, unsuccessful 154, failed 0 00:10:42.475 02:30:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:42.475 02:30:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:42.475 02:30:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:42.475 02:30:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:10:42.475 02:30:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:42.475 02:30:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:10:42.475 02:30:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:42.475 02:30:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:42.475 rmmod nvme_tcp 00:10:42.475 rmmod nvme_fabrics 00:10:42.475 rmmod nvme_keyring 00:10:42.475 02:30:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:42.475 02:30:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:10:42.475 02:30:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:10:42.475 02:30:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2875056 ']' 00:10:42.475 02:30:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2875056 00:10:42.475 02:30:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 2875056 ']' 00:10:42.475 02:30:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 2875056 00:10:42.475 02:30:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@959 -- # uname 00:10:42.475 02:30:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:42.475 02:30:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2875056 00:10:42.475 02:30:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:42.475 02:30:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:42.475 02:30:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2875056' 00:10:42.475 killing process with pid 2875056 00:10:42.475 02:30:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 2875056 00:10:42.475 02:30:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 2875056 00:10:43.851 02:30:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:43.851 02:30:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:43.851 02:30:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:43.851 02:30:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:10:43.851 02:30:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:10:43.851 02:30:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:43.851 02:30:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:10:43.851 02:30:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:43.851 02:30:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:43.851 02:30:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:10:43.851 02:30:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:43.851 02:30:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:45.756 02:30:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:45.756 00:10:45.756 real 0m32.061s 00:10:45.756 user 0m48.310s 00:10:45.756 sys 0m7.886s 00:10:45.756 02:30:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:45.756 02:30:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:45.756 ************************************ 00:10:45.756 END TEST nvmf_zcopy 00:10:45.756 ************************************ 00:10:45.756 02:30:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:45.756 02:30:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:45.756 02:30:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:45.756 02:30:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:45.756 ************************************ 00:10:45.756 START TEST nvmf_nmic 00:10:45.756 ************************************ 00:10:45.756 02:30:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:45.756 * Looking for test storage... 
00:10:45.756 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:45.756 02:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:45.756 02:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:10:45.756 02:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:45.756 02:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:45.756 02:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:45.756 02:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:45.756 02:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:45.756 02:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:45.756 02:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:45.756 02:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:45.756 02:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:45.757 02:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:45.757 02:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:45.757 02:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:45.757 02:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:45.757 02:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:45.757 02:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:45.757 02:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:45.757 02:30:54 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:45.757 02:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:45.757 02:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:45.757 02:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:45.757 02:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:45.757 02:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:45.757 02:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:45.757 02:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:45.757 02:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:45.757 02:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:45.757 02:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:45.757 02:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:45.757 02:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:45.757 02:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:45.757 02:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:45.757 02:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:45.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.757 --rc genhtml_branch_coverage=1 00:10:45.757 --rc genhtml_function_coverage=1 00:10:45.757 --rc genhtml_legend=1 00:10:45.757 --rc geninfo_all_blocks=1 00:10:45.757 --rc geninfo_unexecuted_blocks=1 
00:10:45.757 00:10:45.757 ' 00:10:45.757 02:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:45.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.757 --rc genhtml_branch_coverage=1 00:10:45.757 --rc genhtml_function_coverage=1 00:10:45.757 --rc genhtml_legend=1 00:10:45.757 --rc geninfo_all_blocks=1 00:10:45.757 --rc geninfo_unexecuted_blocks=1 00:10:45.757 00:10:45.757 ' 00:10:45.757 02:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:45.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.757 --rc genhtml_branch_coverage=1 00:10:45.757 --rc genhtml_function_coverage=1 00:10:45.757 --rc genhtml_legend=1 00:10:45.757 --rc geninfo_all_blocks=1 00:10:45.757 --rc geninfo_unexecuted_blocks=1 00:10:45.757 00:10:45.757 ' 00:10:45.757 02:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:45.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.757 --rc genhtml_branch_coverage=1 00:10:45.757 --rc genhtml_function_coverage=1 00:10:45.757 --rc genhtml_legend=1 00:10:45.757 --rc geninfo_all_blocks=1 00:10:45.757 --rc geninfo_unexecuted_blocks=1 00:10:45.757 00:10:45.757 ' 00:10:45.757 02:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:45.757 02:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:45.757 02:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:45.757 02:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:45.757 02:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:45.757 02:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:45.757 02:30:54 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:45.757 02:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:45.757 02:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:45.757 02:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:45.757 02:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:45.757 02:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:45.757 02:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:45.757 02:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:45.757 02:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:45.757 02:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:45.757 02:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:45.757 02:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:45.757 02:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:45.757 02:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:45.757 02:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:45.757 02:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:45.757 02:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:45.757 02:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.757 02:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.757 02:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.757 02:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:45.757 02:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.757 02:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:45.757 02:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:45.757 02:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:45.757 02:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:45.757 02:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:45.757 02:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:45.757 02:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:45.758 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:45.758 02:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:45.758 02:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:45.758 02:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:45.758 02:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:45.758 02:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:45.758 02:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:45.758 02:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:45.758 02:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:45.758 02:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:45.758 02:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:45.758 02:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:45.758 02:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:45.758 02:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:45.758 02:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:45.758 02:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:45.758 02:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:45.758 
02:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:10:45.758 02:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:48.346 02:30:56 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:48.346 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:48.346 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:48.346 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:48.346 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:48.346 
02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:48.346 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:48.346 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:10:48.346 00:10:48.346 --- 10.0.0.2 ping statistics --- 00:10:48.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:48.346 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:10:48.346 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:48.346 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:48.346 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:10:48.346 00:10:48.346 --- 10.0.0.1 ping statistics --- 00:10:48.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:48.347 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:10:48.347 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:48.347 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:10:48.347 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:48.347 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:48.347 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:48.347 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:48.347 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:48.347 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:48.347 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:48.347 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:48.347 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:48.347 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:48.347 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:48.347 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2880210 00:10:48.347 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
00:10:48.347 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 2880210 00:10:48.347 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 2880210 ']' 00:10:48.347 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:48.347 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:48.347 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:48.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:48.347 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:48.347 02:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:48.347 [2024-11-17 02:30:56.427279] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:10:48.347 [2024-11-17 02:30:56.427428] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:48.347 [2024-11-17 02:30:56.572852] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:48.347 [2024-11-17 02:30:56.715436] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:48.347 [2024-11-17 02:30:56.715534] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:48.347 [2024-11-17 02:30:56.715559] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:48.347 [2024-11-17 02:30:56.715582] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:48.347 [2024-11-17 02:30:56.715601] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:48.347 [2024-11-17 02:30:56.718496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:48.347 [2024-11-17 02:30:56.718554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:48.347 [2024-11-17 02:30:56.718603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.347 [2024-11-17 02:30:56.718609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:49.281 02:30:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:49.281 02:30:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:10:49.281 02:30:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:49.281 02:30:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:49.281 02:30:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:49.281 02:30:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:49.281 02:30:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:49.281 02:30:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.281 02:30:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:49.281 [2024-11-17 02:30:57.458915] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:49.281 
02:30:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.281 02:30:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:49.281 02:30:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.281 02:30:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:49.281 Malloc0 00:10:49.281 02:30:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.281 02:30:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:49.281 02:30:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.281 02:30:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:49.281 02:30:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.281 02:30:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:49.281 02:30:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.281 02:30:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:49.282 02:30:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.282 02:30:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:49.282 02:30:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.282 02:30:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:49.282 [2024-11-17 02:30:57.571634] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:49.282 02:30:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.282 02:30:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:49.282 test case1: single bdev can't be used in multiple subsystems 00:10:49.282 02:30:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:49.282 02:30:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.282 02:30:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:49.282 02:30:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.282 02:30:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:49.282 02:30:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.282 02:30:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:49.282 02:30:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.282 02:30:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:49.282 02:30:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:49.282 02:30:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.282 02:30:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:49.282 [2024-11-17 02:30:57.595302] bdev.c:8198:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:49.282 [2024-11-17 
02:30:57.595343] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:49.282 [2024-11-17 02:30:57.595371] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.282 request: 00:10:49.282 { 00:10:49.282 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:49.282 "namespace": { 00:10:49.282 "bdev_name": "Malloc0", 00:10:49.282 "no_auto_visible": false 00:10:49.282 }, 00:10:49.282 "method": "nvmf_subsystem_add_ns", 00:10:49.282 "req_id": 1 00:10:49.282 } 00:10:49.282 Got JSON-RPC error response 00:10:49.282 response: 00:10:49.282 { 00:10:49.282 "code": -32602, 00:10:49.282 "message": "Invalid parameters" 00:10:49.282 } 00:10:49.282 02:30:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:49.282 02:30:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:49.282 02:30:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:49.282 02:30:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:49.282 Adding namespace failed - expected result. 
00:10:49.282 02:30:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:49.282 test case2: host connect to nvmf target in multiple paths 00:10:49.282 02:30:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:49.282 02:30:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.282 02:30:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:49.282 [2024-11-17 02:30:57.603498] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:49.282 02:30:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.282 02:30:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:49.849 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:50.783 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:50.783 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:10:50.783 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:50.783 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:50.783 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 
00:10:52.680 02:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:52.680 02:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:52.680 02:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:52.680 02:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:52.680 02:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:52.680 02:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:10:52.680 02:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:52.680 [global] 00:10:52.680 thread=1 00:10:52.680 invalidate=1 00:10:52.680 rw=write 00:10:52.680 time_based=1 00:10:52.680 runtime=1 00:10:52.680 ioengine=libaio 00:10:52.680 direct=1 00:10:52.680 bs=4096 00:10:52.680 iodepth=1 00:10:52.680 norandommap=0 00:10:52.680 numjobs=1 00:10:52.680 00:10:52.680 verify_dump=1 00:10:52.680 verify_backlog=512 00:10:52.680 verify_state_save=0 00:10:52.680 do_verify=1 00:10:52.680 verify=crc32c-intel 00:10:52.680 [job0] 00:10:52.680 filename=/dev/nvme0n1 00:10:52.680 Could not set queue depth (nvme0n1) 00:10:52.680 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:52.680 fio-3.35 00:10:52.680 Starting 1 thread 00:10:54.053 00:10:54.053 job0: (groupid=0, jobs=1): err= 0: pid=2880903: Sun Nov 17 02:31:02 2024 00:10:54.053 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:10:54.053 slat (nsec): min=5011, max=65549, avg=14525.75, stdev=9015.33 00:10:54.053 clat (usec): min=197, max=41018, avg=382.34, stdev=2071.57 00:10:54.053 lat (usec): min=203, max=41051, 
avg=396.87, stdev=2072.24 00:10:54.053 clat percentiles (usec): 00:10:54.054 | 1.00th=[ 202], 5.00th=[ 208], 10.00th=[ 212], 20.00th=[ 225], 00:10:54.054 | 30.00th=[ 237], 40.00th=[ 247], 50.00th=[ 265], 60.00th=[ 273], 00:10:54.054 | 70.00th=[ 281], 80.00th=[ 310], 90.00th=[ 359], 95.00th=[ 474], 00:10:54.054 | 99.00th=[ 529], 99.50th=[ 594], 99.90th=[41157], 99.95th=[41157], 00:10:54.054 | 99.99th=[41157] 00:10:54.054 write: IOPS=1740, BW=6961KiB/s (7128kB/s)(6968KiB/1001msec); 0 zone resets 00:10:54.054 slat (nsec): min=6403, max=61408, avg=14889.35, stdev=8777.90 00:10:54.054 clat (usec): min=151, max=1124, avg=201.65, stdev=56.71 00:10:54.054 lat (usec): min=158, max=1165, avg=216.53, stdev=61.59 00:10:54.054 clat percentiles (usec): 00:10:54.054 | 1.00th=[ 155], 5.00th=[ 159], 10.00th=[ 161], 20.00th=[ 167], 00:10:54.054 | 30.00th=[ 176], 40.00th=[ 180], 50.00th=[ 186], 60.00th=[ 190], 00:10:54.054 | 70.00th=[ 198], 80.00th=[ 212], 90.00th=[ 277], 95.00th=[ 334], 00:10:54.054 | 99.00th=[ 396], 99.50th=[ 404], 99.90th=[ 486], 99.95th=[ 1123], 00:10:54.054 | 99.99th=[ 1123] 00:10:54.054 bw ( KiB/s): min= 6680, max= 6680, per=95.96%, avg=6680.00, stdev= 0.00, samples=1 00:10:54.054 iops : min= 1670, max= 1670, avg=1670.00, stdev= 0.00, samples=1 00:10:54.054 lat (usec) : 250=66.41%, 500=32.52%, 750=0.92% 00:10:54.054 lat (msec) : 2=0.03%, 50=0.12% 00:10:54.054 cpu : usr=3.00%, sys=4.60%, ctx=3279, majf=0, minf=1 00:10:54.054 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:54.054 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.054 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.054 issued rwts: total=1536,1742,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:54.054 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:54.054 00:10:54.054 Run status group 0 (all jobs): 00:10:54.054 READ: bw=6138KiB/s (6285kB/s), 6138KiB/s-6138KiB/s (6285kB/s-6285kB/s), io=6144KiB 
(6291kB), run=1001-1001msec 00:10:54.054 WRITE: bw=6961KiB/s (7128kB/s), 6961KiB/s-6961KiB/s (7128kB/s-7128kB/s), io=6968KiB (7135kB), run=1001-1001msec 00:10:54.054 00:10:54.054 Disk stats (read/write): 00:10:54.054 nvme0n1: ios=1478/1536, merge=0/0, ticks=572/282, in_queue=854, util=91.68% 00:10:54.054 02:31:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:54.312 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:54.312 02:31:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:54.312 02:31:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:10:54.312 02:31:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:54.312 02:31:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:54.312 02:31:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:54.312 02:31:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:54.312 02:31:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:10:54.312 02:31:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:54.312 02:31:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:54.312 02:31:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:54.312 02:31:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:54.312 02:31:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:54.312 02:31:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:54.312 02:31:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:10:54.312 02:31:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:54.312 rmmod nvme_tcp 00:10:54.312 rmmod nvme_fabrics 00:10:54.312 rmmod nvme_keyring 00:10:54.312 02:31:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:54.312 02:31:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:54.312 02:31:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:54.312 02:31:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2880210 ']' 00:10:54.312 02:31:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2880210 00:10:54.312 02:31:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 2880210 ']' 00:10:54.312 02:31:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 2880210 00:10:54.312 02:31:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:10:54.312 02:31:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:54.312 02:31:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2880210 00:10:54.312 02:31:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:54.312 02:31:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:54.312 02:31:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2880210' 00:10:54.312 killing process with pid 2880210 00:10:54.312 02:31:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 2880210 00:10:54.312 02:31:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 2880210 00:10:55.689 02:31:03 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:55.689 02:31:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:55.689 02:31:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:55.689 02:31:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:55.689 02:31:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:10:55.689 02:31:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:55.689 02:31:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:10:55.689 02:31:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:55.689 02:31:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:55.689 02:31:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:55.689 02:31:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:55.689 02:31:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:57.598 02:31:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:57.598 00:10:57.598 real 0m11.980s 00:10:57.598 user 0m28.634s 00:10:57.598 sys 0m2.789s 00:10:57.598 02:31:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:57.598 02:31:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:57.598 ************************************ 00:10:57.598 END TEST nvmf_nmic 00:10:57.598 ************************************ 00:10:57.598 02:31:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:57.598 02:31:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:57.598 02:31:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:57.598 02:31:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:57.598 ************************************ 00:10:57.598 START TEST nvmf_fio_target 00:10:57.598 ************************************ 00:10:57.598 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:57.856 * Looking for test storage... 00:10:57.856 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:57.857 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:57.857 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:10:57.857 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:57.857 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:57.857 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:57.857 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:57.857 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:57.857 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:57.857 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:57.857 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 
00:10:57.857 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:57.857 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:57.857 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:57.857 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:57.857 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:57.857 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:57.857 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:57.857 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:57.857 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:57.857 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:57.857 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:57.857 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:57.857 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:57.857 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:57.857 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:57.857 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:57.857 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:57.857 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:57.857 02:31:06 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:57.857 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:57.857 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:57.857 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:57.857 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:57.857 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:57.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.857 --rc genhtml_branch_coverage=1 00:10:57.857 --rc genhtml_function_coverage=1 00:10:57.857 --rc genhtml_legend=1 00:10:57.857 --rc geninfo_all_blocks=1 00:10:57.857 --rc geninfo_unexecuted_blocks=1 00:10:57.857 00:10:57.857 ' 00:10:57.857 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:57.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.857 --rc genhtml_branch_coverage=1 00:10:57.857 --rc genhtml_function_coverage=1 00:10:57.857 --rc genhtml_legend=1 00:10:57.857 --rc geninfo_all_blocks=1 00:10:57.857 --rc geninfo_unexecuted_blocks=1 00:10:57.857 00:10:57.857 ' 00:10:57.857 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:57.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.857 --rc genhtml_branch_coverage=1 00:10:57.857 --rc genhtml_function_coverage=1 00:10:57.857 --rc genhtml_legend=1 00:10:57.857 --rc geninfo_all_blocks=1 00:10:57.857 --rc geninfo_unexecuted_blocks=1 00:10:57.857 00:10:57.857 ' 00:10:57.857 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:10:57.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.857 --rc genhtml_branch_coverage=1 00:10:57.857 --rc genhtml_function_coverage=1 00:10:57.857 --rc genhtml_legend=1 00:10:57.857 --rc geninfo_all_blocks=1 00:10:57.857 --rc geninfo_unexecuted_blocks=1 00:10:57.857 00:10:57.857 ' 00:10:57.857 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:57.857 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:57.857 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:57.857 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:57.857 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:57.857 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:57.857 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:57.857 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:57.857 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:57.857 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:57.857 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:57.857 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:57.857 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:57.857 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # 
NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:57.857 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:57.857 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:57.857 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:57.857 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:57.857 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:57.857 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:57.857 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:57.857 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:57.857 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:57.857 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.857 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.857 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.857 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:57.857 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.857 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:57.857 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:57.857 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:57.857 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:57.857 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:57.857 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:57.857 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:57.857 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:57.857 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:57.857 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:57.857 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:57.857 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:57.857 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:57.858 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:57.858 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:57.858 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:57.858 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:57.858 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:57.858 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:57.858 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:57.858 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:57.858 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:57.858 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:57.858 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:57.858 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:57.858 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:10:57.858 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.761 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:59.761 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:10:59.761 02:31:08 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:59.761 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:59.761 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:59.761 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:59.761 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:59.761 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:10:59.761 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:59.761 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:10:59.761 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:10:59.761 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:10:59.761 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:10:59.761 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:10:59.761 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:10:59.761 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:59.761 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:59.761 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:59.761 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:59.761 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:59.761 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:59.761 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:59.761 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:59.761 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:59.761 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:59.761 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:59.761 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:59.761 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:59.761 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:59.761 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:59.761 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:59.761 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:59.761 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:59.761 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:59.761 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:59.761 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:59.761 02:31:08 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:59.761 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:59.761 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:59.761 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:59.761 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:59.761 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:59.761 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:59.761 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:59.761 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:59.761 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:59.761 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:59.761 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:59.761 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:59.761 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:59.761 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:59.761 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:59.761 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:59.761 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:59.761 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:59.761 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:59.761 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:59.761 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:59.761 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:59.761 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:59.761 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:59.761 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:59.761 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:59.761 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:59.761 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:59.761 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:59.761 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:59.761 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:59.762 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:59.762 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:59.762 Found net devices under 0000:0a:00.1: cvl_0_1 
00:10:59.762 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:59.762 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:59.762 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:10:59.762 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:59.762 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:59.762 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:59.762 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:59.762 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:59.762 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:59.762 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:59.762 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:59.762 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:59.762 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:59.762 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:59.762 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:59.762 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:59.762 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:10:59.762 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:59.762 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:59.762 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:59.762 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:59.762 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:59.762 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:59.762 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:59.762 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:00.021 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:00.021 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:00.021 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:00.021 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:00.021 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:00.021 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:11:00.021 00:11:00.021 --- 10.0.0.2 ping statistics --- 00:11:00.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:00.021 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:11:00.021 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:00.021 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:00.021 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:11:00.021 00:11:00.021 --- 10.0.0.1 ping statistics --- 00:11:00.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:00.021 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:11:00.021 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:00.021 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:11:00.021 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:00.021 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:00.021 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:00.021 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:00.021 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:00.021 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:00.021 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:00.021 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:11:00.021 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 
00:11:00.021 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:00.021 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.021 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2883184 00:11:00.021 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:00.021 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2883184 00:11:00.021 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 2883184 ']' 00:11:00.021 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:00.021 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:00.021 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:00.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:00.021 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:00.021 02:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.021 [2024-11-17 02:31:08.369189] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:11:00.021 [2024-11-17 02:31:08.369328] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:00.279 [2024-11-17 02:31:08.512996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:00.279 [2024-11-17 02:31:08.640034] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:00.279 [2024-11-17 02:31:08.640131] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:00.279 [2024-11-17 02:31:08.640173] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:00.279 [2024-11-17 02:31:08.640195] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:00.279 [2024-11-17 02:31:08.640212] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
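The EAL notice above ("Total cores available: 4") and the four reactors started on cores 0-3 follow directly from the `-m 0xF` core mask passed to `nvmf_tgt`. A small helper illustrating how such a mask maps to a core count is sketched below; `count_cores` is a hypothetical name for illustration, not an SPDK function.

```shell
# Hypothetical helper: count the cores selected by an SPDK/DPDK core mask
# such as the -m 0xF seen above (0xF = binary 1111 -> cores 0-3, 4 reactors).
count_cores() {
    local mask=$((16#${1#0x})) count=0
    # Count the set bits; each set bit is one reactor core.
    while ((mask)); do
        ((count += mask & 1))
        ((mask >>= 1))
    done
    echo "$count"
}
```

With this mapping, `-m 0xF` yields four cores, matching the four `reactor_run` notices in the log.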
00:11:00.279 [2024-11-17 02:31:08.643059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:00.279 [2024-11-17 02:31:08.643137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:00.279 [2024-11-17 02:31:08.643222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:00.279 [2024-11-17 02:31:08.643229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:01.217 02:31:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:01.217 02:31:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:11:01.217 02:31:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:01.217 02:31:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:01.217 02:31:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.217 02:31:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:01.217 02:31:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:01.217 [2024-11-17 02:31:09.651651] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:01.475 02:31:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:01.734 02:31:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:11:01.734 02:31:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:01.992 02:31:10 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:11:01.992 02:31:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:02.559 02:31:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:11:02.559 02:31:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:02.822 02:31:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:11:02.822 02:31:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:11:03.080 02:31:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:03.338 02:31:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:11:03.338 02:31:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:03.596 02:31:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:11:03.596 02:31:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:04.163 02:31:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:11:04.163 02:31:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:11:04.421 02:31:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:04.681 02:31:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:04.681 02:31:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:04.938 02:31:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:04.938 02:31:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:05.196 02:31:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:05.454 [2024-11-17 02:31:13.707895] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:05.454 02:31:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:11:05.712 02:31:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:11:05.971 02:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
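After the `nvme connect` above, the harness polls until all four namespaces of the subsystem appear as block devices. The loop below is a sketch reconstructed from the `waitforserial` xtrace in the log (`lsblk -l -o NAME,SERIAL`, `grep -c` on the serial, `sleep 2`, up to 16 iterations); it is an approximation of the autotest_common.sh helper, not its verbatim source.

```shell
# Sketch of the waitforserial polling logic seen in the trace: wait until
# the expected number of block devices carrying the subsystem serial
# (e.g. SPDKISFASTANDAWESOME) show up in lsblk output.
waitforserial() {
    local serial=$1 expected=${2:-1} i=0 found=0
    while ((i++ <= 15)); do
        # Count block devices whose SERIAL column matches.
        found=$(lsblk -l -o NAME,SERIAL 2>/dev/null | grep -c "$serial")
        ((found == expected)) && return 0
        sleep 2
    done
    return 1
}
```

In the log this is invoked as `waitforserial SPDKISFASTANDAWESOME 4`, succeeding once `nvme0n1` through `nvme0n4` are visible.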
00:11:06.537 02:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:11:06.537 02:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:11:06.537 02:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:06.537 02:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:11:06.537 02:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:11:06.537 02:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:11:08.438 02:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:08.438 02:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:08.438 02:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:08.697 02:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:11:08.697 02:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:08.697 02:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:11:08.697 02:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:08.697 [global] 00:11:08.697 thread=1 00:11:08.697 invalidate=1 00:11:08.697 rw=write 00:11:08.697 time_based=1 00:11:08.697 runtime=1 00:11:08.697 ioengine=libaio 00:11:08.697 direct=1 00:11:08.697 bs=4096 00:11:08.697 iodepth=1 00:11:08.697 norandommap=0 00:11:08.697 numjobs=1 00:11:08.697 00:11:08.697 
verify_dump=1 00:11:08.697 verify_backlog=512 00:11:08.697 verify_state_save=0 00:11:08.697 do_verify=1 00:11:08.697 verify=crc32c-intel 00:11:08.697 [job0] 00:11:08.697 filename=/dev/nvme0n1 00:11:08.697 [job1] 00:11:08.697 filename=/dev/nvme0n2 00:11:08.697 [job2] 00:11:08.697 filename=/dev/nvme0n3 00:11:08.697 [job3] 00:11:08.697 filename=/dev/nvme0n4 00:11:08.697 Could not set queue depth (nvme0n1) 00:11:08.697 Could not set queue depth (nvme0n2) 00:11:08.697 Could not set queue depth (nvme0n3) 00:11:08.697 Could not set queue depth (nvme0n4) 00:11:08.697 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:08.697 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:08.697 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:08.697 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:08.697 fio-3.35 00:11:08.697 Starting 4 threads 00:11:10.073 00:11:10.074 job0: (groupid=0, jobs=1): err= 0: pid=2884397: Sun Nov 17 02:31:18 2024 00:11:10.074 read: IOPS=505, BW=2023KiB/s (2072kB/s)(2092KiB/1034msec) 00:11:10.074 slat (nsec): min=5213, max=46338, avg=9393.97, stdev=5237.97 00:11:10.074 clat (usec): min=270, max=42195, avg=1359.59, stdev=6182.27 00:11:10.074 lat (usec): min=278, max=42203, avg=1368.98, stdev=6184.93 00:11:10.074 clat percentiles (usec): 00:11:10.074 | 1.00th=[ 277], 5.00th=[ 302], 10.00th=[ 314], 20.00th=[ 338], 00:11:10.074 | 30.00th=[ 347], 40.00th=[ 363], 50.00th=[ 383], 60.00th=[ 441], 00:11:10.074 | 70.00th=[ 469], 80.00th=[ 490], 90.00th=[ 519], 95.00th=[ 586], 00:11:10.074 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:10.074 | 99.99th=[42206] 00:11:10.074 write: IOPS=990, BW=3961KiB/s (4056kB/s)(4096KiB/1034msec); 0 zone resets 00:11:10.074 slat (nsec): min=6938, max=59549, avg=16067.89, 
stdev=8941.03 00:11:10.074 clat (usec): min=177, max=1303, avg=288.06, stdev=60.25 00:11:10.074 lat (usec): min=186, max=1318, avg=304.13, stdev=62.55 00:11:10.074 clat percentiles (usec): 00:11:10.074 | 1.00th=[ 204], 5.00th=[ 223], 10.00th=[ 231], 20.00th=[ 251], 00:11:10.074 | 30.00th=[ 262], 40.00th=[ 273], 50.00th=[ 281], 60.00th=[ 293], 00:11:10.074 | 70.00th=[ 302], 80.00th=[ 318], 90.00th=[ 338], 95.00th=[ 379], 00:11:10.074 | 99.00th=[ 478], 99.50th=[ 506], 99.90th=[ 775], 99.95th=[ 1303], 00:11:10.074 | 99.99th=[ 1303] 00:11:10.074 bw ( KiB/s): min= 1752, max= 6440, per=27.36%, avg=4096.00, stdev=3314.92, samples=2 00:11:10.074 iops : min= 438, max= 1610, avg=1024.00, stdev=828.73, samples=2 00:11:10.074 lat (usec) : 250=13.25%, 500=81.25%, 750=4.52%, 1000=0.06% 00:11:10.074 lat (msec) : 2=0.06%, 4=0.06%, 50=0.78% 00:11:10.074 cpu : usr=1.36%, sys=2.81%, ctx=1547, majf=0, minf=1 00:11:10.074 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:10.074 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.074 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.074 issued rwts: total=523,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:10.074 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:10.074 job1: (groupid=0, jobs=1): err= 0: pid=2884398: Sun Nov 17 02:31:18 2024 00:11:10.074 read: IOPS=119, BW=479KiB/s (491kB/s)(480KiB/1002msec) 00:11:10.074 slat (nsec): min=7191, max=36168, avg=13434.27, stdev=6870.06 00:11:10.074 clat (usec): min=232, max=41986, avg=7255.18, stdev=15263.55 00:11:10.074 lat (usec): min=244, max=41999, avg=7268.61, stdev=15267.24 00:11:10.074 clat percentiles (usec): 00:11:10.074 | 1.00th=[ 255], 5.00th=[ 273], 10.00th=[ 310], 20.00th=[ 433], 00:11:10.074 | 30.00th=[ 469], 40.00th=[ 490], 50.00th=[ 498], 60.00th=[ 506], 00:11:10.074 | 70.00th=[ 515], 80.00th=[ 537], 90.00th=[41157], 95.00th=[41681], 00:11:10.074 | 99.00th=[42206], 
99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:10.074 | 99.99th=[42206] 00:11:10.074 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:11:10.074 slat (nsec): min=6847, max=54360, avg=12791.36, stdev=7690.74 00:11:10.074 clat (usec): min=171, max=465, avg=236.60, stdev=64.50 00:11:10.074 lat (usec): min=180, max=489, avg=249.39, stdev=66.56 00:11:10.074 clat percentiles (usec): 00:11:10.074 | 1.00th=[ 176], 5.00th=[ 180], 10.00th=[ 186], 20.00th=[ 194], 00:11:10.074 | 30.00th=[ 198], 40.00th=[ 204], 50.00th=[ 212], 60.00th=[ 225], 00:11:10.074 | 70.00th=[ 239], 80.00th=[ 260], 90.00th=[ 351], 95.00th=[ 396], 00:11:10.074 | 99.00th=[ 445], 99.50th=[ 453], 99.90th=[ 465], 99.95th=[ 465], 00:11:10.074 | 99.99th=[ 465] 00:11:10.074 bw ( KiB/s): min= 4096, max= 4096, per=27.36%, avg=4096.00, stdev= 0.00, samples=1 00:11:10.074 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:10.074 lat (usec) : 250=61.39%, 500=29.11%, 750=6.33% 00:11:10.074 lat (msec) : 50=3.16% 00:11:10.074 cpu : usr=0.30%, sys=0.80%, ctx=633, majf=0, minf=1 00:11:10.074 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:10.074 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.074 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.074 issued rwts: total=120,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:10.074 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:10.074 job2: (groupid=0, jobs=1): err= 0: pid=2884399: Sun Nov 17 02:31:18 2024 00:11:10.074 read: IOPS=64, BW=259KiB/s (265kB/s)(268KiB/1036msec) 00:11:10.074 slat (nsec): min=7531, max=58667, avg=27841.07, stdev=10248.17 00:11:10.074 clat (usec): min=393, max=42511, avg=12115.87, stdev=18542.13 00:11:10.074 lat (usec): min=416, max=42544, avg=12143.71, stdev=18539.08 00:11:10.074 clat percentiles (usec): 00:11:10.074 | 1.00th=[ 396], 5.00th=[ 445], 10.00th=[ 453], 20.00th=[ 498], 
00:11:10.074 | 30.00th=[ 519], 40.00th=[ 545], 50.00th=[ 578], 60.00th=[ 611], 00:11:10.074 | 70.00th=[ 685], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:11:10.074 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:11:10.074 | 99.99th=[42730] 00:11:10.074 write: IOPS=494, BW=1977KiB/s (2024kB/s)(2048KiB/1036msec); 0 zone resets 00:11:10.074 slat (usec): min=6, max=40547, avg=143.67, stdev=2144.53 00:11:10.074 clat (usec): min=182, max=555, avg=286.64, stdev=67.61 00:11:10.074 lat (usec): min=191, max=40772, avg=430.31, stdev=2147.21 00:11:10.074 clat percentiles (usec): 00:11:10.074 | 1.00th=[ 192], 5.00th=[ 200], 10.00th=[ 204], 20.00th=[ 215], 00:11:10.074 | 30.00th=[ 237], 40.00th=[ 265], 50.00th=[ 285], 60.00th=[ 306], 00:11:10.074 | 70.00th=[ 318], 80.00th=[ 330], 90.00th=[ 392], 95.00th=[ 404], 00:11:10.074 | 99.00th=[ 486], 99.50th=[ 494], 99.90th=[ 553], 99.95th=[ 553], 00:11:10.074 | 99.99th=[ 553] 00:11:10.074 bw ( KiB/s): min= 4096, max= 4096, per=27.36%, avg=4096.00, stdev= 0.00, samples=1 00:11:10.074 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:10.074 lat (usec) : 250=29.36%, 500=61.14%, 750=6.04%, 1000=0.17% 00:11:10.074 lat (msec) : 50=3.28% 00:11:10.074 cpu : usr=0.58%, sys=0.77%, ctx=583, majf=0, minf=1 00:11:10.074 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:10.074 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.074 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.074 issued rwts: total=67,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:10.074 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:10.074 job3: (groupid=0, jobs=1): err= 0: pid=2884400: Sun Nov 17 02:31:18 2024 00:11:10.074 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:11:10.074 slat (nsec): min=4975, max=56856, avg=16857.87, stdev=10646.97 00:11:10.074 clat (usec): min=238, max=1802, avg=327.88, 
stdev=80.14 00:11:10.074 lat (usec): min=246, max=1816, avg=344.73, stdev=84.79 00:11:10.074 clat percentiles (usec): 00:11:10.074 | 1.00th=[ 247], 5.00th=[ 255], 10.00th=[ 260], 20.00th=[ 273], 00:11:10.074 | 30.00th=[ 281], 40.00th=[ 293], 50.00th=[ 302], 60.00th=[ 318], 00:11:10.074 | 70.00th=[ 347], 80.00th=[ 383], 90.00th=[ 420], 95.00th=[ 486], 00:11:10.074 | 99.00th=[ 545], 99.50th=[ 578], 99.90th=[ 676], 99.95th=[ 1811], 00:11:10.074 | 99.99th=[ 1811] 00:11:10.074 write: IOPS=1827, BW=7309KiB/s (7484kB/s)(7316KiB/1001msec); 0 zone resets 00:11:10.074 slat (nsec): min=6806, max=45166, avg=14653.73, stdev=7001.07 00:11:10.074 clat (usec): min=174, max=513, avg=234.99, stdev=52.66 00:11:10.074 lat (usec): min=183, max=535, avg=249.65, stdev=51.32 00:11:10.074 clat percentiles (usec): 00:11:10.074 | 1.00th=[ 180], 5.00th=[ 188], 10.00th=[ 192], 20.00th=[ 196], 00:11:10.074 | 30.00th=[ 200], 40.00th=[ 206], 50.00th=[ 215], 60.00th=[ 227], 00:11:10.074 | 70.00th=[ 245], 80.00th=[ 269], 90.00th=[ 318], 95.00th=[ 355], 00:11:10.074 | 99.00th=[ 392], 99.50th=[ 412], 99.90th=[ 465], 99.95th=[ 515], 00:11:10.074 | 99.99th=[ 515] 00:11:10.074 bw ( KiB/s): min= 8192, max= 8192, per=54.73%, avg=8192.00, stdev= 0.00, samples=1 00:11:10.074 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:10.074 lat (usec) : 250=40.53%, 500=57.50%, 750=1.93% 00:11:10.074 lat (msec) : 2=0.03% 00:11:10.074 cpu : usr=2.80%, sys=5.30%, ctx=3369, majf=0, minf=1 00:11:10.074 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:10.074 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.074 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.074 issued rwts: total=1536,1829,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:10.074 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:10.074 00:11:10.074 Run status group 0 (all jobs): 00:11:10.074 READ: bw=8672KiB/s (8880kB/s), 
259KiB/s-6138KiB/s (265kB/s-6285kB/s), io=8984KiB (9200kB), run=1001-1036msec 00:11:10.074 WRITE: bw=14.6MiB/s (15.3MB/s), 1977KiB/s-7309KiB/s (2024kB/s-7484kB/s), io=15.1MiB (15.9MB), run=1001-1036msec 00:11:10.074 00:11:10.074 Disk stats (read/write): 00:11:10.074 nvme0n1: ios=568/1024, merge=0/0, ticks=522/274, in_queue=796, util=87.17% 00:11:10.074 nvme0n2: ios=171/512, merge=0/0, ticks=791/121, in_queue=912, util=91.26% 00:11:10.074 nvme0n3: ios=80/512, merge=0/0, ticks=1513/144, in_queue=1657, util=95.00% 00:11:10.074 nvme0n4: ios=1350/1536, merge=0/0, ticks=778/365, in_queue=1143, util=95.80% 00:11:10.074 02:31:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:10.074 [global] 00:11:10.074 thread=1 00:11:10.074 invalidate=1 00:11:10.074 rw=randwrite 00:11:10.074 time_based=1 00:11:10.074 runtime=1 00:11:10.074 ioengine=libaio 00:11:10.074 direct=1 00:11:10.074 bs=4096 00:11:10.074 iodepth=1 00:11:10.074 norandommap=0 00:11:10.074 numjobs=1 00:11:10.074 00:11:10.074 verify_dump=1 00:11:10.074 verify_backlog=512 00:11:10.074 verify_state_save=0 00:11:10.074 do_verify=1 00:11:10.074 verify=crc32c-intel 00:11:10.074 [job0] 00:11:10.074 filename=/dev/nvme0n1 00:11:10.074 [job1] 00:11:10.074 filename=/dev/nvme0n2 00:11:10.074 [job2] 00:11:10.074 filename=/dev/nvme0n3 00:11:10.074 [job3] 00:11:10.074 filename=/dev/nvme0n4 00:11:10.075 Could not set queue depth (nvme0n1) 00:11:10.075 Could not set queue depth (nvme0n2) 00:11:10.075 Could not set queue depth (nvme0n3) 00:11:10.075 Could not set queue depth (nvme0n4) 00:11:10.333 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:10.333 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:10.333 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=1 00:11:10.333 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:10.333 fio-3.35 00:11:10.333 Starting 4 threads 00:11:11.711 00:11:11.711 job0: (groupid=0, jobs=1): err= 0: pid=2884635: Sun Nov 17 02:31:19 2024 00:11:11.711 read: IOPS=446, BW=1786KiB/s (1829kB/s)(1788KiB/1001msec) 00:11:11.711 slat (nsec): min=5664, max=53631, avg=13451.96, stdev=11031.86 00:11:11.711 clat (usec): min=234, max=41122, avg=1959.57, stdev=7995.13 00:11:11.711 lat (usec): min=240, max=41128, avg=1973.03, stdev=7997.04 00:11:11.711 clat percentiles (usec): 00:11:11.711 | 1.00th=[ 247], 5.00th=[ 255], 10.00th=[ 265], 20.00th=[ 273], 00:11:11.711 | 30.00th=[ 285], 40.00th=[ 306], 50.00th=[ 322], 60.00th=[ 330], 00:11:11.711 | 70.00th=[ 347], 80.00th=[ 388], 90.00th=[ 408], 95.00th=[ 490], 00:11:11.711 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:11.711 | 99.99th=[41157] 00:11:11.711 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:11:11.711 slat (nsec): min=7576, max=31127, avg=9477.99, stdev=3320.31 00:11:11.711 clat (usec): min=185, max=500, avg=214.89, stdev=23.05 00:11:11.711 lat (usec): min=193, max=510, avg=224.36, stdev=23.30 00:11:11.711 clat percentiles (usec): 00:11:11.711 | 1.00th=[ 190], 5.00th=[ 194], 10.00th=[ 196], 20.00th=[ 202], 00:11:11.711 | 30.00th=[ 206], 40.00th=[ 210], 50.00th=[ 212], 60.00th=[ 217], 00:11:11.711 | 70.00th=[ 219], 80.00th=[ 225], 90.00th=[ 231], 95.00th=[ 241], 00:11:11.711 | 99.00th=[ 265], 99.50th=[ 392], 99.90th=[ 502], 99.95th=[ 502], 00:11:11.711 | 99.99th=[ 502] 00:11:11.711 bw ( KiB/s): min= 4096, max= 4096, per=22.13%, avg=4096.00, stdev= 0.00, samples=1 00:11:11.711 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:11.711 lat (usec) : 250=53.60%, 500=44.00%, 750=0.52% 00:11:11.711 lat (msec) : 50=1.88% 00:11:11.711 cpu : usr=0.50%, sys=1.50%, ctx=960, majf=0, 
minf=2 00:11:11.711 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:11.711 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.711 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.711 issued rwts: total=447,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:11.711 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:11.711 job1: (groupid=0, jobs=1): err= 0: pid=2884636: Sun Nov 17 02:31:19 2024 00:11:11.711 read: IOPS=1552, BW=6210KiB/s (6359kB/s)(6216KiB/1001msec) 00:11:11.711 slat (nsec): min=5592, max=61554, avg=11228.02, stdev=5824.14 00:11:11.711 clat (usec): min=225, max=40568, avg=311.86, stdev=1031.50 00:11:11.711 lat (usec): min=231, max=40574, avg=323.09, stdev=1031.53 00:11:11.711 clat percentiles (usec): 00:11:11.711 | 1.00th=[ 239], 5.00th=[ 245], 10.00th=[ 251], 20.00th=[ 258], 00:11:11.711 | 30.00th=[ 265], 40.00th=[ 269], 50.00th=[ 273], 60.00th=[ 281], 00:11:11.711 | 70.00th=[ 293], 80.00th=[ 302], 90.00th=[ 314], 95.00th=[ 326], 00:11:11.711 | 99.00th=[ 474], 99.50th=[ 537], 99.90th=[ 5473], 99.95th=[40633], 00:11:11.711 | 99.99th=[40633] 00:11:11.711 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:11:11.711 slat (nsec): min=6151, max=64728, avg=15866.06, stdev=7530.51 00:11:11.711 clat (usec): min=168, max=993, avg=220.35, stdev=36.99 00:11:11.711 lat (usec): min=175, max=1005, avg=236.22, stdev=40.16 00:11:11.711 clat percentiles (usec): 00:11:11.711 | 1.00th=[ 176], 5.00th=[ 184], 10.00th=[ 188], 20.00th=[ 196], 00:11:11.711 | 30.00th=[ 204], 40.00th=[ 212], 50.00th=[ 219], 60.00th=[ 223], 00:11:11.711 | 70.00th=[ 231], 80.00th=[ 239], 90.00th=[ 251], 95.00th=[ 269], 00:11:11.711 | 99.00th=[ 297], 99.50th=[ 318], 99.90th=[ 766], 99.95th=[ 799], 00:11:11.711 | 99.99th=[ 996] 00:11:11.711 bw ( KiB/s): min= 8192, max= 8192, per=44.27%, avg=8192.00, stdev= 0.00, samples=1 00:11:11.711 iops : min= 2048, max= 2048, 
avg=2048.00, stdev= 0.00, samples=1 00:11:11.711 lat (usec) : 250=54.61%, 500=44.95%, 750=0.25%, 1000=0.08% 00:11:11.711 lat (msec) : 2=0.06%, 10=0.03%, 50=0.03% 00:11:11.711 cpu : usr=3.40%, sys=6.70%, ctx=3604, majf=0, minf=1 00:11:11.711 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:11.711 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.711 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.711 issued rwts: total=1554,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:11.711 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:11.711 job2: (groupid=0, jobs=1): err= 0: pid=2884637: Sun Nov 17 02:31:19 2024 00:11:11.711 read: IOPS=22, BW=91.5KiB/s (93.6kB/s)(92.0KiB/1006msec) 00:11:11.711 slat (nsec): min=7036, max=35891, avg=20500.78, stdev=9585.74 00:11:11.711 clat (usec): min=415, max=41987, avg=36297.89, stdev=12403.73 00:11:11.711 lat (usec): min=423, max=42005, avg=36318.39, stdev=12407.99 00:11:11.711 clat percentiles (usec): 00:11:11.711 | 1.00th=[ 416], 5.00th=[ 537], 10.00th=[16581], 20.00th=[40633], 00:11:11.711 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:11.711 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:11.711 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:11.711 | 99.99th=[42206] 00:11:11.711 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets 00:11:11.711 slat (nsec): min=6742, max=41719, avg=12295.51, stdev=4544.02 00:11:11.711 clat (usec): min=184, max=3468, avg=316.88, stdev=155.28 00:11:11.711 lat (usec): min=191, max=3484, avg=329.18, stdev=155.19 00:11:11.711 clat percentiles (usec): 00:11:11.711 | 1.00th=[ 190], 5.00th=[ 208], 10.00th=[ 225], 20.00th=[ 245], 00:11:11.711 | 30.00th=[ 258], 40.00th=[ 297], 50.00th=[ 314], 60.00th=[ 326], 00:11:11.711 | 70.00th=[ 338], 80.00th=[ 396], 90.00th=[ 400], 95.00th=[ 408], 00:11:11.711 | 
99.00th=[ 474], 99.50th=[ 519], 99.90th=[ 3458], 99.95th=[ 3458], 00:11:11.711 | 99.99th=[ 3458] 00:11:11.711 bw ( KiB/s): min= 4096, max= 4096, per=22.13%, avg=4096.00, stdev= 0.00, samples=1 00:11:11.711 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:11.711 lat (usec) : 250=24.49%, 500=70.84%, 750=0.56% 00:11:11.711 lat (msec) : 4=0.19%, 20=0.19%, 50=3.74% 00:11:11.711 cpu : usr=0.70%, sys=0.50%, ctx=536, majf=0, minf=2 00:11:11.711 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:11.711 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.711 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.711 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:11.711 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:11.711 job3: (groupid=0, jobs=1): err= 0: pid=2884638: Sun Nov 17 02:31:19 2024 00:11:11.712 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:11:11.712 slat (nsec): min=5772, max=35619, avg=11128.08, stdev=5009.04 00:11:11.712 clat (usec): min=275, max=498, avg=337.61, stdev=45.05 00:11:11.712 lat (usec): min=282, max=504, avg=348.74, stdev=45.06 00:11:11.712 clat percentiles (usec): 00:11:11.712 | 1.00th=[ 281], 5.00th=[ 293], 10.00th=[ 297], 20.00th=[ 306], 00:11:11.712 | 30.00th=[ 314], 40.00th=[ 322], 50.00th=[ 326], 60.00th=[ 334], 00:11:11.712 | 70.00th=[ 343], 80.00th=[ 355], 90.00th=[ 388], 95.00th=[ 461], 00:11:11.712 | 99.00th=[ 478], 99.50th=[ 486], 99.90th=[ 498], 99.95th=[ 498], 00:11:11.712 | 99.99th=[ 498] 00:11:11.712 write: IOPS=1580, BW=6322KiB/s (6473kB/s)(6328KiB/1001msec); 0 zone resets 00:11:11.712 slat (nsec): min=7822, max=45525, avg=18124.88, stdev=6582.37 00:11:11.712 clat (usec): min=191, max=472, avg=267.11, stdev=51.90 00:11:11.712 lat (usec): min=201, max=493, avg=285.23, stdev=50.07 00:11:11.712 clat percentiles (usec): 00:11:11.712 | 1.00th=[ 204], 5.00th=[ 217], 10.00th=[ 225], 
20.00th=[ 233], 00:11:11.712 | 30.00th=[ 239], 40.00th=[ 243], 50.00th=[ 247], 60.00th=[ 253], 00:11:11.712 | 70.00th=[ 265], 80.00th=[ 310], 90.00th=[ 343], 95.00th=[ 396], 00:11:11.712 | 99.00th=[ 420], 99.50th=[ 449], 99.90th=[ 469], 99.95th=[ 474], 00:11:11.712 | 99.99th=[ 474] 00:11:11.712 bw ( KiB/s): min= 8192, max= 8192, per=44.27%, avg=8192.00, stdev= 0.00, samples=1 00:11:11.712 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:11.712 lat (usec) : 250=27.45%, 500=72.55% 00:11:11.712 cpu : usr=2.90%, sys=6.50%, ctx=3119, majf=0, minf=1 00:11:11.712 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:11.712 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.712 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.712 issued rwts: total=1536,1582,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:11.712 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:11.712 00:11:11.712 Run status group 0 (all jobs): 00:11:11.712 READ: bw=13.8MiB/s (14.5MB/s), 91.5KiB/s-6210KiB/s (93.6kB/s-6359kB/s), io=13.9MiB (14.6MB), run=1001-1006msec 00:11:11.712 WRITE: bw=18.1MiB/s (18.9MB/s), 2036KiB/s-8184KiB/s (2085kB/s-8380kB/s), io=18.2MiB (19.1MB), run=1001-1006msec 00:11:11.712 00:11:11.712 Disk stats (read/write): 00:11:11.712 nvme0n1: ios=218/512, merge=0/0, ticks=1696/107, in_queue=1803, util=96.99% 00:11:11.712 nvme0n2: ios=1520/1536, merge=0/0, ticks=1432/318, in_queue=1750, util=98.48% 00:11:11.712 nvme0n3: ios=76/512, merge=0/0, ticks=987/158, in_queue=1145, util=98.44% 00:11:11.712 nvme0n4: ios=1220/1536, merge=0/0, ticks=854/383, in_queue=1237, util=97.07% 00:11:11.712 02:31:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:11.712 [global] 00:11:11.712 thread=1 00:11:11.712 invalidate=1 00:11:11.712 rw=write 00:11:11.712 
time_based=1 00:11:11.712 runtime=1 00:11:11.712 ioengine=libaio 00:11:11.712 direct=1 00:11:11.712 bs=4096 00:11:11.712 iodepth=128 00:11:11.712 norandommap=0 00:11:11.712 numjobs=1 00:11:11.712 00:11:11.712 verify_dump=1 00:11:11.712 verify_backlog=512 00:11:11.712 verify_state_save=0 00:11:11.712 do_verify=1 00:11:11.712 verify=crc32c-intel 00:11:11.712 [job0] 00:11:11.712 filename=/dev/nvme0n1 00:11:11.712 [job1] 00:11:11.712 filename=/dev/nvme0n2 00:11:11.712 [job2] 00:11:11.712 filename=/dev/nvme0n3 00:11:11.712 [job3] 00:11:11.712 filename=/dev/nvme0n4 00:11:11.712 Could not set queue depth (nvme0n1) 00:11:11.712 Could not set queue depth (nvme0n2) 00:11:11.712 Could not set queue depth (nvme0n3) 00:11:11.712 Could not set queue depth (nvme0n4) 00:11:11.712 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:11.712 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:11.712 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:11.712 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:11.712 fio-3.35 00:11:11.712 Starting 4 threads 00:11:13.093 00:11:13.093 job0: (groupid=0, jobs=1): err= 0: pid=2884868: Sun Nov 17 02:31:21 2024 00:11:13.093 read: IOPS=3035, BW=11.9MiB/s (12.4MB/s)(12.0MiB/1012msec) 00:11:13.093 slat (usec): min=3, max=30996, avg=156.67, stdev=1127.99 00:11:13.093 clat (msec): min=8, max=110, avg=18.89, stdev=17.30 00:11:13.093 lat (msec): min=8, max=110, avg=19.05, stdev=17.45 00:11:13.093 clat percentiles (msec): 00:11:13.093 | 1.00th=[ 10], 5.00th=[ 11], 10.00th=[ 13], 20.00th=[ 14], 00:11:13.093 | 30.00th=[ 14], 40.00th=[ 14], 50.00th=[ 14], 60.00th=[ 14], 00:11:13.093 | 70.00th=[ 16], 80.00th=[ 18], 90.00th=[ 23], 95.00th=[ 64], 00:11:13.093 | 99.00th=[ 99], 99.50th=[ 101], 99.90th=[ 106], 99.95th=[ 
106], 00:11:13.093 | 99.99th=[ 111] 00:11:13.093 write: IOPS=3480, BW=13.6MiB/s (14.3MB/s)(13.8MiB/1012msec); 0 zone resets 00:11:13.093 slat (usec): min=4, max=17200, avg=137.40, stdev=771.78 00:11:13.093 clat (msec): min=7, max=101, avg=19.85, stdev=15.00 00:11:13.093 lat (msec): min=7, max=101, avg=19.99, stdev=15.08 00:11:13.093 clat percentiles (msec): 00:11:13.093 | 1.00th=[ 10], 5.00th=[ 13], 10.00th=[ 14], 20.00th=[ 14], 00:11:13.093 | 30.00th=[ 14], 40.00th=[ 14], 50.00th=[ 15], 60.00th=[ 15], 00:11:13.093 | 70.00th=[ 16], 80.00th=[ 21], 90.00th=[ 36], 95.00th=[ 54], 00:11:13.093 | 99.00th=[ 94], 99.50th=[ 99], 99.90th=[ 102], 99.95th=[ 102], 00:11:13.093 | 99.99th=[ 102] 00:11:13.093 bw ( KiB/s): min= 7696, max=19503, per=23.80%, avg=13599.50, stdev=8348.81, samples=2 00:11:13.093 iops : min= 1924, max= 4875, avg=3399.50, stdev=2086.67, samples=2 00:11:13.093 lat (msec) : 10=2.29%, 20=79.21%, 50=13.09%, 100=5.07%, 250=0.35% 00:11:13.093 cpu : usr=4.45%, sys=7.02%, ctx=391, majf=0, minf=1 00:11:13.093 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:11:13.093 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:13.093 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:13.093 issued rwts: total=3072,3522,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:13.093 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:13.093 job1: (groupid=0, jobs=1): err= 0: pid=2884869: Sun Nov 17 02:31:21 2024 00:11:13.093 read: IOPS=3315, BW=13.0MiB/s (13.6MB/s)(13.0MiB/1004msec) 00:11:13.093 slat (usec): min=2, max=20588, avg=136.30, stdev=920.43 00:11:13.093 clat (usec): min=479, max=78660, avg=15920.42, stdev=8903.43 00:11:13.093 lat (usec): min=3721, max=78695, avg=16056.73, stdev=8995.89 00:11:13.093 clat percentiles (usec): 00:11:13.093 | 1.00th=[ 5276], 5.00th=[ 9503], 10.00th=[10552], 20.00th=[12518], 00:11:13.093 | 30.00th=[12911], 40.00th=[13304], 50.00th=[13960], 60.00th=[14222], 
00:11:13.093 | 70.00th=[14615], 80.00th=[17171], 90.00th=[21890], 95.00th=[25560], 00:11:13.093 | 99.00th=[72877], 99.50th=[72877], 99.90th=[73925], 99.95th=[73925], 00:11:13.093 | 99.99th=[79168] 00:11:13.093 write: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec); 0 zone resets 00:11:13.093 slat (usec): min=3, max=33016, avg=145.02, stdev=997.39 00:11:13.093 clat (usec): min=3064, max=83132, avg=20598.69, stdev=15781.82 00:11:13.093 lat (usec): min=3076, max=83146, avg=20743.71, stdev=15873.00 00:11:13.093 clat percentiles (usec): 00:11:13.093 | 1.00th=[ 4948], 5.00th=[ 9110], 10.00th=[10421], 20.00th=[12518], 00:11:13.093 | 30.00th=[12780], 40.00th=[13435], 50.00th=[13829], 60.00th=[14484], 00:11:13.093 | 70.00th=[17433], 80.00th=[28443], 90.00th=[43254], 95.00th=[57934], 00:11:13.093 | 99.00th=[79168], 99.50th=[79168], 99.90th=[79168], 99.95th=[79168], 00:11:13.093 | 99.99th=[83362] 00:11:13.093 bw ( KiB/s): min=10192, max=18480, per=25.09%, avg=14336.00, stdev=5860.50, samples=2 00:11:13.093 iops : min= 2548, max= 4620, avg=3584.00, stdev=1465.13, samples=2 00:11:13.093 lat (usec) : 500=0.01% 00:11:13.093 lat (msec) : 4=0.75%, 10=6.28%, 20=71.81%, 50=15.61%, 100=5.54% 00:11:13.093 cpu : usr=2.79%, sys=4.89%, ctx=335, majf=0, minf=1 00:11:13.093 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:11:13.093 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:13.093 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:13.093 issued rwts: total=3329,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:13.093 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:13.093 job2: (groupid=0, jobs=1): err= 0: pid=2884870: Sun Nov 17 02:31:21 2024 00:11:13.093 read: IOPS=3552, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1009msec) 00:11:13.093 slat (usec): min=3, max=17788, avg=151.21, stdev=1118.81 00:11:13.093 clat (usec): min=5155, max=41780, avg=18006.26, stdev=4946.85 00:11:13.093 lat (usec): 
min=5163, max=46560, avg=18157.47, stdev=5058.50 00:11:13.094 clat percentiles (usec): 00:11:13.094 | 1.00th=[ 6325], 5.00th=[13698], 10.00th=[14222], 20.00th=[14615], 00:11:13.094 | 30.00th=[14877], 40.00th=[15795], 50.00th=[16450], 60.00th=[17957], 00:11:13.094 | 70.00th=[19268], 80.00th=[21365], 90.00th=[24773], 95.00th=[27657], 00:11:13.094 | 99.00th=[34866], 99.50th=[38536], 99.90th=[41681], 99.95th=[41681], 00:11:13.094 | 99.99th=[41681] 00:11:13.094 write: IOPS=3732, BW=14.6MiB/s (15.3MB/s)(14.7MiB/1009msec); 0 zone resets 00:11:13.094 slat (usec): min=4, max=11927, avg=114.68, stdev=549.53 00:11:13.094 clat (usec): min=1868, max=41786, avg=16845.23, stdev=6150.77 00:11:13.094 lat (usec): min=1878, max=41796, avg=16959.91, stdev=6195.33 00:11:13.094 clat percentiles (usec): 00:11:13.094 | 1.00th=[ 3458], 5.00th=[ 7373], 10.00th=[10421], 20.00th=[14222], 00:11:13.094 | 30.00th=[15270], 40.00th=[15533], 50.00th=[15795], 60.00th=[16319], 00:11:13.094 | 70.00th=[16712], 80.00th=[17957], 90.00th=[28443], 95.00th=[29230], 00:11:13.094 | 99.00th=[35914], 99.50th=[36439], 99.90th=[40109], 99.95th=[41681], 00:11:13.094 | 99.99th=[41681] 00:11:13.094 bw ( KiB/s): min=12728, max=16384, per=25.48%, avg=14556.00, stdev=2585.18, samples=2 00:11:13.094 iops : min= 3182, max= 4096, avg=3639.00, stdev=646.30, samples=2 00:11:13.094 lat (msec) : 2=0.15%, 4=0.76%, 10=5.21%, 20=73.29%, 50=20.59% 00:11:13.094 cpu : usr=3.27%, sys=5.06%, ctx=448, majf=0, minf=1 00:11:13.094 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:11:13.094 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:13.094 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:13.094 issued rwts: total=3584,3766,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:13.094 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:13.094 job3: (groupid=0, jobs=1): err= 0: pid=2884871: Sun Nov 17 02:31:21 2024 00:11:13.094 read: IOPS=3370, 
BW=13.2MiB/s (13.8MB/s)(13.3MiB/1011msec) 00:11:13.094 slat (usec): min=3, max=16098, avg=148.31, stdev=1001.69 00:11:13.094 clat (usec): min=3981, max=49256, avg=18211.43, stdev=6493.43 00:11:13.094 lat (usec): min=5978, max=49284, avg=18359.74, stdev=6551.32 00:11:13.094 clat percentiles (usec): 00:11:13.094 | 1.00th=[ 7635], 5.00th=[11469], 10.00th=[13042], 20.00th=[14353], 00:11:13.094 | 30.00th=[14615], 40.00th=[15664], 50.00th=[16450], 60.00th=[17433], 00:11:13.094 | 70.00th=[18744], 80.00th=[20841], 90.00th=[26608], 95.00th=[30540], 00:11:13.094 | 99.00th=[45351], 99.50th=[47449], 99.90th=[49021], 99.95th=[49021], 00:11:13.094 | 99.99th=[49021] 00:11:13.094 write: IOPS=3545, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1011msec); 0 zone resets 00:11:13.094 slat (usec): min=5, max=15244, avg=126.77, stdev=679.23 00:11:13.094 clat (usec): min=4658, max=49280, avg=18206.76, stdev=7287.07 00:11:13.094 lat (usec): min=4678, max=49291, avg=18333.54, stdev=7356.62 00:11:13.094 clat percentiles (usec): 00:11:13.094 | 1.00th=[ 5997], 5.00th=[ 8356], 10.00th=[10683], 20.00th=[14353], 00:11:13.094 | 30.00th=[15008], 40.00th=[15664], 50.00th=[16188], 60.00th=[16909], 00:11:13.094 | 70.00th=[17695], 80.00th=[23725], 90.00th=[30016], 95.00th=[33817], 00:11:13.094 | 99.00th=[37487], 99.50th=[40633], 99.90th=[41157], 99.95th=[49021], 00:11:13.094 | 99.99th=[49021] 00:11:13.094 bw ( KiB/s): min=12288, max=16384, per=25.09%, avg=14336.00, stdev=2896.31, samples=2 00:11:13.094 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:11:13.094 lat (msec) : 4=0.01%, 10=5.05%, 20=73.13%, 50=21.81% 00:11:13.094 cpu : usr=5.64%, sys=8.81%, ctx=388, majf=0, minf=1 00:11:13.094 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:11:13.094 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:13.094 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:13.094 issued rwts: total=3408,3584,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:11:13.094 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:13.094 00:11:13.094 Run status group 0 (all jobs): 00:11:13.094 READ: bw=51.7MiB/s (54.2MB/s), 11.9MiB/s-13.9MiB/s (12.4MB/s-14.5MB/s), io=52.3MiB (54.9MB), run=1004-1012msec 00:11:13.094 WRITE: bw=55.8MiB/s (58.5MB/s), 13.6MiB/s-14.6MiB/s (14.3MB/s-15.3MB/s), io=56.5MiB (59.2MB), run=1004-1012msec 00:11:13.094 00:11:13.094 Disk stats (read/write): 00:11:13.094 nvme0n1: ios=3120/3072, merge=0/0, ticks=25320/21384, in_queue=46704, util=88.78% 00:11:13.094 nvme0n2: ios=2580/2784, merge=0/0, ticks=22458/24381, in_queue=46839, util=86.57% 00:11:13.094 nvme0n3: ios=3072/3239, merge=0/0, ticks=53004/49272, in_queue=102276, util=88.67% 00:11:13.094 nvme0n4: ios=2617/3007, merge=0/0, ticks=47310/55850, in_queue=103160, util=97.25% 00:11:13.094 02:31:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:13.094 [global] 00:11:13.094 thread=1 00:11:13.094 invalidate=1 00:11:13.094 rw=randwrite 00:11:13.094 time_based=1 00:11:13.094 runtime=1 00:11:13.094 ioengine=libaio 00:11:13.094 direct=1 00:11:13.094 bs=4096 00:11:13.094 iodepth=128 00:11:13.094 norandommap=0 00:11:13.094 numjobs=1 00:11:13.094 00:11:13.094 verify_dump=1 00:11:13.094 verify_backlog=512 00:11:13.094 verify_state_save=0 00:11:13.094 do_verify=1 00:11:13.094 verify=crc32c-intel 00:11:13.094 [job0] 00:11:13.094 filename=/dev/nvme0n1 00:11:13.094 [job1] 00:11:13.094 filename=/dev/nvme0n2 00:11:13.094 [job2] 00:11:13.094 filename=/dev/nvme0n3 00:11:13.094 [job3] 00:11:13.094 filename=/dev/nvme0n4 00:11:13.094 Could not set queue depth (nvme0n1) 00:11:13.094 Could not set queue depth (nvme0n2) 00:11:13.094 Could not set queue depth (nvme0n3) 00:11:13.094 Could not set queue depth (nvme0n4) 00:11:13.353 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=128 00:11:13.353 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:13.353 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:13.353 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:13.353 fio-3.35 00:11:13.353 Starting 4 threads 00:11:14.729 00:11:14.729 job0: (groupid=0, jobs=1): err= 0: pid=2885098: Sun Nov 17 02:31:22 2024 00:11:14.729 read: IOPS=3044, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1009msec) 00:11:14.729 slat (usec): min=2, max=23401, avg=142.79, stdev=958.81 00:11:14.729 clat (usec): min=8839, max=71886, avg=18678.57, stdev=11452.85 00:11:14.729 lat (usec): min=8844, max=71889, avg=18821.36, stdev=11505.48 00:11:14.729 clat percentiles (usec): 00:11:14.729 | 1.00th=[ 9634], 5.00th=[11338], 10.00th=[11731], 20.00th=[13698], 00:11:14.729 | 30.00th=[14222], 40.00th=[14615], 50.00th=[14877], 60.00th=[15139], 00:11:14.729 | 70.00th=[16450], 80.00th=[19530], 90.00th=[26870], 95.00th=[42206], 00:11:14.729 | 99.00th=[71828], 99.50th=[71828], 99.90th=[71828], 99.95th=[71828], 00:11:14.729 | 99.99th=[71828] 00:11:14.729 write: IOPS=3543, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1009msec); 0 zone resets 00:11:14.729 slat (usec): min=3, max=34116, avg=148.93, stdev=1035.26 00:11:14.729 clat (usec): min=6087, max=83591, avg=18902.94, stdev=11746.88 00:11:14.729 lat (usec): min=6098, max=83605, avg=19051.86, stdev=11814.37 00:11:14.729 clat percentiles (usec): 00:11:14.729 | 1.00th=[ 9372], 5.00th=[11338], 10.00th=[11863], 20.00th=[12649], 00:11:14.729 | 30.00th=[13960], 40.00th=[14746], 50.00th=[15270], 60.00th=[15926], 00:11:14.729 | 70.00th=[18744], 80.00th=[21103], 90.00th=[24773], 95.00th=[43779], 00:11:14.729 | 99.00th=[69731], 99.50th=[83362], 99.90th=[83362], 99.95th=[83362], 00:11:14.729 | 99.99th=[83362] 00:11:14.729 bw ( KiB/s): min=10240, max=17344, 
per=22.52%, avg=13792.00, stdev=5023.29, samples=2 00:11:14.729 iops : min= 2560, max= 4336, avg=3448.00, stdev=1255.82, samples=2 00:11:14.729 lat (msec) : 10=1.38%, 20=75.63%, 50=18.46%, 100=4.53% 00:11:14.729 cpu : usr=3.08%, sys=5.65%, ctx=295, majf=0, minf=1 00:11:14.729 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:11:14.729 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.729 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:14.729 issued rwts: total=3072,3575,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:14.729 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:14.729 job1: (groupid=0, jobs=1): err= 0: pid=2885108: Sun Nov 17 02:31:22 2024 00:11:14.729 read: IOPS=4067, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1007msec) 00:11:14.729 slat (usec): min=3, max=12339, avg=114.71, stdev=693.87 00:11:14.729 clat (usec): min=5680, max=39296, avg=14748.41, stdev=3340.18 00:11:14.729 lat (usec): min=5688, max=39312, avg=14863.13, stdev=3396.34 00:11:14.729 clat percentiles (usec): 00:11:14.729 | 1.00th=[ 8979], 5.00th=[10814], 10.00th=[12780], 20.00th=[13435], 00:11:14.729 | 30.00th=[13698], 40.00th=[13829], 50.00th=[14091], 60.00th=[14484], 00:11:14.729 | 70.00th=[14746], 80.00th=[15664], 90.00th=[16909], 95.00th=[20841], 00:11:14.729 | 99.00th=[31589], 99.50th=[36963], 99.90th=[39060], 99.95th=[39060], 00:11:14.729 | 99.99th=[39060] 00:11:14.729 write: IOPS=4161, BW=16.3MiB/s (17.0MB/s)(16.4MiB/1007msec); 0 zone resets 00:11:14.729 slat (usec): min=4, max=10683, avg=109.44, stdev=600.90 00:11:14.729 clat (usec): min=2557, max=54237, avg=16001.91, stdev=7658.81 00:11:14.729 lat (usec): min=2575, max=54272, avg=16111.35, stdev=7711.58 00:11:14.729 clat percentiles (usec): 00:11:14.729 | 1.00th=[ 3425], 5.00th=[ 7242], 10.00th=[11207], 20.00th=[12911], 00:11:14.729 | 30.00th=[13304], 40.00th=[13698], 50.00th=[13829], 60.00th=[14091], 00:11:14.729 | 70.00th=[14484], 
80.00th=[15401], 90.00th=[27395], 95.00th=[33817], 00:11:14.729 | 99.00th=[45876], 99.50th=[47973], 99.90th=[50594], 99.95th=[50594], 00:11:14.729 | 99.99th=[54264] 00:11:14.729 bw ( KiB/s): min=16376, max=16392, per=26.76%, avg=16384.00, stdev=11.31, samples=2 00:11:14.729 iops : min= 4094, max= 4098, avg=4096.00, stdev= 2.83, samples=2 00:11:14.729 lat (msec) : 4=0.65%, 10=4.92%, 20=83.77%, 50=10.57%, 100=0.08% 00:11:14.729 cpu : usr=6.76%, sys=11.73%, ctx=350, majf=0, minf=1 00:11:14.729 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:14.729 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.729 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:14.729 issued rwts: total=4096,4191,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:14.729 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:14.729 job2: (groupid=0, jobs=1): err= 0: pid=2885133: Sun Nov 17 02:31:22 2024 00:11:14.729 read: IOPS=4037, BW=15.8MiB/s (16.5MB/s)(15.8MiB/1002msec) 00:11:14.729 slat (usec): min=3, max=13322, avg=120.52, stdev=683.07 00:11:14.729 clat (usec): min=1193, max=27703, avg=15845.12, stdev=2919.71 00:11:14.729 lat (usec): min=3289, max=28417, avg=15965.64, stdev=2962.44 00:11:14.729 clat percentiles (usec): 00:11:14.729 | 1.00th=[ 4228], 5.00th=[12125], 10.00th=[13435], 20.00th=[14222], 00:11:14.729 | 30.00th=[14877], 40.00th=[15533], 50.00th=[15926], 60.00th=[16188], 00:11:14.729 | 70.00th=[16712], 80.00th=[17433], 90.00th=[18744], 95.00th=[20841], 00:11:14.729 | 99.00th=[24511], 99.50th=[26084], 99.90th=[27657], 99.95th=[27657], 00:11:14.729 | 99.99th=[27657] 00:11:14.729 write: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec); 0 zone resets 00:11:14.729 slat (usec): min=5, max=11812, avg=108.96, stdev=569.52 00:11:14.729 clat (usec): min=2008, max=28435, avg=15323.07, stdev=2964.84 00:11:14.729 lat (usec): min=2022, max=28458, avg=15432.03, stdev=2984.57 00:11:14.729 clat 
percentiles (usec): 00:11:14.729 | 1.00th=[ 4146], 5.00th=[ 9110], 10.00th=[11731], 20.00th=[14353], 00:11:14.729 | 30.00th=[15008], 40.00th=[15664], 50.00th=[16057], 60.00th=[16319], 00:11:14.729 | 70.00th=[16450], 80.00th=[16712], 90.00th=[17433], 95.00th=[19792], 00:11:14.729 | 99.00th=[21103], 99.50th=[21365], 99.90th=[28443], 99.95th=[28443], 00:11:14.729 | 99.99th=[28443] 00:11:14.729 bw ( KiB/s): min=16288, max=16480, per=26.76%, avg=16384.00, stdev=135.76, samples=2 00:11:14.729 iops : min= 4072, max= 4120, avg=4096.00, stdev=33.94, samples=2 00:11:14.729 lat (msec) : 2=0.01%, 4=0.71%, 10=4.47%, 20=90.01%, 50=4.79% 00:11:14.729 cpu : usr=8.09%, sys=11.49%, ctx=440, majf=0, minf=1 00:11:14.729 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:14.729 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.729 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:14.729 issued rwts: total=4046,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:14.729 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:14.729 job3: (groupid=0, jobs=1): err= 0: pid=2885148: Sun Nov 17 02:31:22 2024 00:11:14.729 read: IOPS=3387, BW=13.2MiB/s (13.9MB/s)(13.3MiB/1006msec) 00:11:14.729 slat (usec): min=2, max=15805, avg=156.63, stdev=936.46 00:11:14.729 clat (usec): min=5026, max=68404, avg=20208.52, stdev=9791.35 00:11:14.729 lat (usec): min=5034, max=68412, avg=20365.15, stdev=9843.08 00:11:14.729 clat percentiles (usec): 00:11:14.729 | 1.00th=[12125], 5.00th=[13304], 10.00th=[14091], 20.00th=[15533], 00:11:14.729 | 30.00th=[16057], 40.00th=[16450], 50.00th=[16909], 60.00th=[17433], 00:11:14.729 | 70.00th=[19268], 80.00th=[22152], 90.00th=[28443], 95.00th=[41681], 00:11:14.729 | 99.00th=[62653], 99.50th=[66323], 99.90th=[68682], 99.95th=[68682], 00:11:14.729 | 99.99th=[68682] 00:11:14.729 write: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec); 0 zone resets 00:11:14.729 slat (usec): min=4, 
max=15957, avg=118.79, stdev=647.33 00:11:14.729 clat (usec): min=6051, max=34906, avg=16287.70, stdev=2427.92 00:11:14.729 lat (usec): min=6060, max=34948, avg=16406.49, stdev=2469.22 00:11:14.729 clat percentiles (usec): 00:11:14.729 | 1.00th=[ 7767], 5.00th=[12387], 10.00th=[13566], 20.00th=[14484], 00:11:14.729 | 30.00th=[15664], 40.00th=[16188], 50.00th=[16581], 60.00th=[16909], 00:11:14.729 | 70.00th=[17695], 80.00th=[17957], 90.00th=[19006], 95.00th=[19268], 00:11:14.729 | 99.00th=[20317], 99.50th=[22414], 99.90th=[31327], 99.95th=[32113], 00:11:14.729 | 99.99th=[34866] 00:11:14.729 bw ( KiB/s): min=12288, max=16384, per=23.41%, avg=14336.00, stdev=2896.31, samples=2 00:11:14.729 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:11:14.729 lat (msec) : 10=1.83%, 20=85.04%, 50=11.31%, 100=1.82% 00:11:14.729 cpu : usr=3.88%, sys=8.46%, ctx=395, majf=0, minf=1 00:11:14.729 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:11:14.729 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.729 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:14.729 issued rwts: total=3408,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:14.729 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:14.729 00:11:14.729 Run status group 0 (all jobs): 00:11:14.729 READ: bw=56.6MiB/s (59.4MB/s), 11.9MiB/s-15.9MiB/s (12.5MB/s-16.7MB/s), io=57.1MiB (59.9MB), run=1002-1009msec 00:11:14.729 WRITE: bw=59.8MiB/s (62.7MB/s), 13.8MiB/s-16.3MiB/s (14.5MB/s-17.0MB/s), io=60.3MiB (63.3MB), run=1002-1009msec 00:11:14.729 00:11:14.729 Disk stats (read/write): 00:11:14.729 nvme0n1: ios=2610/2799, merge=0/0, ticks=15472/15028, in_queue=30500, util=86.37% 00:11:14.729 nvme0n2: ios=3239/3584, merge=0/0, ticks=32022/41990, in_queue=74012, util=100.00% 00:11:14.729 nvme0n3: ios=3366/3584, merge=0/0, ticks=29177/30184, in_queue=59361, util=95.81% 00:11:14.729 nvme0n4: ios=2848/3072, merge=0/0, 
ticks=31143/30452, in_queue=61595, util=95.77% 00:11:14.730 02:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:14.730 02:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2885325 00:11:14.730 02:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:14.730 02:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:14.730 [global] 00:11:14.730 thread=1 00:11:14.730 invalidate=1 00:11:14.730 rw=read 00:11:14.730 time_based=1 00:11:14.730 runtime=10 00:11:14.730 ioengine=libaio 00:11:14.730 direct=1 00:11:14.730 bs=4096 00:11:14.730 iodepth=1 00:11:14.730 norandommap=1 00:11:14.730 numjobs=1 00:11:14.730 00:11:14.730 [job0] 00:11:14.730 filename=/dev/nvme0n1 00:11:14.730 [job1] 00:11:14.730 filename=/dev/nvme0n2 00:11:14.730 [job2] 00:11:14.730 filename=/dev/nvme0n3 00:11:14.730 [job3] 00:11:14.730 filename=/dev/nvme0n4 00:11:14.730 Could not set queue depth (nvme0n1) 00:11:14.730 Could not set queue depth (nvme0n2) 00:11:14.730 Could not set queue depth (nvme0n3) 00:11:14.730 Could not set queue depth (nvme0n4) 00:11:14.730 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:14.730 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:14.730 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:14.730 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:14.730 fio-3.35 00:11:14.730 Starting 4 threads 00:11:18.070 02:31:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:18.070 fio: io_u error on file /dev/nvme0n4: 
Operation not supported: read offset=22904832, buflen=4096 00:11:18.070 fio: pid=2885452, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:18.070 02:31:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:18.070 02:31:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:18.070 02:31:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:18.070 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=569344, buflen=4096 00:11:18.070 fio: pid=2885451, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:18.328 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=42004480, buflen=4096 00:11:18.328 fio: pid=2885448, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:18.328 02:31:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:18.328 02:31:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:18.896 02:31:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:18.896 02:31:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:18.896 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=49311744, buflen=4096 00:11:18.896 fio: pid=2885449, err=95/file:io_u.c:1889, func=io_u error, error=Operation not 
supported 00:11:18.896 00:11:18.896 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2885448: Sun Nov 17 02:31:27 2024 00:11:18.896 read: IOPS=2902, BW=11.3MiB/s (11.9MB/s)(40.1MiB/3533msec) 00:11:18.896 slat (usec): min=4, max=11681, avg=13.98, stdev=132.82 00:11:18.896 clat (usec): min=211, max=41959, avg=325.14, stdev=1371.18 00:11:18.896 lat (usec): min=216, max=41994, avg=339.13, stdev=1378.18 00:11:18.896 clat percentiles (usec): 00:11:18.896 | 1.00th=[ 223], 5.00th=[ 233], 10.00th=[ 237], 20.00th=[ 245], 00:11:18.896 | 30.00th=[ 251], 40.00th=[ 258], 50.00th=[ 265], 60.00th=[ 277], 00:11:18.896 | 70.00th=[ 285], 80.00th=[ 302], 90.00th=[ 330], 95.00th=[ 347], 00:11:18.896 | 99.00th=[ 502], 99.50th=[ 537], 99.90th=[41157], 99.95th=[41157], 00:11:18.896 | 99.99th=[41681] 00:11:18.896 bw ( KiB/s): min= 440, max=14224, per=39.62%, avg=11324.00, stdev=5357.74, samples=6 00:11:18.896 iops : min= 110, max= 3556, avg=2831.00, stdev=1339.43, samples=6 00:11:18.896 lat (usec) : 250=29.00%, 500=69.95%, 750=0.89%, 1000=0.01% 00:11:18.896 lat (msec) : 2=0.01%, 10=0.02%, 50=0.12% 00:11:18.896 cpu : usr=1.73%, sys=4.59%, ctx=10258, majf=0, minf=1 00:11:18.896 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:18.896 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.896 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.896 issued rwts: total=10256,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:18.896 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:18.896 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2885449: Sun Nov 17 02:31:27 2024 00:11:18.896 read: IOPS=3069, BW=12.0MiB/s (12.6MB/s)(47.0MiB/3922msec) 00:11:18.896 slat (usec): min=4, max=25683, avg=17.98, stdev=293.28 00:11:18.896 clat (usec): min=206, max=67471, avg=302.73, stdev=1157.32 00:11:18.896 lat 
(usec): min=210, max=67483, avg=320.71, stdev=1197.52 00:11:18.896 clat percentiles (usec): 00:11:18.896 | 1.00th=[ 223], 5.00th=[ 233], 10.00th=[ 239], 20.00th=[ 245], 00:11:18.896 | 30.00th=[ 249], 40.00th=[ 255], 50.00th=[ 265], 60.00th=[ 277], 00:11:18.896 | 70.00th=[ 285], 80.00th=[ 297], 90.00th=[ 318], 95.00th=[ 338], 00:11:18.896 | 99.00th=[ 396], 99.50th=[ 465], 99.90th=[ 832], 99.95th=[40633], 00:11:18.896 | 99.99th=[41157] 00:11:18.896 bw ( KiB/s): min=11752, max=14368, per=46.07%, avg=13167.00, stdev=951.06, samples=7 00:11:18.896 iops : min= 2938, max= 3592, avg=3291.71, stdev=237.79, samples=7 00:11:18.897 lat (usec) : 250=31.55%, 500=68.13%, 750=0.20%, 1000=0.02% 00:11:18.897 lat (msec) : 4=0.02%, 50=0.06%, 100=0.01% 00:11:18.897 cpu : usr=2.22%, sys=4.57%, ctx=12047, majf=0, minf=1 00:11:18.897 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:18.897 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.897 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.897 issued rwts: total=12040,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:18.897 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:18.897 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2885451: Sun Nov 17 02:31:27 2024 00:11:18.897 read: IOPS=42, BW=171KiB/s (175kB/s)(556KiB/3257msec) 00:11:18.897 slat (nsec): min=9854, max=55033, avg=20976.25, stdev=8015.09 00:11:18.897 clat (usec): min=303, max=42309, avg=23234.43, stdev=20281.64 00:11:18.897 lat (usec): min=318, max=42336, avg=23255.28, stdev=20278.83 00:11:18.897 clat percentiles (usec): 00:11:18.897 | 1.00th=[ 306], 5.00th=[ 322], 10.00th=[ 351], 20.00th=[ 367], 00:11:18.897 | 30.00th=[ 404], 40.00th=[ 461], 50.00th=[40633], 60.00th=[41157], 00:11:18.897 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:11:18.897 | 99.00th=[42206], 99.50th=[42206], 
99.90th=[42206], 99.95th=[42206], 00:11:18.897 | 99.99th=[42206] 00:11:18.897 bw ( KiB/s): min= 96, max= 232, per=0.62%, avg=177.33, stdev=47.31, samples=6 00:11:18.897 iops : min= 24, max= 58, avg=44.33, stdev=11.83, samples=6 00:11:18.897 lat (usec) : 500=40.71%, 750=2.86% 00:11:18.897 lat (msec) : 50=55.71% 00:11:18.897 cpu : usr=0.06%, sys=0.12%, ctx=140, majf=0, minf=1 00:11:18.897 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:18.897 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.897 complete : 0=0.7%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.897 issued rwts: total=140,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:18.897 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:18.897 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2885452: Sun Nov 17 02:31:27 2024 00:11:18.897 read: IOPS=1906, BW=7624KiB/s (7807kB/s)(21.8MiB/2934msec) 00:11:18.897 slat (nsec): min=4393, max=66949, avg=13816.63, stdev=7181.02 00:11:18.897 clat (usec): min=226, max=41996, avg=503.29, stdev=2673.20 00:11:18.897 lat (usec): min=232, max=42014, avg=517.11, stdev=2673.70 00:11:18.897 clat percentiles (usec): 00:11:18.897 | 1.00th=[ 239], 5.00th=[ 247], 10.00th=[ 253], 20.00th=[ 262], 00:11:18.897 | 30.00th=[ 273], 40.00th=[ 293], 50.00th=[ 314], 60.00th=[ 343], 00:11:18.897 | 70.00th=[ 367], 80.00th=[ 383], 90.00th=[ 404], 95.00th=[ 465], 00:11:18.897 | 99.00th=[ 537], 99.50th=[ 693], 99.90th=[41681], 99.95th=[42206], 00:11:18.897 | 99.99th=[42206] 00:11:18.897 bw ( KiB/s): min= 3184, max=12952, per=24.19%, avg=6915.20, stdev=3889.41, samples=5 00:11:18.897 iops : min= 796, max= 3238, avg=1728.80, stdev=972.35, samples=5 00:11:18.897 lat (usec) : 250=7.28%, 500=90.31%, 750=1.91%, 1000=0.04% 00:11:18.897 lat (msec) : 20=0.02%, 50=0.43% 00:11:18.897 cpu : usr=1.19%, sys=3.75%, ctx=5593, majf=0, minf=2 00:11:18.897 IO depths : 1=100.0%, 
2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:18.897 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.897 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.897 issued rwts: total=5593,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:18.897 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:18.897 00:11:18.897 Run status group 0 (all jobs): 00:11:18.897 READ: bw=27.9MiB/s (29.3MB/s), 171KiB/s-12.0MiB/s (175kB/s-12.6MB/s), io=109MiB (115MB), run=2934-3922msec 00:11:18.897 00:11:18.897 Disk stats (read/write): 00:11:18.897 nvme0n1: ios=9635/0, merge=0/0, ticks=3104/0, in_queue=3104, util=95.54% 00:11:18.897 nvme0n2: ios=12052/0, merge=0/0, ticks=3591/0, in_queue=3591, util=98.54% 00:11:18.897 nvme0n3: ios=166/0, merge=0/0, ticks=3181/0, in_queue=3181, util=98.78% 00:11:18.897 nvme0n4: ios=5424/0, merge=0/0, ticks=2846/0, in_queue=2846, util=98.88% 00:11:19.155 02:31:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:19.155 02:31:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:19.414 02:31:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:19.414 02:31:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:19.672 02:31:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:19.672 02:31:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:19.931 02:31:28 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:19.931 02:31:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:20.497 02:31:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:20.498 02:31:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 2885325 00:11:20.498 02:31:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:20.498 02:31:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:21.432 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:21.432 02:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:21.432 02:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:11:21.432 02:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:21.432 02:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:21.432 02:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:21.432 02:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:21.432 02:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:11:21.432 02:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:21.432 02:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:21.432 nvmf hotplug test: fio 
failed as expected 00:11:21.432 02:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:21.432 02:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:21.432 02:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:21.432 02:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:21.432 02:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:21.432 02:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:21.432 02:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:21.432 02:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:11:21.432 02:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:21.432 02:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:11:21.432 02:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:21.432 02:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:21.432 rmmod nvme_tcp 00:11:21.432 rmmod nvme_fabrics 00:11:21.690 rmmod nvme_keyring 00:11:21.690 02:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:21.690 02:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:11:21.690 02:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:11:21.690 02:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2883184 ']' 00:11:21.690 02:31:29 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2883184 00:11:21.690 02:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 2883184 ']' 00:11:21.690 02:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 2883184 00:11:21.690 02:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:11:21.690 02:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:21.690 02:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2883184 00:11:21.690 02:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:21.690 02:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:21.690 02:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2883184' 00:11:21.690 killing process with pid 2883184 00:11:21.690 02:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 2883184 00:11:21.690 02:31:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 2883184 00:11:23.067 02:31:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:23.067 02:31:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:23.067 02:31:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:23.067 02:31:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:11:23.067 02:31:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:11:23.067 02:31:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:11:23.067 02:31:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:11:23.067 02:31:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:23.067 02:31:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:23.067 02:31:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:23.067 02:31:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:23.067 02:31:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:24.973 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:24.973 00:11:24.973 real 0m27.171s 00:11:24.973 user 1m35.643s 00:11:24.973 sys 0m7.627s 00:11:24.973 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:24.973 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.973 ************************************ 00:11:24.973 END TEST nvmf_fio_target 00:11:24.973 ************************************ 00:11:24.973 02:31:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:24.973 02:31:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:24.973 02:31:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:24.973 02:31:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:24.973 ************************************ 00:11:24.973 START TEST nvmf_bdevio 00:11:24.973 ************************************ 00:11:24.973 02:31:33 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:24.973 * Looking for test storage... 00:11:24.973 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:24.973 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:24.973 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:11:24.973 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:24.973 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:24.973 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:24.973 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:24.973 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:24.973 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:11:24.973 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:11:24.973 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:11:24.973 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:11:24.973 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:11:24.973 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:11:24.973 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:11:24.973 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:24.973 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
scripts/common.sh@344 -- # case "$op" in 00:11:24.973 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:11:24.973 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:24.973 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:24.973 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:11:24.973 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:11:24.973 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:24.973 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:11:24.973 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:11:24.973 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:11:24.973 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:11:24.973 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:24.973 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:11:24.973 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:11:24.973 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:24.973 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:24.973 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:11:24.973 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:24.973 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 
'LCOV_OPTS= 00:11:24.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.973 --rc genhtml_branch_coverage=1 00:11:24.973 --rc genhtml_function_coverage=1 00:11:24.973 --rc genhtml_legend=1 00:11:24.973 --rc geninfo_all_blocks=1 00:11:24.973 --rc geninfo_unexecuted_blocks=1 00:11:24.973 00:11:24.973 ' 00:11:24.973 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:24.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.973 --rc genhtml_branch_coverage=1 00:11:24.973 --rc genhtml_function_coverage=1 00:11:24.973 --rc genhtml_legend=1 00:11:24.973 --rc geninfo_all_blocks=1 00:11:24.973 --rc geninfo_unexecuted_blocks=1 00:11:24.973 00:11:24.973 ' 00:11:24.973 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:24.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.973 --rc genhtml_branch_coverage=1 00:11:24.973 --rc genhtml_function_coverage=1 00:11:24.973 --rc genhtml_legend=1 00:11:24.973 --rc geninfo_all_blocks=1 00:11:24.973 --rc geninfo_unexecuted_blocks=1 00:11:24.973 00:11:24.973 ' 00:11:24.973 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:24.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.973 --rc genhtml_branch_coverage=1 00:11:24.973 --rc genhtml_function_coverage=1 00:11:24.973 --rc genhtml_legend=1 00:11:24.973 --rc geninfo_all_blocks=1 00:11:24.973 --rc geninfo_unexecuted_blocks=1 00:11:24.973 00:11:24.973 ' 00:11:24.973 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:24.973 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:24.973 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:24.973 02:31:33 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:24.973 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:24.973 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:24.973 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:24.973 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:24.973 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:24.973 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:24.973 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:24.973 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:24.973 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:24.974 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:24.974 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:24.974 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:24.974 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:24.974 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:24.974 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:24.974 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # 
shopt -s extglob 00:11:24.974 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:24.974 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:24.974 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:24.974 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.974 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.974 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.974 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:24.974 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.974 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:11:24.974 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:24.974 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:24.974 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:24.974 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:24.974 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:24.974 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:24.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:24.974 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:24.974 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:24.974 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:24.974 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:24.974 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:24.974 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:11:24.974 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:24.974 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:24.974 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:24.974 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:24.974 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:24.974 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:24.974 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:24.974 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:24.974 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:24.974 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:24.974 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:11:24.974 02:31:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:27.504 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:27.504 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:11:27.504 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:27.504 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:27.504 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:27.504 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:27.504 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:27.504 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:11:27.504 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:27.504 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:11:27.504 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:11:27.504 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:11:27.504 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:11:27.504 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:11:27.504 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:11:27.504 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:27.504 02:31:35 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:27.504 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:27.504 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:27.504 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:27.504 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:27.504 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:27.504 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:27.504 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:27.504 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:27.504 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:27.504 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:27.504 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:27.504 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:27.504 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:27.504 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:27.504 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:27.504 02:31:35 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:27.504 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:27.505 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:27.505 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:27.505 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:27.505 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:27.505 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:27.505 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:27.505 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:27.505 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:27.505 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:27.505 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:27.505 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:27.505 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:27.505 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:27.505 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:27.505 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:27.505 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:27.505 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:27.505 
02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:27.505 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:27.505 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:27.505 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:27.505 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:27.505 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:27.505 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:27.505 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:27.505 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:27.505 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:27.505 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:27.505 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:27.505 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:27.505 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:27.505 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:27.505 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:27.505 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:27.505 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:27.505 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:27.505 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:27.505 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:27.505 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:27.505 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:11:27.505 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:27.505 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:27.505 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:27.505 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:27.505 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:27.505 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:27.505 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:27.505 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:27.505 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:27.505 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:27.505 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:27.505 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:27.505 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:27.505 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:27.505 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:27.505 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:27.505 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:27.505 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:27.505 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:27.505 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:27.505 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:27.505 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:27.505 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:27.505 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:27.505 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:27.505 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:27.505 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:27.505 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms 00:11:27.505 00:11:27.505 --- 10.0.0.2 ping statistics --- 00:11:27.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:27.505 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:11:27.505 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:27.505 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:27.505 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:11:27.505 00:11:27.505 --- 10.0.0.1 ping statistics --- 00:11:27.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:27.505 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:11:27.505 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:27.505 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:11:27.505 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:27.505 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:27.505 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:27.505 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:27.505 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:27.505 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:27.505 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:27.505 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:27.505 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:27.505 02:31:35 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:27.505 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:27.505 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=2888352 00:11:27.505 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:27.505 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2888352 00:11:27.505 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 2888352 ']' 00:11:27.505 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:27.505 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:27.505 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:27.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:27.505 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:27.505 02:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:27.505 [2024-11-17 02:31:35.786403] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:11:27.505 [2024-11-17 02:31:35.786555] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:27.505 [2024-11-17 02:31:35.941647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:27.764 [2024-11-17 02:31:36.087901] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:27.764 [2024-11-17 02:31:36.088004] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:27.764 [2024-11-17 02:31:36.088030] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:27.764 [2024-11-17 02:31:36.088056] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:27.764 [2024-11-17 02:31:36.088077] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:27.764 [2024-11-17 02:31:36.091063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:27.764 [2024-11-17 02:31:36.091158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:11:27.764 [2024-11-17 02:31:36.091215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:27.764 [2024-11-17 02:31:36.091219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:11:28.331 02:31:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:28.331 02:31:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:11:28.331 02:31:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:28.331 02:31:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:28.331 02:31:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:28.331 02:31:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:28.331 02:31:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:28.331 02:31:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.331 02:31:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:28.331 [2024-11-17 02:31:36.756672] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:28.331 02:31:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.331 02:31:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:28.331 02:31:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.331 02:31:36 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:28.589 Malloc0 00:11:28.589 02:31:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.589 02:31:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:28.589 02:31:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.589 02:31:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:28.589 02:31:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.589 02:31:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:28.589 02:31:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.589 02:31:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:28.589 02:31:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.589 02:31:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:28.589 02:31:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.589 02:31:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:28.589 [2024-11-17 02:31:36.876348] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:28.589 02:31:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.589 02:31:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:11:28.589 02:31:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:28.589 02:31:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:11:28.589 02:31:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:11:28.590 02:31:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:28.590 02:31:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:28.590 { 00:11:28.590 "params": { 00:11:28.590 "name": "Nvme$subsystem", 00:11:28.590 "trtype": "$TEST_TRANSPORT", 00:11:28.590 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:28.590 "adrfam": "ipv4", 00:11:28.590 "trsvcid": "$NVMF_PORT", 00:11:28.590 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:28.590 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:28.590 "hdgst": ${hdgst:-false}, 00:11:28.590 "ddgst": ${ddgst:-false} 00:11:28.590 }, 00:11:28.590 "method": "bdev_nvme_attach_controller" 00:11:28.590 } 00:11:28.590 EOF 00:11:28.590 )") 00:11:28.590 02:31:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:11:28.590 02:31:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
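The `gen_nvmf_target_json` trace above shows the config-assembly pattern: one heredoc-built JSON fragment per subsystem, with `${hdgst:-false}`-style defaults, collected into an array and comma-joined before being fed to `jq`. A minimal standalone sketch of that pattern follows — the exported variable values are sample data matching this run, and the final `jq .` pretty-print is replaced by a plain `printf` so the sketch runs without jq:

```shell
#!/usr/bin/env bash
# Sample values standing in for the test environment's exports.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in 1; do
  # ${hdgst:-false} keeps the field valid JSON even when the caller
  # never set the digest knobs, as in nvmf/common.sh@582.
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
  )")
done

# Comma-join the fragments, as the helper does with IFS=, at @585.
IFS=,
printf '%s\n' "${config[*]}"
```

With one subsystem the join is a no-op; with several, `IFS=,` and `"${config[*]}"` emit a comma-separated stream that `jq .` would normalize into one JSON document per attach call.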
00:11:28.590 02:31:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:11:28.590 02:31:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:28.590 "params": { 00:11:28.590 "name": "Nvme1", 00:11:28.590 "trtype": "tcp", 00:11:28.590 "traddr": "10.0.0.2", 00:11:28.590 "adrfam": "ipv4", 00:11:28.590 "trsvcid": "4420", 00:11:28.590 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:28.590 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:28.590 "hdgst": false, 00:11:28.590 "ddgst": false 00:11:28.590 }, 00:11:28.590 "method": "bdev_nvme_attach_controller" 00:11:28.590 }' 00:11:28.590 [2024-11-17 02:31:36.962702] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:11:28.590 [2024-11-17 02:31:36.962878] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2888516 ] 00:11:28.848 [2024-11-17 02:31:37.102761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:28.848 [2024-11-17 02:31:37.237840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:28.848 [2024-11-17 02:31:37.237890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:28.848 [2024-11-17 02:31:37.237895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:29.415 I/O targets: 00:11:29.415 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:29.415 00:11:29.415 00:11:29.415 CUnit - A unit testing framework for C - Version 2.1-3 00:11:29.415 http://cunit.sourceforge.net/ 00:11:29.415 00:11:29.415 00:11:29.415 Suite: bdevio tests on: Nvme1n1 00:11:29.415 Test: blockdev write read block ...passed 00:11:29.673 Test: blockdev write zeroes read block ...passed 00:11:29.673 Test: blockdev write zeroes read no split ...passed 00:11:29.673 Test: blockdev write zeroes read split 
...passed 00:11:29.673 Test: blockdev write zeroes read split partial ...passed 00:11:29.673 Test: blockdev reset ...[2024-11-17 02:31:37.988494] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:11:29.673 [2024-11-17 02:31:37.988688] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2f00 (9): Bad file descriptor 00:11:29.673 [2024-11-17 02:31:38.050772] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:11:29.673 passed 00:11:29.673 Test: blockdev write read 8 blocks ...passed 00:11:29.673 Test: blockdev write read size > 128k ...passed 00:11:29.673 Test: blockdev write read invalid size ...passed 00:11:29.932 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:29.932 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:29.932 Test: blockdev write read max offset ...passed 00:11:29.932 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:29.932 Test: blockdev writev readv 8 blocks ...passed 00:11:29.932 Test: blockdev writev readv 30 x 1block ...passed 00:11:29.932 Test: blockdev writev readv block ...passed 00:11:29.932 Test: blockdev writev readv size > 128k ...passed 00:11:29.932 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:29.932 Test: blockdev comparev and writev ...[2024-11-17 02:31:38.312659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:29.932 [2024-11-17 02:31:38.312738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:29.932 [2024-11-17 02:31:38.312779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:29.932 [2024-11-17 
02:31:38.312806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:29.932 [2024-11-17 02:31:38.313403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:29.932 [2024-11-17 02:31:38.313437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:29.932 [2024-11-17 02:31:38.313471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:29.932 [2024-11-17 02:31:38.313496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:29.932 [2024-11-17 02:31:38.313997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:29.932 [2024-11-17 02:31:38.314030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:29.932 [2024-11-17 02:31:38.314063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:29.932 [2024-11-17 02:31:38.314088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:29.932 [2024-11-17 02:31:38.314572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:29.932 [2024-11-17 02:31:38.314605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:29.932 [2024-11-17 02:31:38.314638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:11:29.932 [2024-11-17 02:31:38.314662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:29.932 passed 00:11:30.191 Test: blockdev nvme passthru rw ...passed 00:11:30.191 Test: blockdev nvme passthru vendor specific ...[2024-11-17 02:31:38.398618] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:30.191 [2024-11-17 02:31:38.398678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:30.191 [2024-11-17 02:31:38.398950] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:30.191 [2024-11-17 02:31:38.398983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:30.191 [2024-11-17 02:31:38.399199] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:30.191 [2024-11-17 02:31:38.399231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:30.191 [2024-11-17 02:31:38.399441] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:30.191 [2024-11-17 02:31:38.399473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:30.191 passed 00:11:30.191 Test: blockdev nvme admin passthru ...passed 00:11:30.191 Test: blockdev copy ...passed 00:11:30.191 00:11:30.191 Run Summary: Type Total Ran Passed Failed Inactive 00:11:30.191 suites 1 1 n/a 0 0 00:11:30.191 tests 23 23 23 0 0 00:11:30.191 asserts 152 152 152 0 n/a 00:11:30.191 00:11:30.191 Elapsed time = 1.296 seconds 
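The `ipts` wrapper seen during setup (common.sh@790) appends `-m comment --comment 'SPDK_NVMF:...'` to every firewall rule it installs, so the `iptr` teardown can later run `iptables-save | grep -v SPDK_NVMF | iptables-restore` and drop only the test's rules. The filtering half of that pattern is plain text processing and can be sketched without root — the rule dump below is fabricated sample data, not output from this run:

```shell
#!/usr/bin/env bash
# Fabricated stand-in for `iptables-save` output: two pre-existing
# rules plus one rule tagged by the test's ipts wrapper.
rules='-A INPUT -i eth0 -p tcp --dport 22 -j ACCEPT
-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment "SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT"
-A INPUT -j DROP'

# Keep every rule except the SPDK-tagged ones; in the real iptr
# helper this filtered stream is piped into `iptables-restore`.
printf '%s\n' "$rules" | grep -v SPDK_NVMF
```

Tagging rules with a grep-able comment at insert time makes cleanup idempotent: teardown never has to remember which rules it added or in what order.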
00:11:31.127 02:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:31.127 02:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.127 02:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:31.127 02:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.127 02:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:31.127 02:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:31.127 02:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:31.127 02:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:11:31.127 02:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:31.127 02:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:11:31.127 02:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:31.127 02:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:31.127 rmmod nvme_tcp 00:11:31.127 rmmod nvme_fabrics 00:11:31.127 rmmod nvme_keyring 00:11:31.127 02:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:31.127 02:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:11:31.127 02:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:11:31.127 02:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 2888352 ']' 00:11:31.127 02:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2888352 00:11:31.127 02:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 
-- # '[' -z 2888352 ']' 00:11:31.127 02:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 2888352 00:11:31.127 02:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:11:31.127 02:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:31.127 02:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2888352 00:11:31.127 02:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:11:31.127 02:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:11:31.127 02:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2888352' 00:11:31.127 killing process with pid 2888352 00:11:31.127 02:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 2888352 00:11:31.127 02:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 2888352 00:11:32.508 02:31:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:32.508 02:31:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:32.508 02:31:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:32.508 02:31:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:11:32.508 02:31:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:11:32.508 02:31:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:32.508 02:31:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:11:32.508 02:31:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k 
]] 00:11:32.508 02:31:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:32.508 02:31:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:32.508 02:31:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:32.508 02:31:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:34.413 02:31:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:34.413 00:11:34.413 real 0m9.525s 00:11:34.413 user 0m22.939s 00:11:34.413 sys 0m2.516s 00:11:34.413 02:31:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:34.413 02:31:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:34.413 ************************************ 00:11:34.413 END TEST nvmf_bdevio 00:11:34.413 ************************************ 00:11:34.413 02:31:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:34.413 00:11:34.413 real 4m31.402s 00:11:34.413 user 11m55.899s 00:11:34.413 sys 1m9.509s 00:11:34.413 02:31:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:34.413 02:31:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:34.413 ************************************ 00:11:34.413 END TEST nvmf_target_core 00:11:34.413 ************************************ 00:11:34.413 02:31:42 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:34.413 02:31:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:34.413 02:31:42 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:34.413 02:31:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set 
+x 00:11:34.413 ************************************ 00:11:34.413 START TEST nvmf_target_extra 00:11:34.413 ************************************ 00:11:34.413 02:31:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:34.672 * Looking for test storage... 00:11:34.672 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:11:34.672 02:31:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:34.672 02:31:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:11:34.672 02:31:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:34.672 02:31:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:34.672 02:31:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:34.672 02:31:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:34.672 02:31:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:34.672 02:31:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:11:34.672 02:31:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:11:34.672 02:31:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:11:34.672 02:31:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:11:34.672 02:31:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:11:34.672 02:31:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:11:34.672 02:31:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:11:34.672 02:31:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:34.672 02:31:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 
00:11:34.672 02:31:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:11:34.672 02:31:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:34.672 02:31:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:34.672 02:31:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:11:34.672 02:31:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:11:34.672 02:31:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:34.672 02:31:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:11:34.672 02:31:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:11:34.672 02:31:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:11:34.672 02:31:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:11:34.672 02:31:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:34.672 02:31:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:11:34.672 02:31:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:11:34.672 02:31:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:34.672 02:31:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:34.672 02:31:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:11:34.672 02:31:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:34.672 02:31:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:34.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.672 --rc genhtml_branch_coverage=1 00:11:34.672 --rc genhtml_function_coverage=1 00:11:34.672 --rc genhtml_legend=1 00:11:34.672 --rc geninfo_all_blocks=1 
00:11:34.672 --rc geninfo_unexecuted_blocks=1 00:11:34.672 00:11:34.672 ' 00:11:34.672 02:31:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:34.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.673 --rc genhtml_branch_coverage=1 00:11:34.673 --rc genhtml_function_coverage=1 00:11:34.673 --rc genhtml_legend=1 00:11:34.673 --rc geninfo_all_blocks=1 00:11:34.673 --rc geninfo_unexecuted_blocks=1 00:11:34.673 00:11:34.673 ' 00:11:34.673 02:31:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:34.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.673 --rc genhtml_branch_coverage=1 00:11:34.673 --rc genhtml_function_coverage=1 00:11:34.673 --rc genhtml_legend=1 00:11:34.673 --rc geninfo_all_blocks=1 00:11:34.673 --rc geninfo_unexecuted_blocks=1 00:11:34.673 00:11:34.673 ' 00:11:34.673 02:31:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:34.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.673 --rc genhtml_branch_coverage=1 00:11:34.673 --rc genhtml_function_coverage=1 00:11:34.673 --rc genhtml_legend=1 00:11:34.673 --rc geninfo_all_blocks=1 00:11:34.673 --rc geninfo_unexecuted_blocks=1 00:11:34.673 00:11:34.673 ' 00:11:34.673 02:31:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:34.673 02:31:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:34.673 02:31:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:34.673 02:31:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:34.673 02:31:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:34.673 02:31:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:34.673 02:31:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:11:34.673 02:31:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:34.673 02:31:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:34.673 02:31:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:34.673 02:31:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:34.673 02:31:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:34.673 02:31:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:34.673 02:31:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:34.673 02:31:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:34.673 02:31:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:34.673 02:31:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:34.673 02:31:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:34.673 02:31:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:34.673 02:31:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:11:34.673 02:31:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:34.673 02:31:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:34.673 02:31:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:34.673 02:31:42 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.673 02:31:42 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.673 02:31:42 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.673 02:31:42 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:34.673 02:31:42 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.673 02:31:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:11:34.673 02:31:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:34.673 02:31:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:34.673 02:31:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:34.673 02:31:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:34.673 02:31:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:34.673 02:31:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:34.673 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:34.673 02:31:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:34.673 02:31:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:34.673 02:31:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:34.673 02:31:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:34.673 02:31:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:34.673 02:31:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:34.673 02:31:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:34.673 02:31:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:34.673 02:31:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:34.673 02:31:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:34.673 ************************************ 00:11:34.673 START TEST nvmf_example 00:11:34.673 ************************************ 00:11:34.673 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:34.673 * Looking for test storage... 00:11:34.673 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:34.673 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:34.673 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:11:34.673 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:34.933 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:34.933 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:34.933 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:34.933 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:34.933 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:11:34.933 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:11:34.933 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:11:34.933 
02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:11:34.933 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:11:34.933 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:11:34.933 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:11:34.933 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:34.933 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:11:34.933 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:11:34.933 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:34.933 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:34.933 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:11:34.933 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:11:34.933 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:34.933 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:11:34.933 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:11:34.933 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:11:34.934 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:11:34.934 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:34.934 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:11:34.934 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
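[editor's note] The trace above walks SPDK's `cmp_versions` helper in `scripts/common.sh` evaluating `lt 1.15 2`: both versions are split on `.`/`-` into arrays, missing fields default to zero, and fields are compared numerically left to right. A standalone sketch of that comparison (the function name `version_lt` and the zero-padding details are my reconstruction from the trace, not the exact SPDK helper):

```shell
#!/usr/bin/env bash
# Sketch of the dotted-version comparison traced above: split both version
# strings on '.' and '-', then compare field by field numerically, treating
# missing trailing fields as 0. Returns 0 (true) iff $1 < $2.
version_lt() {
    local -a ver1 ver2
    IFS=.- read -ra ver1 <<< "$1"
    IFS=.- read -ra ver2 <<< "$2"
    local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    local v a b
    for (( v = 0; v < len; v++ )); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a < b )) && return 0   # strictly less at this field: done
        (( a > b )) && return 1   # strictly greater: not less-than
    done
    return 1                      # all fields equal: not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"       # the check made in this log
version_lt 2.1 2.0 || echo "2.1 >= 2.0"
```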
00:11:34.934 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:34.934 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:34.934 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:11:34.934 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:34.934 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:34.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.934 --rc genhtml_branch_coverage=1 00:11:34.934 --rc genhtml_function_coverage=1 00:11:34.934 --rc genhtml_legend=1 00:11:34.934 --rc geninfo_all_blocks=1 00:11:34.934 --rc geninfo_unexecuted_blocks=1 00:11:34.934 00:11:34.934 ' 00:11:34.934 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:34.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.934 --rc genhtml_branch_coverage=1 00:11:34.934 --rc genhtml_function_coverage=1 00:11:34.934 --rc genhtml_legend=1 00:11:34.934 --rc geninfo_all_blocks=1 00:11:34.934 --rc geninfo_unexecuted_blocks=1 00:11:34.934 00:11:34.934 ' 00:11:34.934 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:34.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.934 --rc genhtml_branch_coverage=1 00:11:34.934 --rc genhtml_function_coverage=1 00:11:34.934 --rc genhtml_legend=1 00:11:34.934 --rc geninfo_all_blocks=1 00:11:34.934 --rc geninfo_unexecuted_blocks=1 00:11:34.934 00:11:34.934 ' 00:11:34.934 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:34.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.934 --rc 
genhtml_branch_coverage=1 00:11:34.934 --rc genhtml_function_coverage=1 00:11:34.934 --rc genhtml_legend=1 00:11:34.934 --rc geninfo_all_blocks=1 00:11:34.934 --rc geninfo_unexecuted_blocks=1 00:11:34.934 00:11:34.934 ' 00:11:34.934 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:34.934 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:11:34.934 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:34.934 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:34.934 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:34.934 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:34.934 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:34.934 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:34.934 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:34.934 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:34.934 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:34.934 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:34.934 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:34.934 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:34.934 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:34.934 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:34.934 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:34.934 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:34.934 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:34.934 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:11:34.934 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:34.934 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:34.934 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:34.934 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.934 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.934 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.934 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:34.934 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.934 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:11:34.934 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:34.934 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:34.934 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:34.934 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:34.934 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:34.934 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:34.934 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:34.934 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:34.934 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:34.934 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:34.935 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:34.935 02:31:43 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:34.935 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:34.935 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:34.935 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:34.935 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:34.935 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:34.935 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:34.935 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:34.935 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:34.935 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:34.935 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:34.935 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:34.935 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:34.935 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:34.935 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:34.935 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:34.935 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:34.935 
02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:34.935 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:34.935 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:34.935 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:11:34.935 02:31:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:36.837 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:36.837 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:11:36.837 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:36.837 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:36.837 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:36.837 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:36.837 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:36.837 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:11:36.837 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:36.837 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:11:36.837 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:11:36.837 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:11:36.837 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:11:36.837 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:11:36.837 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:11:36.837 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:36.837 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:36.837 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:36.837 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:36.837 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:36.837 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:36.837 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:36.837 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:36.837 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:36.837 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:36.837 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:36.837 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:36.837 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:36.837 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:36.837 02:31:45 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:36.837 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:36.837 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:36.837 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:36.837 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:36.837 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:36.837 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:36.837 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:36.837 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:36.837 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:36.837 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:36.837 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:36.837 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:36.837 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:36.837 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:36.838 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:36.838 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:36.838 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:36.838 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:11:36.838 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:36.838 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:36.838 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:36.838 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:36.838 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:36.838 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:36.838 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:36.838 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:36.838 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:36.838 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:36.838 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:36.838 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:36.838 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:36.838 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:36.838 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:36.838 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:36.838 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:36.838 02:31:45 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:36.838 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:36.838 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:36.838 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:36.838 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:36.838 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:36.838 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:36.838 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:36.838 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:11:36.838 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:36.838 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:36.838 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:36.838 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:36.838 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:36.838 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:36.838 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:36.838 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:36.838 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:36.838 
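[editor's note] Having picked `cvl_0_0` as the target-side port and `cvl_0_1` as the initiator side, the trace next performs `nvmf_tcp_init`: create a network namespace, move the target NIC into it, assign 10.0.0.1/10.0.0.2, bring the links up, and open TCP port 4420 in iptables. A dry-run recap of those steps; interface names, addresses, and the port are taken from this log, while the `run` echo wrapper is mine so the commands are printed rather than executed (the real thing needs root and these NICs):

```shell
#!/usr/bin/env bash
# Dry-run recap of the TCP test-network setup performed in this log.
# "run" only prints each command; drop the echo to execute for real.
run() { echo "$@"; }

NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
NVMF_TARGET_INTERFACE=cvl_0_0      # target-side port, moved into the netns
NVMF_INITIATOR_INTERFACE=cvl_0_1   # initiator side, stays in the root netns

run ip netns add "$NVMF_TARGET_NAMESPACE"
run ip link set "$NVMF_TARGET_INTERFACE" netns "$NVMF_TARGET_NAMESPACE"
run ip addr add 10.0.0.1/24 dev "$NVMF_INITIATOR_INTERFACE"
run ip netns exec "$NVMF_TARGET_NAMESPACE" \
    ip addr add 10.0.0.2/24 dev "$NVMF_TARGET_INTERFACE"
run ip link set "$NVMF_INITIATOR_INTERFACE" up
run ip netns exec "$NVMF_TARGET_NAMESPACE" \
    ip link set "$NVMF_TARGET_INTERFACE" up
run ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up
run iptables -I INPUT 1 -i "$NVMF_INITIATOR_INTERFACE" \
    -p tcp --dport 4420 -j ACCEPT
```

The cross-namespace pings that follow in the log (10.0.0.2 from the root namespace, 10.0.0.1 from inside `cvl_0_0_ns_spdk`) verify exactly this plumbing before the nvmf example app is started inside the namespace.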
02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:36.838 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:36.838 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:36.838 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:36.838 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:36.838 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:36.838 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:36.838 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:36.838 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:36.838 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:36.838 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:36.838 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:36.838 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:37.096 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:37.096 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:37.096 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:37.096 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:37.096 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:37.096 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:11:37.096 00:11:37.096 --- 10.0.0.2 ping statistics --- 00:11:37.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:37.096 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:11:37.096 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:37.096 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:37.096 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:11:37.096 00:11:37.096 --- 10.0.0.1 ping statistics --- 00:11:37.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:37.096 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:11:37.096 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:37.096 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:11:37.096 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:37.096 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:37.096 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:37.096 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:37.096 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:37.096 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:37.097 02:31:45 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:37.097 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:37.097 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:37.097 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:37.097 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:37.097 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:11:37.097 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:11:37.097 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2890921 00:11:37.097 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:37.097 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2890921 00:11:37.097 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:37.097 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 2890921 ']' 00:11:37.097 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:37.097 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:37.097 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:11:37.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:37.097 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:37.097 02:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:38.032 02:31:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:38.032 02:31:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:11:38.032 02:31:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:38.032 02:31:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:38.032 02:31:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:38.032 02:31:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:38.032 02:31:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.032 02:31:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:38.032 02:31:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.032 02:31:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:38.032 02:31:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.032 02:31:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:38.291 02:31:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.291 02:31:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:38.291 
02:31:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:38.291 02:31:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.291 02:31:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:38.291 02:31:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.291 02:31:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:38.291 02:31:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:38.291 02:31:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.291 02:31:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:38.291 02:31:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.291 02:31:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:38.291 02:31:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.291 02:31:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:38.291 02:31:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.291 02:31:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:11:38.291 02:31:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 
4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:50.492 Initializing NVMe Controllers 00:11:50.492 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:50.492 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:50.492 Initialization complete. Launching workers. 00:11:50.492 ======================================================== 00:11:50.492 Latency(us) 00:11:50.492 Device Information : IOPS MiB/s Average min max 00:11:50.492 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11997.10 46.86 5336.28 1309.82 15854.76 00:11:50.492 ======================================================== 00:11:50.492 Total : 11997.10 46.86 5336.28 1309.82 15854.76 00:11:50.492 00:11:50.492 02:31:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:50.492 02:31:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:50.492 02:31:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:50.492 02:31:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:11:50.492 02:31:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:50.492 02:31:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:11:50.492 02:31:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:50.492 02:31:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:50.492 rmmod nvme_tcp 00:11:50.492 rmmod nvme_fabrics 00:11:50.492 rmmod nvme_keyring 00:11:50.492 02:31:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:50.492 02:31:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 
00:11:50.492 02:31:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:11:50.492 02:31:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 2890921 ']' 00:11:50.492 02:31:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 2890921 00:11:50.492 02:31:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 2890921 ']' 00:11:50.492 02:31:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 2890921 00:11:50.492 02:31:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:11:50.492 02:31:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:50.492 02:31:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2890921 00:11:50.492 02:31:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:11:50.492 02:31:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:11:50.492 02:31:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2890921' 00:11:50.492 killing process with pid 2890921 00:11:50.492 02:31:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 2890921 00:11:50.492 02:31:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 2890921 00:11:50.492 nvmf threads initialize successfully 00:11:50.492 bdev subsystem init successfully 00:11:50.492 created a nvmf target service 00:11:50.492 create targets's poll groups done 00:11:50.492 all subsystems of target started 00:11:50.492 nvmf target is running 00:11:50.492 all subsystems of target stopped 00:11:50.492 destroy targets's poll groups done 00:11:50.492 destroyed the nvmf target service 00:11:50.492 bdev subsystem 
finish successfully 00:11:50.493 nvmf threads destroy successfully 00:11:50.493 02:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:50.493 02:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:50.493 02:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:50.493 02:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:11:50.493 02:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:11:50.493 02:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:50.493 02:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:11:50.493 02:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:50.493 02:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:50.493 02:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:50.493 02:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:50.493 02:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:51.870 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:51.870 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:51.870 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:51.870 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:51.870 00:11:51.870 real 0m17.232s 00:11:51.870 user 0m48.992s 00:11:51.870 sys 0m3.154s 00:11:51.870 
02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:51.870 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:51.870 ************************************ 00:11:51.870 END TEST nvmf_example 00:11:51.870 ************************************ 00:11:51.870 02:32:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:51.870 02:32:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:51.870 02:32:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:51.870 02:32:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:51.870 ************************************ 00:11:51.870 START TEST nvmf_filesystem 00:11:51.870 ************************************ 00:11:51.870 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:52.131 * Looking for test storage... 
00:11:52.131 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:52.131 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:52.131 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:11:52.131 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:52.131 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:52.131 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:52.131 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:52.131 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:52.131 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:52.131 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:52.131 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:52.131 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:52.132 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:52.132 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:52.132 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:52.132 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:52.132 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:52.132 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:52.132 
02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:52.132 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:52.132 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:52.132 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:52.132 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:52.132 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:52.132 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:52.132 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:52.132 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:52.132 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:52.132 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:52.132 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:52.132 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:52.132 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:52.132 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:52.132 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:52.132 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:52.132 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:11:52.132 --rc genhtml_branch_coverage=1 00:11:52.132 --rc genhtml_function_coverage=1 00:11:52.132 --rc genhtml_legend=1 00:11:52.132 --rc geninfo_all_blocks=1 00:11:52.132 --rc geninfo_unexecuted_blocks=1 00:11:52.132 00:11:52.132 ' 00:11:52.132 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:52.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.132 --rc genhtml_branch_coverage=1 00:11:52.132 --rc genhtml_function_coverage=1 00:11:52.132 --rc genhtml_legend=1 00:11:52.132 --rc geninfo_all_blocks=1 00:11:52.132 --rc geninfo_unexecuted_blocks=1 00:11:52.132 00:11:52.132 ' 00:11:52.132 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:52.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.132 --rc genhtml_branch_coverage=1 00:11:52.132 --rc genhtml_function_coverage=1 00:11:52.132 --rc genhtml_legend=1 00:11:52.132 --rc geninfo_all_blocks=1 00:11:52.132 --rc geninfo_unexecuted_blocks=1 00:11:52.132 00:11:52.132 ' 00:11:52.132 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:52.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.132 --rc genhtml_branch_coverage=1 00:11:52.132 --rc genhtml_function_coverage=1 00:11:52.132 --rc genhtml_legend=1 00:11:52.132 --rc geninfo_all_blocks=1 00:11:52.132 --rc geninfo_unexecuted_blocks=1 00:11:52.132 00:11:52.132 ' 00:11:52.132 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:52.132 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:52.132 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:52.132 02:32:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:52.132 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:52.132 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:52.132 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:52.132 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:52.132 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:52.132 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:52.132 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:11:52.132 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:52.132 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:52.132 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:52.132 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:52.132 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:52.132 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:52.132 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:52.132 02:32:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:52.132 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:52.132 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:52.132 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:52.132 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:52.132 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:52.132 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:52.132 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:11:52.132 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:11:52.132 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:52.132 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:52.132 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:11:52.132 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:11:52.132 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:11:52.132 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:52.132 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:11:52.132 02:32:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:11:52.132 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:11:52.132 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:52.132 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:52.132 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:11:52.132 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:11:52.132 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:11:52.132 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:11:52.132 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:11:52.132 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:11:52.132 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:11:52.132 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:11:52.132 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:11:52.132 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:52.132 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:11:52.133 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:11:52.133 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:11:52.133 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:11:52.133 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:11:52.133 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:11:52.133 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:11:52.133 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:52.133 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:11:52.133 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:11:52.133 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:11:52.133 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:11:52.133 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:52.133 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:11:52.133 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:52.133 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:11:52.133 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:11:52.133 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:11:52.133 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:11:52.133 02:32:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:11:52.133 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:11:52.133 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:11:52.133 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:11:52.133 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:11:52.133 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:52.133 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:11:52.133 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:11:52.133 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:11:52.133 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:11:52.133 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:11:52.133 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:11:52.133 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:11:52.133 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:11:52.133 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:11:52.133 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:11:52.133 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # 
CONFIG_DPDK_PKG_CONFIG=n 00:11:52.133 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:11:52.133 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:11:52.133 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:11:52.133 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:11:52.133 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:11:52.133 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:11:52.133 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:11:52.133 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:11:52.133 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:11:52.133 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:11:52.133 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:11:52.133 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:52.133 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:11:52.133 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:11:52.133 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:11:52.133 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:52.133 02:32:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:52.133 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:52.133 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:52.133 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:52.133 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:52.133 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:52.133 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:52.133 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:52.133 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:52.133 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:52.133 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:52.133 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:52.133 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:52.133 
02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:52.133 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:52.133 #define SPDK_CONFIG_H 00:11:52.133 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:52.133 #define SPDK_CONFIG_APPS 1 00:11:52.133 #define SPDK_CONFIG_ARCH native 00:11:52.133 #define SPDK_CONFIG_ASAN 1 00:11:52.133 #undef SPDK_CONFIG_AVAHI 00:11:52.133 #undef SPDK_CONFIG_CET 00:11:52.133 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:52.133 #define SPDK_CONFIG_COVERAGE 1 00:11:52.133 #define SPDK_CONFIG_CROSS_PREFIX 00:11:52.133 #undef SPDK_CONFIG_CRYPTO 00:11:52.133 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:52.133 #undef SPDK_CONFIG_CUSTOMOCF 00:11:52.133 #undef SPDK_CONFIG_DAOS 00:11:52.133 #define SPDK_CONFIG_DAOS_DIR 00:11:52.133 #define SPDK_CONFIG_DEBUG 1 00:11:52.133 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:52.133 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:52.133 #define SPDK_CONFIG_DPDK_INC_DIR 00:11:52.133 #define SPDK_CONFIG_DPDK_LIB_DIR 00:11:52.133 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:52.133 #undef SPDK_CONFIG_DPDK_UADK 00:11:52.133 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:52.133 #define SPDK_CONFIG_EXAMPLES 1 00:11:52.133 #undef SPDK_CONFIG_FC 00:11:52.133 #define SPDK_CONFIG_FC_PATH 00:11:52.133 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:52.133 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:52.133 #define SPDK_CONFIG_FSDEV 1 00:11:52.133 #undef SPDK_CONFIG_FUSE 00:11:52.133 #undef SPDK_CONFIG_FUZZER 00:11:52.133 #define SPDK_CONFIG_FUZZER_LIB 00:11:52.133 #undef SPDK_CONFIG_GOLANG 00:11:52.133 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:52.133 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:52.133 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:52.133 #define 
SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:52.133 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:52.133 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:52.133 #undef SPDK_CONFIG_HAVE_LZ4 00:11:52.133 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:52.133 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:52.133 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:52.133 #define SPDK_CONFIG_IDXD 1 00:11:52.133 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:52.133 #undef SPDK_CONFIG_IPSEC_MB 00:11:52.133 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:52.133 #define SPDK_CONFIG_ISAL 1 00:11:52.133 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:52.133 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:52.133 #define SPDK_CONFIG_LIBDIR 00:11:52.133 #undef SPDK_CONFIG_LTO 00:11:52.133 #define SPDK_CONFIG_MAX_LCORES 128 00:11:52.133 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:11:52.133 #define SPDK_CONFIG_NVME_CUSE 1 00:11:52.133 #undef SPDK_CONFIG_OCF 00:11:52.133 #define SPDK_CONFIG_OCF_PATH 00:11:52.133 #define SPDK_CONFIG_OPENSSL_PATH 00:11:52.133 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:52.133 #define SPDK_CONFIG_PGO_DIR 00:11:52.134 #undef SPDK_CONFIG_PGO_USE 00:11:52.134 #define SPDK_CONFIG_PREFIX /usr/local 00:11:52.134 #undef SPDK_CONFIG_RAID5F 00:11:52.134 #undef SPDK_CONFIG_RBD 00:11:52.134 #define SPDK_CONFIG_RDMA 1 00:11:52.134 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:52.134 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:52.134 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:52.134 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:52.134 #define SPDK_CONFIG_SHARED 1 00:11:52.134 #undef SPDK_CONFIG_SMA 00:11:52.134 #define SPDK_CONFIG_TESTS 1 00:11:52.134 #undef SPDK_CONFIG_TSAN 00:11:52.134 #define SPDK_CONFIG_UBLK 1 00:11:52.134 #define SPDK_CONFIG_UBSAN 1 00:11:52.134 #undef SPDK_CONFIG_UNIT_TESTS 00:11:52.134 #undef SPDK_CONFIG_URING 00:11:52.134 #define SPDK_CONFIG_URING_PATH 00:11:52.134 #undef SPDK_CONFIG_URING_ZNS 00:11:52.134 #undef SPDK_CONFIG_USDT 00:11:52.134 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:52.134 
#undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:52.134 #undef SPDK_CONFIG_VFIO_USER 00:11:52.134 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:52.134 #define SPDK_CONFIG_VHOST 1 00:11:52.134 #define SPDK_CONFIG_VIRTIO 1 00:11:52.134 #undef SPDK_CONFIG_VTUNE 00:11:52.134 #define SPDK_CONFIG_VTUNE_DIR 00:11:52.134 #define SPDK_CONFIG_WERROR 1 00:11:52.134 #define SPDK_CONFIG_WPDK_DIR 00:11:52.134 #undef SPDK_CONFIG_XNVME 00:11:52.134 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:52.134 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:52.134 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:52.134 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:52.134 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:52.134 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:52.134 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:52.134 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:11:52.134 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.134 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.134 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:52.134 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.134 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:52.134 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:52.134 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:52.134 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:52.134 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:52.134 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:52.134 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:52.134 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:52.134 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # 
PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:52.134 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:52.134 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:52.134 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:52.134 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:52.134 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:52.134 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:52.134 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:52.134 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:52.134 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:52.134 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:52.134 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:52.134 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:52.134 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:52.134 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:52.134 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:11:52.134 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:52.134 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:52.134 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:52.134 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:11:52.134 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:52.134 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:52.134 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:52.134 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:52.134 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:52.134 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:52.134 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:52.134 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:52.134 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:52.134 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:52.134 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:52.134 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:52.134 02:32:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:52.134 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:52.134 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:52.134 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:52.134 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:11:52.134 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:52.134 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:52.135 
02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:52.135 02:32:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:52.135 
02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:11:52.135 02:32:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:52.135 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 
00:11:52.136 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:52.136 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:11:52.136 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:11:52.136 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:52.136 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:52.136 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:52.136 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:52.136 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:52.136 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:52.136 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:52.136 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:52.136 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:52.136 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:52.136 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:52.136 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:52.136 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:52.136 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:11:52.136 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:52.136 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:52.136 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:52.136 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:52.136 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:52.136 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:11:52.136 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:11:52.136 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:11:52.136 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:52.136 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:52.136 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:52.136 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:52.136 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:11:52.136 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:11:52.136 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:52.136 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:52.136 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:52.136 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:52.136 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:52.136 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:52.136 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:52.136 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:52.136 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:52.136 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:52.136 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:52.136 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:52.136 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:11:52.136 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:11:52.136 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:11:52.136 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:11:52.136 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:11:52.136 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:52.136 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:11:52.136 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:11:52.136 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:11:52.136 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:11:52.136 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:11:52.136 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:11:52.136 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:11:52.136 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:11:52.136 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:11:52.136 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:11:52.136 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:11:52.137 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@290 -- # MAKEFLAGS=-j48 00:11:52.137 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:11:52.137 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:11:52.137 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:11:52.137 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:11:52.137 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:11:52.137 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:11:52.137 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:11:52.137 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 2892870 ]] 00:11:52.137 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 2892870 00:11:52.137 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:11:52.137 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:11:52.137 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:11:52.137 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:11:52.137 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:11:52.137 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:11:52.137 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:11:52.137 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:11:52.137 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.NO5OiB 00:11:52.137 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:52.137 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:11:52.137 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:11:52.137 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.NO5OiB/tests/target /tmp/spdk.NO5OiB 00:11:52.137 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:11:52.137 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:52.137 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:11:52.137 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:11:52.137 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:11:52.137 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:11:52.137 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:11:52.137 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
sizes["$mount"]=67108864 00:11:52.137 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:11:52.137 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:52.137 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:11:52.137 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:11:52.137 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:11:52.137 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:11:52.137 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:11:52.137 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:52.137 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:11:52.137 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:11:52.137 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=55071059968 00:11:52.137 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=61988532224 00:11:52.137 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6917472256 00:11:52.137 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:52.137 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:52.137 
02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:52.137 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=30982897664 00:11:52.137 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=30994264064 00:11:52.137 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11366400 00:11:52.137 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:52.137 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:52.137 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:52.137 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12375269376 00:11:52.137 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12397707264 00:11:52.137 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=22437888 00:11:52.137 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:52.137 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:52.137 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:52.137 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=30993739776 00:11:52.137 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=30994268160 00:11:52.137 02:32:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=528384 00:11:52.137 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:52.137 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:52.137 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:52.137 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=6198837248 00:11:52.137 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=6198849536 00:11:52.138 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:11:52.138 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:52.138 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:11:52.138 * Looking for test storage... 
00:11:52.138 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:11:52.138 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:11:52.138 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:52.138 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:52.138 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:11:52.138 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=55071059968 00:11:52.138 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:11:52.138 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:11:52.138 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:11:52.138 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:11:52.138 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:11:52.138 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=9132064768 00:11:52.138 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:52.138 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:52.138 02:32:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:52.138 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:52.138 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:52.138 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:11:52.138 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:11:52.138 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:11:52.138 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:52.138 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:52.138 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:11:52.138 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:11:52.138 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:52.138 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:52.138 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:52.138 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:52.138 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:52.138 02:32:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:52.138 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:52.138 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:52.138 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:52.138 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:11:52.138 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:52.397 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:52.397 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:52.397 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:52.397 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:52.397 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:52.397 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:52.397 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:52.397 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:52.397 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:52.397 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:52.397 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:52.397 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:11:52.397 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:52.397 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:52.397 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:52.397 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:52.397 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:52.397 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:52.397 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:52.397 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:52.397 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:52.397 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:52.397 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:52.397 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:52.397 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:52.398 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:52.398 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:52.398 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:52.398 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:52.398 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:52.398 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:52.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.398 --rc genhtml_branch_coverage=1 00:11:52.398 --rc genhtml_function_coverage=1 00:11:52.398 --rc genhtml_legend=1 00:11:52.398 --rc geninfo_all_blocks=1 00:11:52.398 --rc geninfo_unexecuted_blocks=1 00:11:52.398 00:11:52.398 ' 00:11:52.398 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:52.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.398 --rc genhtml_branch_coverage=1 00:11:52.398 --rc genhtml_function_coverage=1 00:11:52.398 --rc genhtml_legend=1 00:11:52.398 --rc geninfo_all_blocks=1 00:11:52.398 --rc geninfo_unexecuted_blocks=1 00:11:52.398 00:11:52.398 ' 00:11:52.398 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:52.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.398 --rc genhtml_branch_coverage=1 00:11:52.398 --rc genhtml_function_coverage=1 00:11:52.398 --rc genhtml_legend=1 00:11:52.398 --rc geninfo_all_blocks=1 00:11:52.398 --rc geninfo_unexecuted_blocks=1 00:11:52.398 00:11:52.398 ' 00:11:52.398 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:52.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.398 --rc genhtml_branch_coverage=1 00:11:52.398 --rc genhtml_function_coverage=1 00:11:52.398 --rc genhtml_legend=1 00:11:52.398 --rc geninfo_all_blocks=1 00:11:52.398 --rc geninfo_unexecuted_blocks=1 00:11:52.398 00:11:52.398 ' 00:11:52.398 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:52.398 02:32:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:11:52.398 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:52.398 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:52.398 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:52.398 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:52.398 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:52.398 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:52.398 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:52.398 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:52.398 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:52.398 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:52.398 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:52.398 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:52.398 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:52.398 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:52.398 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:52.398 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:52.398 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:52.398 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:52.398 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:52.398 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:52.398 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:52.398 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.398 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.398 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.398 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:52.398 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.398 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:11:52.398 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:52.398 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:52.398 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:52.398 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:52.398 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:52.398 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:52.398 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:52.398 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:52.398 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:52.398 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:52.398 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:11:52.398 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:52.398 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:52.398 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:52.398 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:52.398 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:52.398 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:52.398 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:52.398 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:52.399 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:52.399 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:52.399 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:52.399 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:52.399 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:52.399 02:32:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:54.360 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:54.360 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:54.360 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:11:54.360 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:54.360 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:54.360 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:54.360 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:54.360 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:54.360 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:54.360 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:11:54.360 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:54.360 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:11:54.360 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:11:54.360 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:11:54.360 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:54.360 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:54.360 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:54.360 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:54.360 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:54.360 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:54.360 02:32:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:54.360 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:54.360 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:54.360 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:54.360 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:54.360 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:54.360 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:54.360 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:54.360 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:54.360 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:54.360 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:54.360 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:54.360 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:54.360 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:54.360 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:54.360 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:54.360 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:11:54.360 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:54.360 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:54.360 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:54.360 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:54.360 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:54.360 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:54.360 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:54.360 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:54.360 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:54.360 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:54.360 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:54.360 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:54.360 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:54.360 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:54.360 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:54.360 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:54.360 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:54.360 02:32:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:54.360 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:54.361 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:54.361 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:54.361 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:54.361 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:54.361 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:54.361 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:54.361 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:54.361 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:54.361 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:54.361 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:54.361 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:54.361 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:54.361 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:54.361 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:54.361 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:54.361 02:32:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:54.361 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:54.361 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:11:54.361 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:54.361 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:54.361 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:54.361 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:54.361 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:54.361 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:54.361 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:54.361 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:54.361 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:54.361 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:54.361 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:54.361 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:54.361 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:54.361 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:11:54.361 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:54.361 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:54.361 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:54.361 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:54.643 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:54.643 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:54.643 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:54.643 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:54.643 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:54.643 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:54.643 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:54.643 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:54.643 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:54.643 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:11:54.643 00:11:54.643 --- 10.0.0.2 ping statistics --- 00:11:54.643 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:54.643 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:11:54.643 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:54.643 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:54.643 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:11:54.643 00:11:54.643 --- 10.0.0.1 ping statistics --- 00:11:54.643 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:54.643 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:11:54.643 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:54.643 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:11:54.643 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:54.643 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:54.643 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:54.643 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:54.643 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:54.643 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:54.643 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:54.643 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:54.643 02:32:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:54.643 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:54.643 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:54.643 ************************************ 00:11:54.643 START TEST nvmf_filesystem_no_in_capsule 00:11:54.643 ************************************ 00:11:54.643 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:11:54.643 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:54.643 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:54.643 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:54.643 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:54.643 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:54.643 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2894513 00:11:54.643 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:54.643 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2894513 00:11:54.643 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@835 -- # '[' -z 2894513 ']' 00:11:54.643 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:54.643 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:54.643 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:54.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:54.643 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:54.643 02:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:54.643 [2024-11-17 02:32:03.048479] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:11:54.643 [2024-11-17 02:32:03.048638] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:54.902 [2024-11-17 02:32:03.197629] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:54.902 [2024-11-17 02:32:03.337445] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:54.902 [2024-11-17 02:32:03.337524] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:54.902 [2024-11-17 02:32:03.337550] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:54.902 [2024-11-17 02:32:03.337574] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:54.902 [2024-11-17 02:32:03.337599] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:54.902 [2024-11-17 02:32:03.340333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:54.902 [2024-11-17 02:32:03.340404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:54.902 [2024-11-17 02:32:03.340499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:54.902 [2024-11-17 02:32:03.340504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:55.838 02:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:55.838 02:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:55.838 02:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:55.838 02:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:55.838 02:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:55.838 02:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:55.838 02:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:55.838 02:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:55.838 02:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.838 02:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:55.838 [2024-11-17 02:32:04.090564] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:55.838 02:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.838 02:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:55.838 02:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.838 02:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:56.405 Malloc1 00:11:56.405 02:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.405 02:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:56.405 02:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.405 02:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:56.405 02:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.405 02:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:56.405 02:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.405 02:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:56.405 02:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.405 02:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:56.405 02:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.405 02:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:56.405 [2024-11-17 02:32:04.692167] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:56.405 02:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.405 02:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:56.405 02:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:56.406 02:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:56.406 02:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:56.406 02:32:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:56.406 02:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:56.406 02:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.406 02:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:56.406 02:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.406 02:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:56.406 { 00:11:56.406 "name": "Malloc1", 00:11:56.406 "aliases": [ 00:11:56.406 "2e5a387c-298f-4c87-b9a6-122dd8e9eaba" 00:11:56.406 ], 00:11:56.406 "product_name": "Malloc disk", 00:11:56.406 "block_size": 512, 00:11:56.406 "num_blocks": 1048576, 00:11:56.406 "uuid": "2e5a387c-298f-4c87-b9a6-122dd8e9eaba", 00:11:56.406 "assigned_rate_limits": { 00:11:56.406 "rw_ios_per_sec": 0, 00:11:56.406 "rw_mbytes_per_sec": 0, 00:11:56.406 "r_mbytes_per_sec": 0, 00:11:56.406 "w_mbytes_per_sec": 0 00:11:56.406 }, 00:11:56.406 "claimed": true, 00:11:56.406 "claim_type": "exclusive_write", 00:11:56.406 "zoned": false, 00:11:56.406 "supported_io_types": { 00:11:56.406 "read": true, 00:11:56.406 "write": true, 00:11:56.406 "unmap": true, 00:11:56.406 "flush": true, 00:11:56.406 "reset": true, 00:11:56.406 "nvme_admin": false, 00:11:56.406 "nvme_io": false, 00:11:56.406 "nvme_io_md": false, 00:11:56.406 "write_zeroes": true, 00:11:56.406 "zcopy": true, 00:11:56.406 "get_zone_info": false, 00:11:56.406 "zone_management": false, 00:11:56.406 "zone_append": false, 00:11:56.406 "compare": false, 00:11:56.406 "compare_and_write": 
false, 00:11:56.406 "abort": true, 00:11:56.406 "seek_hole": false, 00:11:56.406 "seek_data": false, 00:11:56.406 "copy": true, 00:11:56.406 "nvme_iov_md": false 00:11:56.406 }, 00:11:56.406 "memory_domains": [ 00:11:56.406 { 00:11:56.406 "dma_device_id": "system", 00:11:56.406 "dma_device_type": 1 00:11:56.406 }, 00:11:56.406 { 00:11:56.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.406 "dma_device_type": 2 00:11:56.406 } 00:11:56.406 ], 00:11:56.406 "driver_specific": {} 00:11:56.406 } 00:11:56.406 ]' 00:11:56.406 02:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:56.406 02:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:56.406 02:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:56.406 02:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:56.406 02:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:56.406 02:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:56.406 02:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:56.406 02:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:57.342 02:32:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:11:57.342 02:32:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:57.342 02:32:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:57.342 02:32:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:57.342 02:32:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:59.241 02:32:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:59.241 02:32:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:59.241 02:32:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:59.241 02:32:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:59.241 02:32:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:59.241 02:32:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:59.241 02:32:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:59.241 02:32:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:59.241 02:32:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:59.241 02:32:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:59.241 02:32:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:59.241 02:32:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:59.241 02:32:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:59.241 02:32:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:59.241 02:32:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:59.241 02:32:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:59.241 02:32:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:59.498 02:32:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:00.064 02:32:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:01.436 02:32:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:12:01.437 02:32:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:01.437 02:32:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:01.437 02:32:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:01.437 02:32:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:01.437 ************************************ 00:12:01.437 START TEST filesystem_ext4 00:12:01.437 ************************************ 00:12:01.437 02:32:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:01.437 02:32:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:01.437 02:32:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:01.437 02:32:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:01.437 02:32:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:12:01.437 02:32:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:01.437 02:32:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:12:01.437 02:32:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:12:01.437 02:32:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:12:01.437 02:32:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:12:01.437 02:32:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:01.437 mke2fs 1.47.0 (5-Feb-2023) 00:12:01.437 Discarding device blocks: 0/522240 done 00:12:01.437 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:01.437 Filesystem UUID: 39d0803b-a0b5-47f1-bb1e-d93317f7f6a7 00:12:01.437 Superblock backups stored on blocks: 00:12:01.437 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:01.437 00:12:01.437 Allocating group tables: 0/64 done 00:12:01.437 Writing inode tables: 0/64 done 00:12:01.437 Creating journal (8192 blocks): done 00:12:02.568 Writing superblocks and filesystem accounting information: 0/64 done 00:12:02.568 00:12:02.568 02:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:12:02.568 02:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:07.830 02:32:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:07.830 02:32:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:12:07.830 02:32:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:07.830 02:32:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:12:07.830 02:32:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:07.830 02:32:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:07.830 02:32:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2894513 00:12:07.830 02:32:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:07.830 02:32:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:07.830 02:32:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:07.830 02:32:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:07.830 00:12:07.830 real 0m6.684s 00:12:07.830 user 0m0.020s 00:12:07.830 sys 0m0.069s 00:12:07.830 02:32:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:07.830 02:32:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:07.830 ************************************ 00:12:07.830 END TEST filesystem_ext4 00:12:07.830 ************************************ 00:12:07.830 02:32:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:07.830 
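The ext4 trace above shows `make_filesystem` picking a force flag per filesystem type: `'[' ext4 = ext4 ']'` selects `-F` (the `mke2fs` spelling), while the later btrfs/xfs runs fall through to `-f`. A minimal sketch of that selection, with the command echoed rather than executed so no device is touched (the device path is a placeholder):

```shell
#!/usr/bin/env bash
# Sketch of the force-flag selection visible in the make_filesystem trace
# (autotest_common.sh @930-@941). The mkfs command is printed, not run.
make_filesystem_cmd() {
    local fstype=$1
    local dev_name=$2
    local force
    if [ "$fstype" = ext4 ]; then
        force=-F          # mke2fs forces with uppercase -F
    else
        force=-f          # mkfs.btrfs / mkfs.xfs force with lowercase -f
    fi
    echo "mkfs.$fstype $force $dev_name"
}

make_filesystem_cmd ext4 /dev/nvme0n1p1    # → mkfs.ext4 -F /dev/nvme0n1p1
make_filesystem_cmd btrfs /dev/nvme0n1p1   # → mkfs.btrfs -f /dev/nvme0n1p1
```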
02:32:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:07.830 02:32:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:07.830 02:32:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:07.830 ************************************ 00:12:07.830 START TEST filesystem_btrfs 00:12:07.830 ************************************ 00:12:07.830 02:32:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:07.830 02:32:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:07.830 02:32:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:07.830 02:32:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:07.830 02:32:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:12:07.830 02:32:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:07.830 02:32:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:12:07.830 02:32:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:12:07.831 02:32:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:12:07.831 02:32:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:12:07.831 02:32:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:08.089 btrfs-progs v6.8.1 00:12:08.089 See https://btrfs.readthedocs.io for more information. 00:12:08.089 00:12:08.089 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:08.089 NOTE: several default settings have changed in version 5.15, please make sure 00:12:08.089 this does not affect your deployments: 00:12:08.089 - DUP for metadata (-m dup) 00:12:08.089 - enabled no-holes (-O no-holes) 00:12:08.089 - enabled free-space-tree (-R free-space-tree) 00:12:08.089 00:12:08.089 Label: (null) 00:12:08.089 UUID: b1a5d1d5-4132-4011-b527-086bcad9bb24 00:12:08.089 Node size: 16384 00:12:08.089 Sector size: 4096 (CPU page size: 4096) 00:12:08.089 Filesystem size: 510.00MiB 00:12:08.089 Block group profiles: 00:12:08.089 Data: single 8.00MiB 00:12:08.089 Metadata: DUP 32.00MiB 00:12:08.089 System: DUP 8.00MiB 00:12:08.089 SSD detected: yes 00:12:08.089 Zoned device: no 00:12:08.089 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:08.089 Checksum: crc32c 00:12:08.089 Number of devices: 1 00:12:08.089 Devices: 00:12:08.089 ID SIZE PATH 00:12:08.089 1 510.00MiB /dev/nvme0n1p1 00:12:08.089 00:12:08.089 02:32:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:12:08.089 02:32:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:09.022 02:32:17 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:09.022 02:32:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:12:09.023 02:32:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:09.023 02:32:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:12:09.023 02:32:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:09.023 02:32:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:09.281 02:32:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2894513 00:12:09.281 02:32:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:09.281 02:32:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:09.281 02:32:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:09.281 02:32:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:09.281 00:12:09.281 real 0m1.242s 00:12:09.281 user 0m0.024s 00:12:09.281 sys 0m0.095s 00:12:09.281 02:32:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:09.281 
02:32:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:09.281 ************************************ 00:12:09.281 END TEST filesystem_btrfs 00:12:09.281 ************************************ 00:12:09.281 02:32:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:12:09.281 02:32:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:09.281 02:32:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:09.281 02:32:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:09.281 ************************************ 00:12:09.281 START TEST filesystem_xfs 00:12:09.281 ************************************ 00:12:09.281 02:32:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:12:09.281 02:32:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:09.281 02:32:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:09.281 02:32:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:09.281 02:32:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:12:09.281 02:32:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
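Each `filesystem_*` test runs the same I/O check traced at `target/filesystem.sh` @23-@30: mount the partition, create a file, sync, remove it, sync, then unmount. A sketch of that sequence using a throwaway temporary directory in place of the real `/dev/nvme0n1p1` mount, so it can run without an NVMe device (`mnt` stands in for `/mnt/device`):

```shell
#!/usr/bin/env bash
# Sketch of the per-filesystem check from target/filesystem.sh @23-@30,
# substituting a temp directory for the real mount of /dev/nvme0n1p1.
mnt=$(mktemp -d)   # stand-in for @23: mount /dev/nvme0n1p1 /mnt/device
touch "$mnt/aaa"   # @24: create a file on the mounted filesystem
sync               # @25: flush it to the backing device
rm "$mnt/aaa"      # @26: remove it again
sync               # @27
i=0                # @29: retry counter for the umount loop
rmdir "$mnt"       # stand-in for @30: umount /mnt/device
```

The real script retries the `umount` (hence `i=0`) because the device can still be busy immediately after the `sync`.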
common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:09.281 02:32:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:12:09.281 02:32:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:12:09.281 02:32:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:12:09.281 02:32:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:12:09.281 02:32:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:09.281 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:09.281 = sectsz=512 attr=2, projid32bit=1 00:12:09.281 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:09.281 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:09.282 data = bsize=4096 blocks=130560, imaxpct=25 00:12:09.282 = sunit=0 swidth=0 blks 00:12:09.282 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:09.282 log =internal log bsize=4096 blocks=16384, version=2 00:12:09.282 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:09.282 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:10.216 Discarding blocks...Done. 
00:12:10.216 02:32:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:10.216 02:32:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:12.744 02:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:12.744 02:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:12:12.744 02:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:12.744 02:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:12:12.744 02:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:12:12.744 02:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:12.744 02:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2894513 00:12:12.744 02:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:12.744 02:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:12.744 02:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:12.744 02:32:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:12.744 00:12:12.744 real 0m3.361s 00:12:12.744 user 0m0.015s 00:12:12.744 sys 0m0.066s 00:12:12.744 02:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:12.744 02:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:12.744 ************************************ 00:12:12.744 END TEST filesystem_xfs 00:12:12.744 ************************************ 00:12:12.744 02:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:12.744 02:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:12.744 02:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:13.004 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:13.004 02:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:13.004 02:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:12:13.004 02:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:13.004 02:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:13.004 02:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:13.004 02:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:13.004 02:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:13.004 02:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:13.004 02:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.004 02:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:13.004 02:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.004 02:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:13.004 02:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2894513 00:12:13.004 02:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2894513 ']' 00:12:13.004 02:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2894513 00:12:13.004 02:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:13.004 02:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:13.004 02:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2894513 00:12:13.004 02:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:13.004 02:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:13.004 02:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2894513' 00:12:13.004 killing process with pid 2894513 00:12:13.004 02:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 2894513 00:12:13.004 02:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 2894513 00:12:15.538 02:32:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:15.538 00:12:15.538 real 0m20.794s 00:12:15.538 user 1m18.749s 00:12:15.538 sys 0m2.675s 00:12:15.538 02:32:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:15.538 02:32:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:15.538 ************************************ 00:12:15.538 END TEST nvmf_filesystem_no_in_capsule 00:12:15.538 ************************************ 00:12:15.538 02:32:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:12:15.538 02:32:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:15.538 02:32:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:15.538 02:32:23 
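The `killprocess 2894513` trace above (autotest_common.sh @954-@978) validates the pid, reads its command name, refuses to signal `sudo`, then kills and reaps it. A sketch of that flow demonstrated on a throwaway `sleep` process rather than the nvmf target; the trace uses `ps --no-headers -o comm=`, while the sketch uses the portable `ps -o comm= -p` form:

```shell
#!/usr/bin/env bash
# Sketch of the killprocess flow traced above, exercised on a background
# sleep instead of the real nvmf_tgt process.
sleep 30 &
pid=$!

killprocess() {
    local p=$1
    [ -n "$p" ] || return 1                 # @954: pid must be non-empty
    kill -0 "$p" 2>/dev/null || return 1    # @958: process must be alive
    local name
    name=$(ps -o comm= -p "$p")             # @960: look up the command name
    [ "$name" != sudo ] || return 1         # @964: never signal sudo itself
    echo "killing process with pid $p"      # @972
    kill "$p"                               # @973: send SIGTERM
    wait "$p" 2>/dev/null                   # @978: reap the child
    return 0
}

killprocess "$pid"
```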
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:15.538 ************************************ 00:12:15.538 START TEST nvmf_filesystem_in_capsule 00:12:15.538 ************************************ 00:12:15.538 02:32:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:12:15.538 02:32:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:12:15.538 02:32:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:15.538 02:32:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:15.538 02:32:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:15.538 02:32:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:15.538 02:32:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2897143 00:12:15.538 02:32:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:15.538 02:32:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2897143 00:12:15.538 02:32:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 2897143 ']' 00:12:15.538 02:32:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:15.538 02:32:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:15.538 02:32:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:15.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:15.538 02:32:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:15.538 02:32:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:15.538 [2024-11-17 02:32:23.894305] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:12:15.538 [2024-11-17 02:32:23.894452] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:15.797 [2024-11-17 02:32:24.039611] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:15.797 [2024-11-17 02:32:24.162612] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:15.797 [2024-11-17 02:32:24.162696] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:15.797 [2024-11-17 02:32:24.162718] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:15.797 [2024-11-17 02:32:24.162739] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:15.797 [2024-11-17 02:32:24.162755] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:15.797 [2024-11-17 02:32:24.165222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:15.797 [2024-11-17 02:32:24.165261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:15.797 [2024-11-17 02:32:24.165305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.797 [2024-11-17 02:32:24.165310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:16.732 02:32:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:16.733 02:32:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:12:16.733 02:32:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:16.733 02:32:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:16.733 02:32:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:16.733 02:32:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:16.733 02:32:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:16.733 02:32:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:12:16.733 02:32:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.733 02:32:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:16.733 [2024-11-17 02:32:24.916969] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:16.733 02:32:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.733 02:32:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:16.733 02:32:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.733 02:32:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:17.299 Malloc1 00:12:17.299 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.299 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:17.299 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.299 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:17.299 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.299 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:17.299 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.299 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:17.299 02:32:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.299 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:17.299 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.299 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:17.299 [2024-11-17 02:32:25.516608] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:17.299 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.299 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:17.299 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:12:17.299 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:12:17.299 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:12:17.299 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:12:17.299 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:17.299 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.299 02:32:25 
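The in-capsule setup traced at `target/filesystem.sh` @52-@56 issues a fixed RPC sequence against the target: create the TCP transport with 4096-byte in-capsule data, create the malloc bdev, create the subsystem, attach the namespace, and add the listener. A sketch where the commands are echoed rather than sent, since they need a running `nvmf_tgt` (the `rpc` wrapper is a stand-in for the real `rpc_cmd`):

```shell
#!/usr/bin/env bash
# Echo-only sketch of the RPC sequence from target/filesystem.sh @52-@56.
rpc() { echo "rpc.py $*"; }

rpc nvmf_create_transport -t tcp -o -u 8192 -c 4096                 # @52: 4096B in-capsule
rpc bdev_malloc_create 512 512 -b Malloc1                           # @53: 512 MiB, 512B blocks
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME  # @54
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1        # @55
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420  # @56
```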
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:17.299 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.299 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:12:17.299 { 00:12:17.299 "name": "Malloc1", 00:12:17.299 "aliases": [ 00:12:17.299 "e30063d4-7efe-4684-ba59-bd7543942395" 00:12:17.299 ], 00:12:17.299 "product_name": "Malloc disk", 00:12:17.299 "block_size": 512, 00:12:17.299 "num_blocks": 1048576, 00:12:17.299 "uuid": "e30063d4-7efe-4684-ba59-bd7543942395", 00:12:17.299 "assigned_rate_limits": { 00:12:17.299 "rw_ios_per_sec": 0, 00:12:17.299 "rw_mbytes_per_sec": 0, 00:12:17.299 "r_mbytes_per_sec": 0, 00:12:17.299 "w_mbytes_per_sec": 0 00:12:17.299 }, 00:12:17.300 "claimed": true, 00:12:17.300 "claim_type": "exclusive_write", 00:12:17.300 "zoned": false, 00:12:17.300 "supported_io_types": { 00:12:17.300 "read": true, 00:12:17.300 "write": true, 00:12:17.300 "unmap": true, 00:12:17.300 "flush": true, 00:12:17.300 "reset": true, 00:12:17.300 "nvme_admin": false, 00:12:17.300 "nvme_io": false, 00:12:17.300 "nvme_io_md": false, 00:12:17.300 "write_zeroes": true, 00:12:17.300 "zcopy": true, 00:12:17.300 "get_zone_info": false, 00:12:17.300 "zone_management": false, 00:12:17.300 "zone_append": false, 00:12:17.300 "compare": false, 00:12:17.300 "compare_and_write": false, 00:12:17.300 "abort": true, 00:12:17.300 "seek_hole": false, 00:12:17.300 "seek_data": false, 00:12:17.300 "copy": true, 00:12:17.300 "nvme_iov_md": false 00:12:17.300 }, 00:12:17.300 "memory_domains": [ 00:12:17.300 { 00:12:17.300 "dma_device_id": "system", 00:12:17.300 "dma_device_type": 1 00:12:17.300 }, 00:12:17.300 { 00:12:17.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.300 "dma_device_type": 2 00:12:17.300 } 00:12:17.300 ], 00:12:17.300 
"driver_specific": {} 00:12:17.300 } 00:12:17.300 ]' 00:12:17.300 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:12:17.300 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:12:17.300 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:12:17.300 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:12:17.300 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:12:17.300 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:12:17.300 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:17.300 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:17.867 02:32:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:17.867 02:32:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:12:17.867 02:32:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:17.867 02:32:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:12:17.867 02:32:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:12:20.395 02:32:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:20.395 02:32:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:20.396 02:32:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:20.396 02:32:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:20.396 02:32:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:20.396 02:32:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:12:20.396 02:32:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:20.396 02:32:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:20.396 02:32:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:20.396 02:32:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:20.396 02:32:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:20.396 02:32:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:20.396 02:32:28 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:20.396 02:32:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:20.396 02:32:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:20.396 02:32:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:20.396 02:32:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:20.396 02:32:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:20.654 02:32:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:22.029 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:12:22.029 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:22.029 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:22.029 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:22.029 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:22.029 ************************************ 00:12:22.029 START TEST filesystem_in_capsule_ext4 00:12:22.029 ************************************ 00:12:22.029 02:32:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:22.029 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:22.029 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:22.029 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:22.029 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:12:22.029 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:22.029 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:12:22.029 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:12:22.029 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:12:22.029 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:12:22.029 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:22.029 mke2fs 1.47.0 (5-Feb-2023) 00:12:22.029 Discarding device blocks: 
0/522240 done 00:12:22.029 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:22.029 Filesystem UUID: 403de15d-5a5f-4251-aa70-5df52a4f5845 00:12:22.029 Superblock backups stored on blocks: 00:12:22.029 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:22.029 00:12:22.029 Allocating group tables: 0/64 done 00:12:22.029 Writing inode tables: 0/64 done 00:12:22.029 Creating journal (8192 blocks): done 00:12:23.219 Writing superblocks and filesystem accounting information: 0/64 done 00:12:23.219 00:12:23.219 02:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:12:23.219 02:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:29.870 02:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:29.870 02:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:12:29.870 02:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:29.870 02:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:12:29.870 02:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:29.870 02:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:29.870 02:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 2897143 00:12:29.870 02:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:29.870 02:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:29.870 02:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:29.870 02:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:29.870 00:12:29.870 real 0m7.168s 00:12:29.870 user 0m0.018s 00:12:29.870 sys 0m0.075s 00:12:29.870 02:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:29.870 02:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:29.870 ************************************ 00:12:29.870 END TEST filesystem_in_capsule_ext4 00:12:29.870 ************************************ 00:12:29.870 02:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:29.870 02:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:29.870 02:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:29.870 02:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:29.870 ************************************ 00:12:29.870 START 
TEST filesystem_in_capsule_btrfs 00:12:29.870 ************************************ 00:12:29.870 02:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:29.870 02:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:29.870 02:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:29.870 02:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:29.870 02:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:12:29.870 02:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:29.870 02:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:12:29.870 02:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:12:29.870 02:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:12:29.870 02:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:12:29.870 02:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:29.870 btrfs-progs v6.8.1 00:12:29.870 See https://btrfs.readthedocs.io for more information. 00:12:29.870 00:12:29.870 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:29.870 NOTE: several default settings have changed in version 5.15, please make sure 00:12:29.870 this does not affect your deployments: 00:12:29.870 - DUP for metadata (-m dup) 00:12:29.870 - enabled no-holes (-O no-holes) 00:12:29.870 - enabled free-space-tree (-R free-space-tree) 00:12:29.870 00:12:29.870 Label: (null) 00:12:29.870 UUID: ce81b9b0-2efd-4144-99b6-039fef9bc800 00:12:29.870 Node size: 16384 00:12:29.870 Sector size: 4096 (CPU page size: 4096) 00:12:29.870 Filesystem size: 510.00MiB 00:12:29.870 Block group profiles: 00:12:29.871 Data: single 8.00MiB 00:12:29.871 Metadata: DUP 32.00MiB 00:12:29.871 System: DUP 8.00MiB 00:12:29.871 SSD detected: yes 00:12:29.871 Zoned device: no 00:12:29.871 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:29.871 Checksum: crc32c 00:12:29.871 Number of devices: 1 00:12:29.871 Devices: 00:12:29.871 ID SIZE PATH 00:12:29.871 1 510.00MiB /dev/nvme0n1p1 00:12:29.871 00:12:29.871 02:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:12:29.871 02:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:29.871 02:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:29.871 02:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:12:29.871 02:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:29.871 02:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:12:29.871 02:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:29.871 02:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:29.871 02:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2897143 00:12:29.871 02:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:29.871 02:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:29.871 02:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:29.871 02:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:29.871 00:12:29.871 real 0m0.646s 00:12:29.871 user 0m0.017s 00:12:29.871 sys 0m0.104s 00:12:29.871 02:32:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:29.871 02:32:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:29.871 ************************************ 00:12:29.871 END TEST filesystem_in_capsule_btrfs 00:12:29.871 ************************************ 00:12:29.871 02:32:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:12:29.871 02:32:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:29.871 02:32:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:29.871 02:32:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:29.871 ************************************ 00:12:29.871 START TEST filesystem_in_capsule_xfs 00:12:29.871 ************************************ 00:12:29.871 02:32:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:12:29.871 02:32:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:29.871 02:32:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:29.871 02:32:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:29.871 02:32:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:12:29.871 02:32:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:29.871 02:32:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:12:29.871 
02:32:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:12:29.871 02:32:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:12:29.871 02:32:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:12:29.871 02:32:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:29.871 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:29.871 = sectsz=512 attr=2, projid32bit=1 00:12:29.871 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:29.871 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:29.871 data = bsize=4096 blocks=130560, imaxpct=25 00:12:29.871 = sunit=0 swidth=0 blks 00:12:29.871 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:29.871 log =internal log bsize=4096 blocks=16384, version=2 00:12:29.871 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:29.871 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:30.805 Discarding blocks...Done. 
00:12:30.805 02:32:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:30.805 02:32:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:33.333 02:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:33.333 02:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:12:33.333 02:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:33.333 02:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:12:33.333 02:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:12:33.333 02:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:33.333 02:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2897143 00:12:33.333 02:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:33.333 02:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:33.333 02:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:12:33.333 02:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:33.333 00:12:33.333 real 0m3.388s 00:12:33.333 user 0m0.018s 00:12:33.333 sys 0m0.058s 00:12:33.333 02:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:33.333 02:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:33.333 ************************************ 00:12:33.333 END TEST filesystem_in_capsule_xfs 00:12:33.333 ************************************ 00:12:33.333 02:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:33.333 02:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:33.333 02:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:33.592 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:33.592 02:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:33.592 02:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:12:33.592 02:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:33.592 02:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:33.592 02:32:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:33.592 02:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:33.592 02:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:33.592 02:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:33.592 02:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.592 02:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:33.592 02:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.592 02:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:33.592 02:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2897143 00:12:33.592 02:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2897143 ']' 00:12:33.592 02:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2897143 00:12:33.592 02:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:33.592 02:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:33.592 02:32:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2897143 00:12:33.592 02:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:33.592 02:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:33.592 02:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2897143' 00:12:33.592 killing process with pid 2897143 00:12:33.592 02:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 2897143 00:12:33.592 02:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 2897143 00:12:36.126 02:32:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:36.126 00:12:36.126 real 0m20.593s 00:12:36.126 user 1m18.060s 00:12:36.126 sys 0m2.534s 00:12:36.126 02:32:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:36.126 02:32:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:36.126 ************************************ 00:12:36.126 END TEST nvmf_filesystem_in_capsule 00:12:36.126 ************************************ 00:12:36.126 02:32:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:36.126 02:32:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:36.126 02:32:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:12:36.126 02:32:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:36.126 02:32:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:12:36.126 02:32:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:36.126 02:32:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:36.126 rmmod nvme_tcp 00:12:36.126 rmmod nvme_fabrics 00:12:36.126 rmmod nvme_keyring 00:12:36.126 02:32:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:36.126 02:32:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:12:36.126 02:32:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:12:36.126 02:32:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:12:36.126 02:32:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:36.126 02:32:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:36.126 02:32:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:36.126 02:32:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:12:36.126 02:32:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:12:36.126 02:32:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:36.126 02:32:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:12:36.126 02:32:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:36.126 02:32:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:36.126 02:32:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:12:36.126 02:32:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:36.126 02:32:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:38.656 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:38.656 00:12:38.656 real 0m46.235s 00:12:38.656 user 2m37.971s 00:12:38.656 sys 0m6.908s 00:12:38.656 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:38.656 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:38.656 ************************************ 00:12:38.656 END TEST nvmf_filesystem 00:12:38.656 ************************************ 00:12:38.656 02:32:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:38.656 02:32:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:38.656 02:32:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:38.656 02:32:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:38.656 ************************************ 00:12:38.656 START TEST nvmf_target_discovery 00:12:38.656 ************************************ 00:12:38.656 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:38.656 * Looking for test storage... 
00:12:38.656 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:38.656 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:38.656 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:12:38.657 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:38.657 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:38.657 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:38.657 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:38.657 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:38.657 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:12:38.657 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:12:38.657 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:12:38.657 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:12:38.657 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:12:38.657 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:12:38.657 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:12:38.657 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:38.657 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:12:38.657 
02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:12:38.657 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:38.657 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:38.657 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:12:38.657 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:12:38.657 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:38.657 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:12:38.657 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:12:38.657 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:12:38.657 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:12:38.657 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:38.657 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:12:38.657 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:12:38.657 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:38.657 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:38.657 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:12:38.657 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:12:38.657 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:38.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:38.657 --rc genhtml_branch_coverage=1 00:12:38.657 --rc genhtml_function_coverage=1 00:12:38.657 --rc genhtml_legend=1 00:12:38.657 --rc geninfo_all_blocks=1 00:12:38.657 --rc geninfo_unexecuted_blocks=1 00:12:38.657 00:12:38.657 ' 00:12:38.657 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:38.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:38.657 --rc genhtml_branch_coverage=1 00:12:38.657 --rc genhtml_function_coverage=1 00:12:38.657 --rc genhtml_legend=1 00:12:38.657 --rc geninfo_all_blocks=1 00:12:38.657 --rc geninfo_unexecuted_blocks=1 00:12:38.657 00:12:38.657 ' 00:12:38.657 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:38.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:38.657 --rc genhtml_branch_coverage=1 00:12:38.657 --rc genhtml_function_coverage=1 00:12:38.657 --rc genhtml_legend=1 00:12:38.657 --rc geninfo_all_blocks=1 00:12:38.657 --rc geninfo_unexecuted_blocks=1 00:12:38.657 00:12:38.657 ' 00:12:38.657 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:38.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:38.657 --rc genhtml_branch_coverage=1 00:12:38.657 --rc genhtml_function_coverage=1 00:12:38.657 --rc genhtml_legend=1 00:12:38.657 --rc geninfo_all_blocks=1 00:12:38.657 --rc geninfo_unexecuted_blocks=1 00:12:38.657 00:12:38.657 ' 00:12:38.657 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:38.657 02:32:46 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:38.657 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:38.657 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:38.657 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:38.657 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:38.657 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:38.657 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:38.657 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:38.657 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:38.657 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:38.657 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:38.657 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:38.657 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:38.657 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:38.657 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:38.657 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:12:38.657 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:38.657 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:38.657 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:12:38.657 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:38.657 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:38.657 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:38.657 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.657 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.657 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.657 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:38.657 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.657 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:12:38.658 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:38.658 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:38.658 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:38.658 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:38.658 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:38.658 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:38.658 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:38.658 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:38.658 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:38.658 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:38.658 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:12:38.658 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:38.658 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:12:38.658 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:38.658 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:38.658 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:38.658 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:38.658 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:38.658 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:38.658 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:38.658 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:38.658 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:38.658 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:38.658 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:38.658 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:38.658 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:12:38.658 02:32:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:40.555 02:32:48 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:40.555 02:32:48 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:40.555 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:40.555 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:40.555 02:32:48 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:40.555 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:40.555 02:32:48 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:40.555 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:40.555 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:40.555 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:40.556 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.300 ms 00:12:40.556 00:12:40.556 --- 10.0.0.2 ping statistics --- 00:12:40.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:40.556 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:12:40.556 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:40.556 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:40.556 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:12:40.556 00:12:40.556 --- 10.0.0.1 ping statistics --- 00:12:40.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:40.556 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:12:40.556 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:40.556 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:12:40.556 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:40.556 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:40.556 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:40.556 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:40.556 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:40.556 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:40.556 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:40.556 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:40.556 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:40.556 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:40.556 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:40.556 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=2901699 00:12:40.556 02:32:48 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:40.556 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 2901699 00:12:40.556 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 2901699 ']' 00:12:40.556 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:40.556 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:40.556 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:40.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:40.556 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:40.556 02:32:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:40.813 [2024-11-17 02:32:49.060057] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:12:40.813 [2024-11-17 02:32:49.060237] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:40.814 [2024-11-17 02:32:49.204692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:41.071 [2024-11-17 02:32:49.341108] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:41.071 [2024-11-17 02:32:49.341203] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:41.071 [2024-11-17 02:32:49.341228] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:41.071 [2024-11-17 02:32:49.341253] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:41.071 [2024-11-17 02:32:49.341277] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:41.071 [2024-11-17 02:32:49.344334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:41.071 [2024-11-17 02:32:49.344405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:41.071 [2024-11-17 02:32:49.344497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:41.071 [2024-11-17 02:32:49.344510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:41.637 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:41.637 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:12:41.637 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:41.637 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:41.637 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:41.896 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:41.896 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:41.896 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.896 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:41.896 [2024-11-17 02:32:50.107886] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:41.896 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.896 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:41.896 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:41.896 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:41.896 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.896 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:41.896 Null1 00:12:41.896 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.896 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:41.896 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.896 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:41.896 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.896 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:41.896 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.896 
02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:41.896 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.896 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:41.896 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.896 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:41.896 [2024-11-17 02:32:50.168464] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:41.896 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.896 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:41.896 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:41.896 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.896 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:41.896 Null2 00:12:41.896 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.896 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:41.896 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.896 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:41.896 
02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.896 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:41.896 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.896 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:41.896 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.896 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:41.896 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.896 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:41.896 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.896 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:41.896 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:41.896 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.896 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:41.896 Null3 00:12:41.896 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.896 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:12:41.896 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.896 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:41.896 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.896 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:41.896 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.896 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:41.896 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.896 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:41.896 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.896 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:41.896 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.896 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:41.896 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:41.897 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.897 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:41.897 Null4 00:12:41.897 
02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.897 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:41.897 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.897 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:41.897 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.897 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:41.897 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.897 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:41.897 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.897 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:41.897 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.897 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:41.897 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.897 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:41.897 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.897 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:41.897 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.897 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:41.897 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.897 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:41.897 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.897 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:12:42.155 00:12:42.155 Discovery Log Number of Records 6, Generation counter 6 00:12:42.155 =====Discovery Log Entry 0====== 00:12:42.155 trtype: tcp 00:12:42.155 adrfam: ipv4 00:12:42.155 subtype: current discovery subsystem 00:12:42.155 treq: not required 00:12:42.155 portid: 0 00:12:42.155 trsvcid: 4420 00:12:42.155 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:42.155 traddr: 10.0.0.2 00:12:42.155 eflags: explicit discovery connections, duplicate discovery information 00:12:42.155 sectype: none 00:12:42.155 =====Discovery Log Entry 1====== 00:12:42.155 trtype: tcp 00:12:42.155 adrfam: ipv4 00:12:42.155 subtype: nvme subsystem 00:12:42.155 treq: not required 00:12:42.155 portid: 0 00:12:42.155 trsvcid: 4420 00:12:42.155 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:42.155 traddr: 10.0.0.2 00:12:42.155 eflags: none 00:12:42.155 sectype: none 00:12:42.155 =====Discovery Log Entry 2====== 00:12:42.155 
trtype: tcp 00:12:42.155 adrfam: ipv4 00:12:42.155 subtype: nvme subsystem 00:12:42.155 treq: not required 00:12:42.155 portid: 0 00:12:42.155 trsvcid: 4420 00:12:42.155 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:42.155 traddr: 10.0.0.2 00:12:42.155 eflags: none 00:12:42.155 sectype: none 00:12:42.155 =====Discovery Log Entry 3====== 00:12:42.155 trtype: tcp 00:12:42.155 adrfam: ipv4 00:12:42.155 subtype: nvme subsystem 00:12:42.155 treq: not required 00:12:42.155 portid: 0 00:12:42.155 trsvcid: 4420 00:12:42.155 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:42.155 traddr: 10.0.0.2 00:12:42.155 eflags: none 00:12:42.155 sectype: none 00:12:42.155 =====Discovery Log Entry 4====== 00:12:42.155 trtype: tcp 00:12:42.155 adrfam: ipv4 00:12:42.155 subtype: nvme subsystem 00:12:42.155 treq: not required 00:12:42.155 portid: 0 00:12:42.155 trsvcid: 4420 00:12:42.155 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:42.155 traddr: 10.0.0.2 00:12:42.155 eflags: none 00:12:42.155 sectype: none 00:12:42.155 =====Discovery Log Entry 5====== 00:12:42.155 trtype: tcp 00:12:42.155 adrfam: ipv4 00:12:42.155 subtype: discovery subsystem referral 00:12:42.155 treq: not required 00:12:42.155 portid: 0 00:12:42.155 trsvcid: 4430 00:12:42.155 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:42.155 traddr: 10.0.0.2 00:12:42.155 eflags: none 00:12:42.155 sectype: none 00:12:42.155 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:42.155 Perform nvmf subsystem discovery via RPC 00:12:42.155 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:42.155 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.155 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:42.155 [ 00:12:42.155 { 00:12:42.155 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:12:42.155 "subtype": "Discovery", 00:12:42.155 "listen_addresses": [ 00:12:42.155 { 00:12:42.155 "trtype": "TCP", 00:12:42.155 "adrfam": "IPv4", 00:12:42.155 "traddr": "10.0.0.2", 00:12:42.155 "trsvcid": "4420" 00:12:42.155 } 00:12:42.155 ], 00:12:42.156 "allow_any_host": true, 00:12:42.156 "hosts": [] 00:12:42.156 }, 00:12:42.156 { 00:12:42.156 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:42.156 "subtype": "NVMe", 00:12:42.156 "listen_addresses": [ 00:12:42.156 { 00:12:42.156 "trtype": "TCP", 00:12:42.156 "adrfam": "IPv4", 00:12:42.156 "traddr": "10.0.0.2", 00:12:42.156 "trsvcid": "4420" 00:12:42.156 } 00:12:42.156 ], 00:12:42.156 "allow_any_host": true, 00:12:42.156 "hosts": [], 00:12:42.156 "serial_number": "SPDK00000000000001", 00:12:42.156 "model_number": "SPDK bdev Controller", 00:12:42.156 "max_namespaces": 32, 00:12:42.156 "min_cntlid": 1, 00:12:42.156 "max_cntlid": 65519, 00:12:42.156 "namespaces": [ 00:12:42.156 { 00:12:42.156 "nsid": 1, 00:12:42.156 "bdev_name": "Null1", 00:12:42.156 "name": "Null1", 00:12:42.156 "nguid": "DE958A47F46B4EF283AB148C69B29390", 00:12:42.156 "uuid": "de958a47-f46b-4ef2-83ab-148c69b29390" 00:12:42.156 } 00:12:42.156 ] 00:12:42.156 }, 00:12:42.156 { 00:12:42.156 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:42.156 "subtype": "NVMe", 00:12:42.156 "listen_addresses": [ 00:12:42.156 { 00:12:42.156 "trtype": "TCP", 00:12:42.156 "adrfam": "IPv4", 00:12:42.156 "traddr": "10.0.0.2", 00:12:42.156 "trsvcid": "4420" 00:12:42.156 } 00:12:42.156 ], 00:12:42.156 "allow_any_host": true, 00:12:42.156 "hosts": [], 00:12:42.156 "serial_number": "SPDK00000000000002", 00:12:42.156 "model_number": "SPDK bdev Controller", 00:12:42.156 "max_namespaces": 32, 00:12:42.156 "min_cntlid": 1, 00:12:42.156 "max_cntlid": 65519, 00:12:42.156 "namespaces": [ 00:12:42.156 { 00:12:42.156 "nsid": 1, 00:12:42.156 "bdev_name": "Null2", 00:12:42.156 "name": "Null2", 00:12:42.156 "nguid": "764A5412941C482BBB279F3205759353", 
00:12:42.156 "uuid": "764a5412-941c-482b-bb27-9f3205759353" 00:12:42.156 } 00:12:42.156 ] 00:12:42.156 }, 00:12:42.156 { 00:12:42.156 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:42.156 "subtype": "NVMe", 00:12:42.156 "listen_addresses": [ 00:12:42.156 { 00:12:42.156 "trtype": "TCP", 00:12:42.156 "adrfam": "IPv4", 00:12:42.156 "traddr": "10.0.0.2", 00:12:42.156 "trsvcid": "4420" 00:12:42.156 } 00:12:42.156 ], 00:12:42.156 "allow_any_host": true, 00:12:42.156 "hosts": [], 00:12:42.156 "serial_number": "SPDK00000000000003", 00:12:42.156 "model_number": "SPDK bdev Controller", 00:12:42.156 "max_namespaces": 32, 00:12:42.156 "min_cntlid": 1, 00:12:42.156 "max_cntlid": 65519, 00:12:42.156 "namespaces": [ 00:12:42.156 { 00:12:42.156 "nsid": 1, 00:12:42.156 "bdev_name": "Null3", 00:12:42.156 "name": "Null3", 00:12:42.156 "nguid": "11910A3480844EDAA7CB03BDADBB131E", 00:12:42.156 "uuid": "11910a34-8084-4eda-a7cb-03bdadbb131e" 00:12:42.156 } 00:12:42.156 ] 00:12:42.156 }, 00:12:42.156 { 00:12:42.156 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:42.156 "subtype": "NVMe", 00:12:42.156 "listen_addresses": [ 00:12:42.156 { 00:12:42.156 "trtype": "TCP", 00:12:42.156 "adrfam": "IPv4", 00:12:42.156 "traddr": "10.0.0.2", 00:12:42.156 "trsvcid": "4420" 00:12:42.156 } 00:12:42.156 ], 00:12:42.156 "allow_any_host": true, 00:12:42.156 "hosts": [], 00:12:42.156 "serial_number": "SPDK00000000000004", 00:12:42.156 "model_number": "SPDK bdev Controller", 00:12:42.156 "max_namespaces": 32, 00:12:42.156 "min_cntlid": 1, 00:12:42.156 "max_cntlid": 65519, 00:12:42.156 "namespaces": [ 00:12:42.156 { 00:12:42.156 "nsid": 1, 00:12:42.156 "bdev_name": "Null4", 00:12:42.156 "name": "Null4", 00:12:42.156 "nguid": "D79ED16AECD543728C137C31AFC0DEC6", 00:12:42.156 "uuid": "d79ed16a-ecd5-4372-8c13-7c31afc0dec6" 00:12:42.156 } 00:12:42.156 ] 00:12:42.156 } 00:12:42.156 ] 00:12:42.156 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.156 
02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:42.156 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:42.156 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:42.156 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.156 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:42.156 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.156 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:42.156 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.156 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:42.156 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.156 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:42.156 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:42.156 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.156 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:42.156 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.156 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:12:42.156 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.156 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:42.156 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.156 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:42.156 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:42.156 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.156 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:42.156 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.156 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:42.156 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.156 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:42.156 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.156 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:42.156 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:42.156 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.156 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:12:42.156 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.156 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:42.156 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.156 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:42.156 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.156 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:42.156 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.156 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:42.156 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.156 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:42.156 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:42.156 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.156 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:42.156 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.415 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:42.415 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@50 -- # '[' -n '' ']' 00:12:42.415 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:42.415 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:42.415 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:42.415 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:12:42.415 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:42.415 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:12:42.415 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:42.415 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:42.415 rmmod nvme_tcp 00:12:42.415 rmmod nvme_fabrics 00:12:42.415 rmmod nvme_keyring 00:12:42.415 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:42.415 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:12:42.415 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:12:42.415 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 2901699 ']' 00:12:42.415 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 2901699 00:12:42.415 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 2901699 ']' 00:12:42.415 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 2901699 00:12:42.415 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 
00:12:42.415 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:42.415 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2901699 00:12:42.415 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:42.415 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:42.415 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2901699' 00:12:42.415 killing process with pid 2901699 00:12:42.415 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 2901699 00:12:42.415 02:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 2901699 00:12:43.793 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:43.793 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:43.793 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:43.793 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:12:43.793 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:12:43.793 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:43.793 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:12:43.793 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:43.793 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:12:43.793 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:43.793 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:43.793 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:45.701 02:32:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:45.701 00:12:45.701 real 0m7.324s 00:12:45.701 user 0m10.056s 00:12:45.701 sys 0m2.086s 00:12:45.701 02:32:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:45.701 02:32:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:45.701 ************************************ 00:12:45.701 END TEST nvmf_target_discovery 00:12:45.701 ************************************ 00:12:45.701 02:32:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:45.701 02:32:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:45.701 02:32:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:45.701 02:32:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:45.701 ************************************ 00:12:45.701 START TEST nvmf_referrals 00:12:45.702 ************************************ 00:12:45.702 02:32:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:45.702 * Looking for test storage... 
00:12:45.702 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:45.702 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:45.702 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:12:45.702 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:45.702 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:45.702 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:45.702 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:45.702 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:45.702 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:12:45.702 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:12:45.702 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:12:45.702 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:12:45.702 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:12:45.702 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:12:45.702 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:12:45.702 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:45.702 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:12:45.702 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:12:45.702 02:32:54 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:45.702 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:45.702 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:12:45.702 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:12:45.702 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:45.702 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:12:45.702 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:12:45.702 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:12:45.702 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:12:45.702 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:45.702 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:12:45.702 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:12:45.702 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:45.702 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:45.702 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:12:45.702 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:45.702 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:45.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:45.702 
--rc genhtml_branch_coverage=1 00:12:45.702 --rc genhtml_function_coverage=1 00:12:45.702 --rc genhtml_legend=1 00:12:45.702 --rc geninfo_all_blocks=1 00:12:45.702 --rc geninfo_unexecuted_blocks=1 00:12:45.702 00:12:45.702 ' 00:12:45.702 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:45.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:45.702 --rc genhtml_branch_coverage=1 00:12:45.702 --rc genhtml_function_coverage=1 00:12:45.702 --rc genhtml_legend=1 00:12:45.702 --rc geninfo_all_blocks=1 00:12:45.702 --rc geninfo_unexecuted_blocks=1 00:12:45.702 00:12:45.702 ' 00:12:45.702 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:45.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:45.702 --rc genhtml_branch_coverage=1 00:12:45.702 --rc genhtml_function_coverage=1 00:12:45.702 --rc genhtml_legend=1 00:12:45.702 --rc geninfo_all_blocks=1 00:12:45.702 --rc geninfo_unexecuted_blocks=1 00:12:45.702 00:12:45.702 ' 00:12:45.702 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:45.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:45.702 --rc genhtml_branch_coverage=1 00:12:45.702 --rc genhtml_function_coverage=1 00:12:45.702 --rc genhtml_legend=1 00:12:45.702 --rc geninfo_all_blocks=1 00:12:45.702 --rc geninfo_unexecuted_blocks=1 00:12:45.702 00:12:45.702 ' 00:12:45.702 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:45.702 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:12:45.702 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:45.702 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:45.702 
02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:45.702 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:45.702 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:45.702 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:45.702 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:45.702 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:45.702 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:45.702 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:45.702 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:45.702 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:45.702 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:45.702 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:45.702 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:45.702 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:45.702 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:45.702 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:12:45.702 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:45.702 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:45.702 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:45.702 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.702 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.702 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.702 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:45.702 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.702 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:12:45.702 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:45.702 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:45.702 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:45.702 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:45.702 02:32:54 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:45.702 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:45.702 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:45.702 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:45.703 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:45.703 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:45.703 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:45.703 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:12:45.703 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:45.703 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:45.703 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:45.703 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:45.703 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:45.703 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:45.703 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:45.703 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:45.703 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:45.703 02:32:54 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:45.703 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:45.703 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:45.703 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:45.703 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:45.703 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:45.703 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:12:45.703 02:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:48.237 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:48.237 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:12:48.237 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:48.237 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:48.237 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:48.237 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:48.237 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:48.237 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:12:48.237 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:48.237 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:12:48.237 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:12:48.237 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:12:48.237 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:12:48.237 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:12:48.237 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:12:48.237 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:48.237 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:48.237 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:48.237 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:48.237 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:48.237 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:48.237 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:48.237 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:48.237 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:48.237 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:48.237 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:48.237 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:48.237 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:48.237 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:48.237 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:48.237 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:48.237 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:48.237 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:48.237 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:48.237 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:48.237 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:48.237 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:48.237 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:48.237 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:48.237 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:48.237 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:48.237 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:48.237 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:48.237 Found 
0000:0a:00.1 (0x8086 - 0x159b) 00:12:48.237 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:48.237 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:48.237 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:48.237 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:48.237 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:48.237 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:48.237 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:48.237 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:48.237 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:48.237 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:48.237 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:48.237 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:48.237 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:48.237 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:48.237 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:48.237 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:48.237 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:48.237 02:32:56 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:48.237 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:48.237 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:48.237 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:48.237 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:48.237 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:48.237 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:48.237 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:48.237 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:48.237 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:48.237 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:48.237 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:48.237 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:12:48.237 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:48.237 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:48.237 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:48.237 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:48.238 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:48.238 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:48.238 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:48.238 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:48.238 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:48.238 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:48.238 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:48.238 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:48.238 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:48.238 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:48.238 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:48.238 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:48.238 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:48.238 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:48.238 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:48.238 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:48.238 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:48.238 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:48.238 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:48.238 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:48.238 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:48.238 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:48.238 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:48.238 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.332 ms 00:12:48.238 00:12:48.238 --- 10.0.0.2 ping statistics --- 00:12:48.238 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:48.238 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:12:48.238 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:48.238 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:48.238 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:12:48.238 00:12:48.238 --- 10.0.0.1 ping statistics --- 00:12:48.238 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:48.238 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:12:48.238 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:48.238 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:12:48.238 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:48.238 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:48.238 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:48.238 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:48.238 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:48.238 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:48.238 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:48.238 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:48.238 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:48.238 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:48.238 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:48.238 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=2904056 00:12:48.238 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:48.238 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 2904056 00:12:48.238 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 2904056 ']' 00:12:48.238 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:48.238 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:48.238 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:48.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:48.238 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:48.238 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:48.238 [2024-11-17 02:32:56.520241] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:12:48.238 [2024-11-17 02:32:56.520407] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:48.238 [2024-11-17 02:32:56.678943] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:48.497 [2024-11-17 02:32:56.826043] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:48.497 [2024-11-17 02:32:56.826144] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:48.497 [2024-11-17 02:32:56.826171] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:48.497 [2024-11-17 02:32:56.826195] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:48.497 [2024-11-17 02:32:56.826214] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:48.497 [2024-11-17 02:32:56.829254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:48.497 [2024-11-17 02:32:56.829312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:48.497 [2024-11-17 02:32:56.829382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:48.497 [2024-11-17 02:32:56.829389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:49.432 02:32:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:49.432 02:32:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:12:49.432 02:32:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:49.432 02:32:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:49.432 02:32:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:49.432 02:32:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:49.432 02:32:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:49.432 02:32:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.432 02:32:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:49.432 [2024-11-17 02:32:57.554093] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:49.432 02:32:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.432 02:32:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:49.432 02:32:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.432 02:32:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:49.432 [2024-11-17 02:32:57.580943] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:49.432 02:32:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.432 02:32:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:49.432 02:32:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.432 02:32:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:49.432 02:32:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.432 02:32:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:49.432 02:32:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.432 02:32:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:49.432 02:32:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.432 02:32:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:49.432 02:32:57 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.432 02:32:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:49.432 02:32:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.432 02:32:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:49.432 02:32:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:49.433 02:32:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.433 02:32:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:49.433 02:32:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.433 02:32:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:49.433 02:32:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:49.433 02:32:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:49.433 02:32:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:49.433 02:32:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.433 02:32:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:49.433 02:32:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:49.433 02:32:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:49.433 02:32:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.433 02:32:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:49.433 02:32:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:49.433 02:32:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:49.433 02:32:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:49.433 02:32:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:49.433 02:32:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:49.433 02:32:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:49.433 02:32:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:49.433 02:32:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:49.433 02:32:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:49.433 02:32:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:49.433 02:32:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.433 02:32:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:49.433 02:32:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.433 02:32:57 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:49.433 02:32:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.433 02:32:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:49.433 02:32:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.433 02:32:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:49.433 02:32:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.433 02:32:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:49.433 02:32:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.433 02:32:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:49.433 02:32:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:49.433 02:32:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.433 02:32:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:49.433 02:32:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.691 02:32:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:49.691 02:32:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:49.691 02:32:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:49.691 02:32:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:12:49.691 02:32:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:49.691 02:32:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:49.691 02:32:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:49.949 02:32:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:49.949 02:32:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:49.949 02:32:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:49.949 02:32:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.949 02:32:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:49.949 02:32:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.949 02:32:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:49.949 02:32:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.949 02:32:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:49.949 02:32:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.949 02:32:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:49.949 02:32:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:49.949 02:32:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:49.949 02:32:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:49.949 02:32:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.949 02:32:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:49.949 02:32:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:49.949 02:32:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.949 02:32:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:49.949 02:32:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:49.949 02:32:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:49.949 02:32:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:49.949 02:32:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:49.949 02:32:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:49.949 02:32:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:49.949 02:32:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:49.949 02:32:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:49.949 02:32:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:49.949 02:32:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:49.949 02:32:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:49.949 02:32:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:49.949 02:32:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:49.949 02:32:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:50.208 02:32:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:50.208 02:32:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:50.208 02:32:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:50.208 02:32:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:50.208 02:32:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:50.208 02:32:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:12:50.466 02:32:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:50.466 02:32:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:50.466 02:32:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.466 02:32:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:50.466 02:32:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.466 02:32:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:50.466 02:32:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:50.466 02:32:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:50.466 02:32:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.466 02:32:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:50.466 02:32:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:50.466 02:32:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:50.466 02:32:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.466 02:32:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:50.466 02:32:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:50.466 02:32:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:50.466 02:32:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:50.466 02:32:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:50.467 02:32:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:50.467 02:32:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:50.467 02:32:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:50.724 02:32:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:50.724 02:32:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:50.724 02:32:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:50.724 02:32:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:50.724 02:32:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:50.724 02:32:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:50.724 02:32:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:50.724 02:32:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:50.724 02:32:59 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:50.724 02:32:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:50.724 02:32:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:50.724 02:32:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:50.724 02:32:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:50.982 02:32:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:50.982 02:32:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:50.982 02:32:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.982 02:32:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:50.982 02:32:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.982 02:32:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:50.982 02:32:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.982 02:32:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:50.982 02:32:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:12:50.982 02:32:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.982 02:32:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:50.982 02:32:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:50.982 02:32:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:50.982 02:32:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:50.982 02:32:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:50.982 02:32:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:50.982 02:32:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:51.240 02:32:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:51.241 02:32:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:51.241 02:32:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:51.241 02:32:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:51.241 02:32:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:51.241 02:32:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:12:51.241 02:32:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:51.241 02:32:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # 
set +e 00:12:51.241 02:32:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:51.241 02:32:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:51.241 rmmod nvme_tcp 00:12:51.241 rmmod nvme_fabrics 00:12:51.241 rmmod nvme_keyring 00:12:51.241 02:32:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:51.241 02:32:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:12:51.241 02:32:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:12:51.241 02:32:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 2904056 ']' 00:12:51.241 02:32:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 2904056 00:12:51.241 02:32:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 2904056 ']' 00:12:51.241 02:32:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 2904056 00:12:51.241 02:32:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:12:51.241 02:32:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:51.241 02:32:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2904056 00:12:51.241 02:32:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:51.241 02:32:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:51.241 02:32:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2904056' 00:12:51.241 killing process with pid 2904056 00:12:51.241 02:32:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@973 -- # kill 2904056 00:12:51.241 02:32:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 2904056 00:12:52.617 02:33:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:52.617 02:33:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:52.617 02:33:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:52.617 02:33:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:12:52.617 02:33:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:12:52.617 02:33:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:52.617 02:33:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:12:52.617 02:33:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:52.617 02:33:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:52.617 02:33:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:52.617 02:33:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:52.617 02:33:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:54.525 02:33:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:54.525 00:12:54.525 real 0m8.878s 00:12:54.525 user 0m16.491s 00:12:54.525 sys 0m2.572s 00:12:54.525 02:33:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:54.525 02:33:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:54.525 
************************************ 00:12:54.525 END TEST nvmf_referrals 00:12:54.525 ************************************ 00:12:54.525 02:33:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:54.525 02:33:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:54.525 02:33:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:54.525 02:33:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:54.525 ************************************ 00:12:54.525 START TEST nvmf_connect_disconnect 00:12:54.525 ************************************ 00:12:54.525 02:33:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:54.525 * Looking for test storage... 
00:12:54.525 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:54.525 02:33:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:54.525 02:33:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:12:54.525 02:33:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:54.786 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:54.786 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:54.786 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:54.786 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:54.786 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:54.786 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:54.786 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:54.786 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:54.786 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:54.786 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:54.786 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:54.786 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:54.786 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:12:54.786 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:54.786 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:54.786 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:54.786 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:54.786 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:54.786 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:54.786 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:54.786 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:54.786 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:54.786 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:54.786 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:54.786 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:54.786 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:54.787 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:54.787 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:54.787 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:54.787 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:54.787 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:54.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.787 --rc genhtml_branch_coverage=1 00:12:54.787 --rc genhtml_function_coverage=1 00:12:54.787 --rc genhtml_legend=1 00:12:54.787 --rc geninfo_all_blocks=1 00:12:54.787 --rc geninfo_unexecuted_blocks=1 00:12:54.787 00:12:54.787 ' 00:12:54.787 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:54.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.787 --rc genhtml_branch_coverage=1 00:12:54.787 --rc genhtml_function_coverage=1 00:12:54.787 --rc genhtml_legend=1 00:12:54.787 --rc geninfo_all_blocks=1 00:12:54.787 --rc geninfo_unexecuted_blocks=1 00:12:54.787 00:12:54.787 ' 00:12:54.787 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:54.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.787 --rc genhtml_branch_coverage=1 00:12:54.787 --rc genhtml_function_coverage=1 00:12:54.787 --rc genhtml_legend=1 00:12:54.787 --rc geninfo_all_blocks=1 00:12:54.787 --rc geninfo_unexecuted_blocks=1 00:12:54.787 00:12:54.787 ' 00:12:54.787 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:54.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.787 --rc genhtml_branch_coverage=1 00:12:54.787 --rc genhtml_function_coverage=1 00:12:54.787 --rc genhtml_legend=1 00:12:54.787 --rc geninfo_all_blocks=1 00:12:54.787 --rc geninfo_unexecuted_blocks=1 00:12:54.787 00:12:54.787 ' 00:12:54.787 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:54.787 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:54.787 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:54.787 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:54.787 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:54.787 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:54.787 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:54.787 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:54.787 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:54.787 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:54.787 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:54.787 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:54.787 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:54.787 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:54.787 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:54.787 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:12:54.787 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:54.787 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:54.787 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:54.787 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:54.787 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:54.787 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:54.787 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:54.787 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.787 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.787 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.787 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:54.787 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.787 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:54.787 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:54.787 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:54.787 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:54.787 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:54.787 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:54.787 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:54.787 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:54.787 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:54.787 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:54.787 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:54.787 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:54.787 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:54.787 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:54.787 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:54.787 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:54.787 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:54.787 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:54.787 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:54.787 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:54.787 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:54.787 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:54.787 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:54.787 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:54.787 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:12:54.787 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:56.685 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:56.685 02:33:05 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:56.685 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:56.685 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:56.685 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:56.685 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:56.685 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:56.685 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:12:56.685 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:56.685 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:56.685 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:56.686 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:56.686 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:56.686 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:56.686 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:56.686 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:56.686 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:56.686 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:56.686 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:56.686 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:56.686 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:56.686 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:56.686 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:56.686 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:56.686 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:56.686 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:56.686 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:56.686 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:56.686 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:56.686 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:56.686 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:56.686 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:56.686 02:33:05 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:56.686 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:56.686 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:56.686 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:56.686 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:56.686 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:56.686 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:56.686 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:56.686 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:56.686 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:56.686 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:56.686 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:56.686 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:56.686 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:56.686 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:56.686 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:56.686 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:56.686 02:33:05 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:56.686 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:56.686 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:56.686 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:56.686 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:56.686 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:56.686 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:56.686 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:56.686 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:56.686 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:56.686 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:56.686 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:56.686 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:56.686 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:56.686 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:56.686 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:56.686 02:33:05 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:56.686 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:56.686 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:56.686 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:56.686 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:56.686 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:56.686 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:56.686 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:56.686 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:12:56.686 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:56.686 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:56.686 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:56.686 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:56.686 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:56.686 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:56.686 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:56.686 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:56.686 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:56.686 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:56.686 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:56.686 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:56.686 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:56.686 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:56.686 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:56.686 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:56.686 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:56.686 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:56.686 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:56.945 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:56.945 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:56.945 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:56.945 02:33:05 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:56.945 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:56.945 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:56.945 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:56.945 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:56.945 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:12:56.945 00:12:56.945 --- 10.0.0.2 ping statistics --- 00:12:56.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:56.945 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:12:56.945 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:56.945 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:56.945 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:12:56.945 00:12:56.945 --- 10.0.0.1 ping statistics --- 00:12:56.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:56.945 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:12:56.945 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:56.945 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:12:56.945 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:56.945 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:56.945 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:56.945 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:56.945 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:56.945 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:56.945 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:56.945 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:56.945 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:56.945 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:56.945 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:56.945 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # 
nvmfpid=2906612 00:12:56.945 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:56.945 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 2906612 00:12:56.945 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 2906612 ']' 00:12:56.945 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:56.945 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:56.945 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:56.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:56.945 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:56.945 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:56.945 [2024-11-17 02:33:05.328773] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:12:56.945 [2024-11-17 02:33:05.328928] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:57.204 [2024-11-17 02:33:05.490795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:57.204 [2024-11-17 02:33:05.637408] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:57.204 [2024-11-17 02:33:05.637501] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:57.204 [2024-11-17 02:33:05.637526] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:57.204 [2024-11-17 02:33:05.637550] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:57.204 [2024-11-17 02:33:05.637569] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:57.204 [2024-11-17 02:33:05.640477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:57.204 [2024-11-17 02:33:05.640537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:57.204 [2024-11-17 02:33:05.640569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:57.204 [2024-11-17 02:33:05.640563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:58.134 02:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:58.134 02:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:12:58.134 02:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:58.134 02:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:58.134 02:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:58.134 02:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:58.134 02:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:58.134 02:33:06 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.134 02:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:58.134 [2024-11-17 02:33:06.331164] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:58.134 02:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.134 02:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:58.134 02:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.134 02:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:58.134 02:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.134 02:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:58.134 02:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:58.134 02:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.134 02:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:58.134 02:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.134 02:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:58.134 02:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.134 02:33:06 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:58.134 02:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.134 02:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:58.134 02:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.134 02:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:58.134 [2024-11-17 02:33:06.454368] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:58.134 02:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.134 02:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:12:58.134 02:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:12:58.134 02:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:12:58.134 02:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:13:00.692 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:03.241 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:05.769 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:07.668 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:10.197 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:12.724 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:15.253 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:17.152 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:19.762 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:21.662 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:24.190 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:26.715 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:28.611 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:31.230 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:33.754 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:35.650 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:38.173 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:40.696 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:43.218 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:45.113 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:47.639 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:50.163 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:52.687 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:54.583 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:57.105 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:59.635 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:02.157 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:04.052 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:06.578 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:09.102 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:11.628 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:14.152 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:16.050 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:18.576 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:21.102 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:23.000 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:25.525 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:28.052 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:30.582 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:32.598 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:35.126 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:37.649 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:39.545 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:42.074 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:44.599 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:46.494 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:49.020 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:51.546 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:53.445 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:55.973 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:58.501 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:01.028 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:02.926 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:05.452 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:08.035 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:10.561 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:12.460 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:14.990 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:17.519 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:19.417 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:21.944 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:24.473 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:26.369 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:28.896 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:31.424 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:33.323 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:35.852 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:38.379 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:40.918 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:42.823 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:45.349 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:47.875 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:49.773 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:52.301 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:54.831 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:57.428 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:59.326 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:01.853 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:03.750 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:06.275 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:08.801 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:11.327 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:13.856 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:15.755 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:18.282 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:20.808 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:23.334 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:25.232 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:27.758 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:53.808 02:37:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:16:53.808 02:37:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:16:53.808 02:37:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:53.808 02:37:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:16:53.808 02:37:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:53.808 02:37:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:16:53.808 02:37:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:53.808 02:37:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:53.808 rmmod nvme_tcp 00:16:53.808 rmmod nvme_fabrics 00:16:53.808 rmmod nvme_keyring 00:16:53.808 02:37:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:16:53.808 02:37:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:16:53.808 02:37:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:16:53.808 02:37:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 2906612 ']' 00:16:53.808 02:37:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 2906612 00:16:53.808 02:37:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2906612 ']' 00:16:53.808 02:37:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 2906612 00:16:53.808 02:37:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:16:53.808 02:37:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:53.808 02:37:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2906612 00:16:53.808 02:37:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:53.808 02:37:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:53.808 02:37:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2906612' 00:16:53.808 killing process with pid 2906612 00:16:53.808 02:37:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 2906612 00:16:53.808 02:37:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 2906612 00:16:55.183 02:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:55.183 02:37:03 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:55.183 02:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:55.183 02:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:16:55.183 02:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:16:55.183 02:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:55.183 02:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:16:55.183 02:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:55.183 02:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:55.183 02:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:55.183 02:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:55.183 02:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:57.090 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:57.090 00:16:57.090 real 4m2.426s 00:16:57.090 user 15m16.461s 00:16:57.090 sys 0m39.364s 00:16:57.090 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:57.090 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:57.090 ************************************ 00:16:57.090 END TEST nvmf_connect_disconnect 00:16:57.090 ************************************ 00:16:57.090 02:37:05 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:57.090 02:37:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:57.090 02:37:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:57.090 02:37:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:57.090 ************************************ 00:16:57.090 START TEST nvmf_multitarget 00:16:57.090 ************************************ 00:16:57.090 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:57.090 * Looking for test storage... 00:16:57.090 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:57.090 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:57.090 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:16:57.090 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:57.090 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:57.090 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:57.090 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:57.090 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:57.090 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:16:57.090 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 
-- # read -ra ver1 00:16:57.090 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:16:57.090 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:16:57.090 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:16:57.090 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:16:57.090 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:16:57.090 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:57.090 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:16:57.090 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:16:57.090 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:57.090 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:57.090 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:16:57.090 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:16:57.090 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:57.090 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:16:57.090 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:16:57.090 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:16:57.090 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:16:57.090 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:57.090 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:16:57.090 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:16:57.090 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:57.090 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:57.090 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:16:57.090 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:57.090 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:57.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:57.090 --rc genhtml_branch_coverage=1 00:16:57.090 --rc genhtml_function_coverage=1 00:16:57.090 --rc genhtml_legend=1 00:16:57.090 --rc geninfo_all_blocks=1 00:16:57.090 --rc 
geninfo_unexecuted_blocks=1 00:16:57.090 00:16:57.090 ' 00:16:57.090 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:57.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:57.090 --rc genhtml_branch_coverage=1 00:16:57.090 --rc genhtml_function_coverage=1 00:16:57.090 --rc genhtml_legend=1 00:16:57.090 --rc geninfo_all_blocks=1 00:16:57.090 --rc geninfo_unexecuted_blocks=1 00:16:57.090 00:16:57.090 ' 00:16:57.090 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:57.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:57.090 --rc genhtml_branch_coverage=1 00:16:57.090 --rc genhtml_function_coverage=1 00:16:57.090 --rc genhtml_legend=1 00:16:57.090 --rc geninfo_all_blocks=1 00:16:57.090 --rc geninfo_unexecuted_blocks=1 00:16:57.090 00:16:57.090 ' 00:16:57.091 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:57.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:57.091 --rc genhtml_branch_coverage=1 00:16:57.091 --rc genhtml_function_coverage=1 00:16:57.091 --rc genhtml_legend=1 00:16:57.091 --rc geninfo_all_blocks=1 00:16:57.091 --rc geninfo_unexecuted_blocks=1 00:16:57.091 00:16:57.091 ' 00:16:57.091 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:57.091 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:16:57.091 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:57.091 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:57.091 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:57.091 02:37:05 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:57.091 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:57.091 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:57.091 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:57.091 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:57.091 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:57.091 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:57.091 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:57.091 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:57.091 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:57.091 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:57.091 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:57.091 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:57.091 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:57.091 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:16:57.091 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:16:57.091 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:57.091 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:57.091 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.091 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.091 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.091 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:16:57.091 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.091 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:16:57.091 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:57.091 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:57.091 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:57.091 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:57.091 02:37:05 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:57.091 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:57.091 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:57.091 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:57.091 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:57.091 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:57.091 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:57.091 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:16:57.091 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:57.091 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:57.091 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:57.091 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:57.091 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:57.091 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:57.091 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:57.091 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:57.091 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:57.091 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:57.091 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:16:57.091 02:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:59.625 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:59.625 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:16:59.625 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:59.625 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@322 -- # local -ga mlx 00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # 
[[ e810 == mlx5 ]] 00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:59.626 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:59.626 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:59.626 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- 
# [[ tcp == tcp ]] 00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:59.626 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:59.626 02:37:07 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:16:59.626 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:16:59.626 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.268 ms
00:16:59.626
00:16:59.626 --- 10.0.0.2 ping statistics ---
00:16:59.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:59.626 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms
00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:16:59.626 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:16:59.626 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms
00:16:59.626
00:16:59.626 --- 10.0.0.1 ping statistics ---
00:16:59.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:59.626 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms
00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:16:59.626 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0
00:16:59.627 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:16:59.627 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:16:59.627 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:16:59.627 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:16:59.627 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:16:59.627 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:16:59.627 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:16:59.627 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF
00:16:59.627 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:16:59.627 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable
00:16:59.627 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x
00:16:59.627 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=2938868
00:16:59.627 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:16:59.627 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 2938868
00:16:59.627 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 2938868 ']'
00:16:59.627 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:59.627 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100
00:16:59.627 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:59.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:59.627 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable
00:16:59.627 02:37:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x
00:16:59.627 [2024-11-17 02:37:07.918793] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization...
00:16:59.627 [2024-11-17 02:37:07.918931] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:16:59.627 [2024-11-17 02:37:08.071141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:16:59.885 [2024-11-17 02:37:08.199326] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:16:59.885 [2024-11-17 02:37:08.199404] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:16:59.885 [2024-11-17 02:37:08.199426] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:16:59.885 [2024-11-17 02:37:08.199446] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:16:59.885 [2024-11-17 02:37:08.199462] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:16:59.885 [2024-11-17 02:37:08.202191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:16:59.885 [2024-11-17 02:37:08.202274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:16:59.885 [2024-11-17 02:37:08.202305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:16:59.885 [2024-11-17 02:37:08.202314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:17:00.818 02:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:17:00.818 02:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0
00:17:00.818 02:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:17:00.818 02:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable
00:17:00.818 02:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x
00:17:00.818 02:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:17:00.818 02:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT
00:17:00.818 02:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets
00:17:00.818 02:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length
00:17:00.818 02:37:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']'
00:17:00.818 02:37:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32
00:17:00.818 "nvmf_tgt_1"
00:17:00.818 02:37:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32
00:17:01.076 "nvmf_tgt_2"
00:17:01.076 02:37:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets
00:17:01.076 02:37:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length
00:17:01.076 02:37:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']'
00:17:01.076 02:37:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1
00:17:01.334 true
00:17:01.334 02:37:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2
00:17:01.334 true
00:17:01.334 02:37:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets
00:17:01.334 02:37:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length
00:17:01.592 02:37:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']'
00:17:01.592 02:37:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT
00:17:01.592 02:37:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini
00:17:01.592 02:37:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup
00:17:01.592 02:37:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync
00:17:01.592 02:37:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:17:01.592 02:37:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e
00:17:01.592 02:37:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20}
00:17:01.592 02:37:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:17:01.592 rmmod nvme_tcp
00:17:01.592 rmmod nvme_fabrics
00:17:01.592 rmmod nvme_keyring
00:17:01.592 02:37:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:17:01.592 02:37:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e
00:17:01.592 02:37:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0
00:17:01.592 02:37:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 2938868 ']'
00:17:01.592 02:37:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 2938868
00:17:01.592 02:37:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 2938868 ']'
00:17:01.592 02:37:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 2938868
00:17:01.592 02:37:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname
00:17:01.592 02:37:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:01.593 02:37:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2938868
00:17:01.593 02:37:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:17:01.593 02:37:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:17:01.593 02:37:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2938868'
00:17:01.593 killing process with pid 2938868
00:17:01.593 02:37:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 2938868
00:17:01.593 02:37:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 2938868
00:17:02.968 02:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:17:02.968 02:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:17:02.968 02:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:17:02.968 02:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr
00:17:02.968 02:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save
00:17:02.968 02:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:17:02.968 02:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore
00:17:02.968 02:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:17:02.968 02:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns
00:17:02.968 02:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:17:02.968 02:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:17:02.968 02:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:17:04.872 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:17:04.872 real 0m7.719s
00:17:04.872 user 0m12.581s
00:17:04.872 sys 0m2.240s
00:17:04.872 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:04.872 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x
00:17:04.872 ************************************
00:17:04.872 END TEST nvmf_multitarget
00:17:04.872 ************************************
00:17:04.872 02:37:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp
00:17:04.872 02:37:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:17:04.872 02:37:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:04.872 02:37:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:17:04.872 ************************************
00:17:04.872 START TEST nvmf_rpc
00:17:04.872 ************************************
00:17:04.872 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp
00:17:04.872 * Looking for test storage...
00:17:04.872 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:17:04.872 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:17:04.872 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version
00:17:04.872 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:17:04.872 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:17:04.872 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:17:04.872 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:17:04.872 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:17:04.872 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:17:04.872 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:17:04.872 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:17:04.872 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:17:04.872 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:17:04.872 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:17:04.872 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:17:04.872 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:17:04.872 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in
00:17:04.872 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1
00:17:04.872 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:17:04.872 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:17:04.872 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1
00:17:04.872 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1
00:17:04.872 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:17:04.872 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1
00:17:04.872 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:17:04.872 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2
00:17:04.872 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2
00:17:04.872 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:17:04.872 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2
00:17:04.872 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:17:04.872 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:17:04.872 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:17:04.873 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0
00:17:04.873 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:17:04.873 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:17:04.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:04.873 --rc genhtml_branch_coverage=1
00:17:04.873 --rc genhtml_function_coverage=1
00:17:04.873 --rc genhtml_legend=1
00:17:04.873 --rc geninfo_all_blocks=1
00:17:04.873 --rc geninfo_unexecuted_blocks=1
00:17:04.873
00:17:04.873 '
00:17:04.873 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:17:04.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:04.873 --rc genhtml_branch_coverage=1
00:17:04.873 --rc genhtml_function_coverage=1
00:17:04.873 --rc genhtml_legend=1
00:17:04.873 --rc geninfo_all_blocks=1
00:17:04.873 --rc geninfo_unexecuted_blocks=1
00:17:04.873
00:17:04.873 '
00:17:04.873 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:17:04.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:04.873 --rc genhtml_branch_coverage=1
00:17:04.873 --rc genhtml_function_coverage=1
00:17:04.873 --rc genhtml_legend=1
00:17:04.873 --rc geninfo_all_blocks=1
00:17:04.873 --rc geninfo_unexecuted_blocks=1
00:17:04.873
00:17:04.873 '
00:17:04.873 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:17:04.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:04.873 --rc genhtml_branch_coverage=1
00:17:04.873 --rc genhtml_function_coverage=1
00:17:04.873 --rc genhtml_legend=1
00:17:04.873 --rc geninfo_all_blocks=1
00:17:04.873 --rc geninfo_unexecuted_blocks=1
00:17:04.873
00:17:04.873 '
00:17:04.873 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:17:04.873 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s
00:17:04.873 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:17:04.873 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:17:04.873 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:17:04.873 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:17:04.873 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:17:04.873 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:17:04.873 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:17:04.873 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:17:04.873 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:17:04.873 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:17:04.873 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:17:04.873 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:17:04.873 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:17:04.873 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:17:04.873 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:17:04.873 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:17:04.873 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:17:04.873 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob
00:17:04.873 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:17:04.873 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:17:04.873 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:17:04.873 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:04.873 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:04.873 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:04.873 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH
00:17:04.873 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:04.873 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0
00:17:04.873 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:17:04.873 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:17:04.873 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:17:04.873 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:17:04.873 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:17:04.873 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:17:04.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:17:04.873 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:17:04.873 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:17:04.873 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0
00:17:04.873 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5
00:17:04.873 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit
00:17:04.873 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:17:04.873 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:17:04.873 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs
00:17:04.873 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no
00:17:04.873 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns
00:17:04.873 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:17:04.873 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:17:04.873 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:17:04.873 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:17:04.873 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:17:04.873 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable
00:17:04.873 02:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:07.404 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:17:07.404 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=()
00:17:07.404 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs
00:17:07.404 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=()
00:17:07.404 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:17:07.404 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=()
00:17:07.404 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers
00:17:07.404 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=()
00:17:07.404 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs
00:17:07.404 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=()
00:17:07.404 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810
00:17:07.404 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=()
00:17:07.404 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722
00:17:07.404 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=()
00:17:07.404 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx
00:17:07.404 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:17:07.404 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:17:07.404 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:17:07.404 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:17:07.404 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:17:07.404 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:17:07.404 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:17:07.404 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:17:07.404 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:17:07.404 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:17:07.404 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:17:07.404 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:17:07.404 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:17:07.404 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:17:07.404 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:17:07.404 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:17:07.404 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:17:07.404 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:17:07.404 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:17:07.404 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:17:07.404 Found 0000:0a:00.0 (0x8086 - 0x159b)
00:17:07.404 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:17:07.404 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:17:07.404 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:17:07.404 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:17:07.404 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:17:07.404 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:17:07.404 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:17:07.404 Found 0000:0a:00.1 (0x8086 - 0x159b)
00:17:07.404 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:17:07.404 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:17:07.404 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:17:07.404 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:17:07.404 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:17:07.404 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:17:07.404 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:17:07.404 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:17:07.404 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:17:07.404 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:17:07.404 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:07.404 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:07.405 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:07.405 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:07.405 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:07.405 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:07.405 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:07.405 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:07.405 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:07.405 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:07.405 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:07.405 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:07.405 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:07.405 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:07.405 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:07.405 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:07.405 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:07.405 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:07.405 02:37:15 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:07.405 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:17:07.405 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:07.405 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:07.405 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:07.405 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:07.405 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:07.405 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:07.405 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:07.405 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:07.405 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:07.405 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:07.405 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:07.405 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:07.405 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:07.405 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:07.405 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:07.405 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:07.405 
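The device discovery above (common.sh@411-428) maps each PCI address to its kernel net devices by globbing the device's sysfs `net/` directory and then stripping the path prefix. A minimal sketch of that pattern, using a mock sysfs tree so it runs without E810 hardware (the temp directory and its contents are stand-ins, not paths from this run):

```shell
#!/usr/bin/env bash
# Sketch of the PCI-to-netdev mapping in common.sh@411-428: pci_net_devs
# is a glob over the device's sysfs net/ directory, then the leading path
# is stripped with ##*/ so only interface names remain. A mock sysfs tree
# stands in for real hardware here.
set -euo pipefail

sysfs=$(mktemp -d)                 # stand-in for /sys
pci="0000:0a:00.0"
mkdir -p "$sysfs/bus/pci/devices/$pci/net/cvl_0_0"

pci_net_devs=("$sysfs/bus/pci/devices/$pci/net/"*)
pci_net_devs=("${pci_net_devs[@]##*/}")   # keep only the interface names

echo "Found net devices under $pci: ${pci_net_devs[*]}"
rm -rf "$sysfs"
```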
02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:07.405 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:07.405 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:07.405 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:07.405 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:07.405 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:07.405 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:07.405 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:07.405 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:07.405 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:07.405 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:07.405 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.342 ms 00:17:07.405 00:17:07.405 --- 10.0.0.2 ping statistics --- 00:17:07.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:07.405 rtt min/avg/max/mdev = 0.342/0.342/0.342/0.000 ms 00:17:07.405 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:07.405 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:07.405 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:17:07.405 00:17:07.405 --- 10.0.0.1 ping statistics --- 00:17:07.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:07.405 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:17:07.405 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:07.405 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:17:07.405 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:07.405 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:07.405 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:07.405 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:07.405 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:07.405 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:07.405 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:07.405 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:17:07.405 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:07.405 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:07.405 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.405 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=2941226 00:17:07.405 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:07.405 
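The nvmf_tcp_init sequence just logged (common.sh@271-291) moves the target-side port into a network namespace so initiator and target traffic actually crosses the link between the two ports, then opens port 4420 and verifies reachability with ping. A dry-run sketch of that plumbing; `run()` only records and echoes each command, so this is runnable without root, and swapping its body for `"$@"` would apply the commands for real:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the TCP test topology from common.sh@271-287: the
# target port (cvl_0_0) is moved into a namespace, the initiator keeps
# 10.0.0.1 on cvl_0_1, the target gets 10.0.0.2, and port 4420 is opened.
# run() records/echoes instead of executing; replace its body with "$@"
# (as root) to apply.
set -euo pipefail

NS=cvl_0_0_ns_spdk
cmds=()
run() { cmds+=("$*"); echo "+ $*"; }

run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"
run ip addr add 10.0.0.1/24 dev cvl_0_1
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
```

The ping in both directions afterwards (host to 10.0.0.2, namespace to 10.0.0.1) is what lets common.sh@450 return 0 and declare the fabric usable.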
02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 2941226 00:17:07.405 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 2941226 ']' 00:17:07.405 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:07.405 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:07.405 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:07.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:07.405 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:07.405 02:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.405 [2024-11-17 02:37:15.607520] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:17:07.405 [2024-11-17 02:37:15.607681] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:07.405 [2024-11-17 02:37:15.753182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:07.664 [2024-11-17 02:37:15.891419] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:07.664 [2024-11-17 02:37:15.891505] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:07.664 [2024-11-17 02:37:15.891531] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:07.664 [2024-11-17 02:37:15.891556] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
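The waitforlisten step above launches nvmf_tgt inside the namespace and then polls until the RPC socket (`/var/tmp/spdk.sock` by default, `max_retries=100`) appears. A sketch of that polling pattern; to stay runnable without a live target, a temp file stands in for the socket and `-e` replaces the `-S` socket test:

```shell
#!/usr/bin/env bash
# Sketch of the waitforlisten pattern: poll for the RPC UNIX socket with
# a retry cap. A temp file stands in for /var/tmp/spdk.sock here, so the
# existence test is -e rather than -S.
set -euo pipefail

waitforsock() {
    local sock=$1 retries=${2:-100} i
    for ((i = 0; i < retries; i++)); do
        [ -e "$sock" ] && return 0   # real helper would test -S
        sleep 0.1
    done
    echo "timed out waiting for $sock" >&2
    return 1
}

sock=$(mktemp)                       # stand-in for /var/tmp/spdk.sock
waitforsock "$sock" && status=listening
echo "$sock is $status"
```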
00:17:07.664 [2024-11-17 02:37:15.891575] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:07.664 [2024-11-17 02:37:15.894557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:07.664 [2024-11-17 02:37:15.894604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:07.664 [2024-11-17 02:37:15.894687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:07.664 [2024-11-17 02:37:15.894708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:08.230 02:37:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:08.230 02:37:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:17:08.230 02:37:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:08.230 02:37:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:08.230 02:37:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:08.230 02:37:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:08.230 02:37:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:17:08.230 02:37:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.230 02:37:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:08.230 02:37:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.230 02:37:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:17:08.230 "tick_rate": 2700000000, 00:17:08.230 "poll_groups": [ 00:17:08.230 { 00:17:08.230 "name": "nvmf_tgt_poll_group_000", 00:17:08.230 "admin_qpairs": 0, 00:17:08.230 "io_qpairs": 0, 00:17:08.230 
"current_admin_qpairs": 0, 00:17:08.230 "current_io_qpairs": 0, 00:17:08.230 "pending_bdev_io": 0, 00:17:08.230 "completed_nvme_io": 0, 00:17:08.230 "transports": [] 00:17:08.230 }, 00:17:08.230 { 00:17:08.230 "name": "nvmf_tgt_poll_group_001", 00:17:08.230 "admin_qpairs": 0, 00:17:08.230 "io_qpairs": 0, 00:17:08.230 "current_admin_qpairs": 0, 00:17:08.230 "current_io_qpairs": 0, 00:17:08.230 "pending_bdev_io": 0, 00:17:08.230 "completed_nvme_io": 0, 00:17:08.230 "transports": [] 00:17:08.230 }, 00:17:08.230 { 00:17:08.230 "name": "nvmf_tgt_poll_group_002", 00:17:08.230 "admin_qpairs": 0, 00:17:08.230 "io_qpairs": 0, 00:17:08.230 "current_admin_qpairs": 0, 00:17:08.230 "current_io_qpairs": 0, 00:17:08.230 "pending_bdev_io": 0, 00:17:08.230 "completed_nvme_io": 0, 00:17:08.230 "transports": [] 00:17:08.230 }, 00:17:08.230 { 00:17:08.230 "name": "nvmf_tgt_poll_group_003", 00:17:08.230 "admin_qpairs": 0, 00:17:08.230 "io_qpairs": 0, 00:17:08.230 "current_admin_qpairs": 0, 00:17:08.230 "current_io_qpairs": 0, 00:17:08.230 "pending_bdev_io": 0, 00:17:08.230 "completed_nvme_io": 0, 00:17:08.230 "transports": [] 00:17:08.230 } 00:17:08.230 ] 00:17:08.230 }' 00:17:08.230 02:37:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:17:08.230 02:37:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:17:08.230 02:37:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:17:08.230 02:37:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:17:08.230 02:37:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:17:08.230 02:37:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:17:08.230 02:37:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:17:08.230 02:37:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # 
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:08.230 02:37:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.230 02:37:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:08.230 [2024-11-17 02:37:16.662736] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:08.230 02:37:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.230 02:37:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:17:08.230 02:37:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.230 02:37:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:08.488 02:37:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.488 02:37:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:17:08.488 "tick_rate": 2700000000, 00:17:08.488 "poll_groups": [ 00:17:08.488 { 00:17:08.488 "name": "nvmf_tgt_poll_group_000", 00:17:08.488 "admin_qpairs": 0, 00:17:08.488 "io_qpairs": 0, 00:17:08.488 "current_admin_qpairs": 0, 00:17:08.489 "current_io_qpairs": 0, 00:17:08.489 "pending_bdev_io": 0, 00:17:08.489 "completed_nvme_io": 0, 00:17:08.489 "transports": [ 00:17:08.489 { 00:17:08.489 "trtype": "TCP" 00:17:08.489 } 00:17:08.489 ] 00:17:08.489 }, 00:17:08.489 { 00:17:08.489 "name": "nvmf_tgt_poll_group_001", 00:17:08.489 "admin_qpairs": 0, 00:17:08.489 "io_qpairs": 0, 00:17:08.489 "current_admin_qpairs": 0, 00:17:08.489 "current_io_qpairs": 0, 00:17:08.489 "pending_bdev_io": 0, 00:17:08.489 "completed_nvme_io": 0, 00:17:08.489 "transports": [ 00:17:08.489 { 00:17:08.489 "trtype": "TCP" 00:17:08.489 } 00:17:08.489 ] 00:17:08.489 }, 00:17:08.489 { 00:17:08.489 "name": "nvmf_tgt_poll_group_002", 00:17:08.489 "admin_qpairs": 0, 00:17:08.489 "io_qpairs": 0, 00:17:08.489 
"current_admin_qpairs": 0, 00:17:08.489 "current_io_qpairs": 0, 00:17:08.489 "pending_bdev_io": 0, 00:17:08.489 "completed_nvme_io": 0, 00:17:08.489 "transports": [ 00:17:08.489 { 00:17:08.489 "trtype": "TCP" 00:17:08.489 } 00:17:08.489 ] 00:17:08.489 }, 00:17:08.489 { 00:17:08.489 "name": "nvmf_tgt_poll_group_003", 00:17:08.489 "admin_qpairs": 0, 00:17:08.489 "io_qpairs": 0, 00:17:08.489 "current_admin_qpairs": 0, 00:17:08.489 "current_io_qpairs": 0, 00:17:08.489 "pending_bdev_io": 0, 00:17:08.489 "completed_nvme_io": 0, 00:17:08.489 "transports": [ 00:17:08.489 { 00:17:08.489 "trtype": "TCP" 00:17:08.489 } 00:17:08.489 ] 00:17:08.489 } 00:17:08.489 ] 00:17:08.489 }' 00:17:08.489 02:37:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:17:08.489 02:37:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:17:08.489 02:37:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:17:08.489 02:37:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:08.489 02:37:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:17:08.489 02:37:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:17:08.489 02:37:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:17:08.489 02:37:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:17:08.489 02:37:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:08.489 02:37:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:17:08.489 02:37:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:17:08.489 02:37:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # 
MALLOC_BDEV_SIZE=64 00:17:08.489 02:37:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:17:08.489 02:37:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:08.489 02:37:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.489 02:37:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:08.489 Malloc1 00:17:08.489 02:37:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.489 02:37:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:08.489 02:37:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.489 02:37:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:08.489 02:37:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.489 02:37:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:08.489 02:37:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.489 02:37:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:08.489 02:37:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.489 02:37:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:17:08.489 02:37:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.489 02:37:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:08.489 02:37:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.489 02:37:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:08.489 02:37:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.489 02:37:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:08.489 [2024-11-17 02:37:16.866284] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:08.489 02:37:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.489 02:37:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:17:08.489 02:37:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:17:08.489 02:37:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:17:08.489 02:37:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:17:08.489 02:37:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:08.489 02:37:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:17:08.489 02:37:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:08.489 
02:37:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:17:08.489 02:37:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:08.489 02:37:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:17:08.489 02:37:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:17:08.489 02:37:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:17:08.489 [2024-11-17 02:37:16.889538] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:17:08.489 Failed to write to /dev/nvme-fabrics: Input/output error 00:17:08.489 could not add new controller: failed to write to nvme-fabrics device 00:17:08.489 02:37:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:17:08.489 02:37:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:08.489 02:37:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:08.489 02:37:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:08.489 02:37:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:08.489 02:37:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.489 02:37:16 
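The `NOT nvme connect ...` flow just traced (rpc.sh@58 via autotest_common.sh@652-679) runs a command that is expected to fail, here connecting before the host NQN is allowed on the subsystem, and the test passes only because the connect is rejected. A simplified sketch of that wrapper, omitting the helper's `es=`/`valid_exec_arg` bookkeeping:

```shell
#!/usr/bin/env bash
# Simplified sketch of the NOT wrapper: invert the command's status so an
# expected failure counts as success. Not the full autotest_common.sh
# implementation (no es= tracking, no executable-type checks).

NOT() {
    if "$@"; then
        return 1   # command unexpectedly succeeded
    else
        return 0   # command failed, which is what the test wants
    fi
}

NOT false && echo "expected failure observed"
```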
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:08.489 02:37:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.489 02:37:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:09.424 02:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:17:09.424 02:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:09.424 02:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:09.424 02:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:09.424 02:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:11.325 02:37:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:11.325 02:37:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:11.325 02:37:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:11.325 02:37:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:11.325 02:37:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:11.325 02:37:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:11.325 02:37:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:11.325 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:11.325 02:37:19 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:11.325 02:37:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:11.325 02:37:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:11.325 02:37:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:11.325 02:37:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:11.325 02:37:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:11.583 02:37:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:11.583 02:37:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:11.583 02:37:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.583 02:37:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:11.583 02:37:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.583 02:37:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:11.583 02:37:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:17:11.583 02:37:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp 
-n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:11.583 02:37:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:17:11.583 02:37:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:11.583 02:37:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:17:11.583 02:37:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:11.583 02:37:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:17:11.583 02:37:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:11.583 02:37:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:17:11.583 02:37:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:17:11.583 02:37:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:11.583 [2024-11-17 02:37:19.820855] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:17:11.583 Failed to write to /dev/nvme-fabrics: Input/output error 00:17:11.583 could not add new controller: failed to write to nvme-fabrics device 00:17:11.583 02:37:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:17:11.583 02:37:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:11.583 02:37:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:11.583 02:37:19 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:11.583 02:37:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:17:11.583 02:37:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.583 02:37:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:11.583 02:37:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.583 02:37:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:12.150 02:37:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:17:12.150 02:37:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:12.150 02:37:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:12.150 02:37:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:12.150 02:37:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:14.676 02:37:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:14.676 02:37:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:14.676 02:37:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:14.676 02:37:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:14.676 02:37:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( 
nvme_devices == nvme_device_counter )) 00:17:14.676 02:37:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:14.676 02:37:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:14.676 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:14.676 02:37:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:14.676 02:37:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:14.676 02:37:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:14.676 02:37:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:14.676 02:37:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:14.676 02:37:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:14.676 02:37:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:14.676 02:37:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:14.676 02:37:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.676 02:37:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:14.676 02:37:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.677 02:37:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:17:14.677 02:37:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:14.677 02:37:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s 
SPDKISFASTANDAWESOME 00:17:14.677 02:37:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.677 02:37:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:14.677 02:37:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.677 02:37:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:14.677 02:37:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.677 02:37:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:14.677 [2024-11-17 02:37:22.731649] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:14.677 02:37:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.677 02:37:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:14.677 02:37:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.677 02:37:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:14.677 02:37:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.677 02:37:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:14.677 02:37:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.677 02:37:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:14.677 02:37:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.677 02:37:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- 
# nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:14.935 02:37:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:14.935 02:37:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:14.935 02:37:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:14.935 02:37:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:14.935 02:37:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:17.462 02:37:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:17.462 02:37:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:17.462 02:37:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:17.462 02:37:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:17.462 02:37:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:17.462 02:37:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:17.462 02:37:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:17.462 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:17.462 02:37:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:17.462 02:37:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:17.462 02:37:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:17.462 02:37:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:17.462 02:37:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:17.462 02:37:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:17.462 02:37:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:17.462 02:37:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:17.462 02:37:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.462 02:37:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:17.462 02:37:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.462 02:37:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:17.462 02:37:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.462 02:37:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:17.462 02:37:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.462 02:37:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:17.462 02:37:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:17.462 02:37:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.462 02:37:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:17.462 02:37:25 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.462 02:37:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:17.462 02:37:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.462 02:37:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:17.462 [2024-11-17 02:37:25.587764] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:17.462 02:37:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.462 02:37:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:17.462 02:37:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.462 02:37:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:17.462 02:37:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.462 02:37:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:17.462 02:37:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.462 02:37:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:17.462 02:37:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.462 02:37:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:18.037 02:37:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:18.037 02:37:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:18.037 02:37:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:18.037 02:37:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:18.037 02:37:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:19.968 02:37:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:19.968 02:37:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:19.968 02:37:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:19.968 02:37:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:19.968 02:37:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:19.968 02:37:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:19.968 02:37:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:20.226 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:20.226 02:37:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:20.226 02:37:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:20.226 02:37:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:20.226 02:37:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:20.226 02:37:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:20.226 02:37:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:20.226 02:37:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:20.226 02:37:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:20.226 02:37:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.226 02:37:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:20.226 02:37:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.226 02:37:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:20.226 02:37:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.226 02:37:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:20.226 02:37:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.226 02:37:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:20.226 02:37:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:20.226 02:37:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.226 02:37:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:20.226 02:37:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.226 02:37:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:17:20.226 02:37:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.226 02:37:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:20.226 [2024-11-17 02:37:28.489050] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:20.226 02:37:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.226 02:37:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:20.226 02:37:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.226 02:37:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:20.226 02:37:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.226 02:37:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:20.226 02:37:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.226 02:37:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:20.226 02:37:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.226 02:37:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:20.792 02:37:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:20.792 02:37:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:20.792 02:37:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 
-- # local nvme_device_counter=1 nvme_devices=0 00:17:20.792 02:37:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:20.792 02:37:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:22.689 02:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:22.689 02:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:22.689 02:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:22.689 02:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:22.689 02:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:22.689 02:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:22.689 02:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:22.948 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:22.948 02:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:22.948 02:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:22.948 02:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:22.948 02:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:22.948 02:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:22.948 02:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:22.948 02:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1235 -- # return 0 00:17:22.948 02:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:22.948 02:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.948 02:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:22.948 02:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.948 02:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:22.948 02:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.948 02:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:22.948 02:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.948 02:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:22.948 02:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:22.948 02:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.948 02:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:22.948 02:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.948 02:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:22.948 02:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.948 02:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:22.948 [2024-11-17 02:37:31.331399] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:22.949 02:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.949 02:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:22.949 02:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.949 02:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:22.949 02:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.949 02:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:22.949 02:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.949 02:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:22.949 02:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.949 02:37:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:23.883 02:37:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:23.883 02:37:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:23.883 02:37:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:23.883 02:37:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:23.883 02:37:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # 
sleep 2 00:17:25.782 02:37:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:25.782 02:37:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:25.782 02:37:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:25.782 02:37:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:25.782 02:37:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:25.782 02:37:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:25.782 02:37:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:25.782 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:25.782 02:37:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:25.782 02:37:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:25.782 02:37:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:25.782 02:37:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:25.782 02:37:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:25.782 02:37:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:25.782 02:37:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:25.782 02:37:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:25.782 02:37:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.782 02:37:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.782 02:37:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.783 02:37:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:25.783 02:37:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.783 02:37:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.783 02:37:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.783 02:37:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:25.783 02:37:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:25.783 02:37:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.783 02:37:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.783 02:37:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.783 02:37:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:25.783 02:37:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.783 02:37:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.783 [2024-11-17 02:37:34.223245] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:25.783 02:37:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.783 02:37:34 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:25.783 02:37:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.783 02:37:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.783 02:37:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.783 02:37:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:25.783 02:37:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.783 02:37:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:26.041 02:37:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.041 02:37:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:26.607 02:37:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:26.607 02:37:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:26.607 02:37:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:26.607 02:37:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:26.607 02:37:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:28.507 02:37:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:28.507 02:37:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l 
-o NAME,SERIAL 00:17:28.507 02:37:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:28.507 02:37:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:28.507 02:37:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:28.507 02:37:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:28.507 02:37:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:28.765 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:28.765 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:28.765 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:28.765 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:28.765 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:28.765 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:28.765 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:28.765 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:28.765 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:28.765 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.765 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.765 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:17:28.765 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:28.765 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.765 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.765 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.765 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:17:28.765 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:28.765 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:28.765 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.765 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.765 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.765 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:28.765 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.765 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.765 [2024-11-17 02:37:37.076413] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:28.765 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.765 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:28.766 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.766 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.766 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.766 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:28.766 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.766 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.766 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.766 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:28.766 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.766 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.766 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.766 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:28.766 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.766 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.766 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.766 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:28.766 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:28.766 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.766 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.766 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.766 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:28.766 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.766 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.766 [2024-11-17 02:37:37.124479] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:28.766 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.766 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:28.766 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.766 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.766 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.766 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:28.766 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.766 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.766 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.766 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:28.766 
02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.766 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.766 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.766 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:28.766 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.766 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.766 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.766 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:28.766 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:28.766 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.766 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.766 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.766 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:28.766 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.766 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.766 [2024-11-17 02:37:37.172608] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:28.766 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:17:28.766 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:28.766 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.766 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.766 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.766 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:28.766 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.766 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.766 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.766 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:28.766 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.766 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.766 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.766 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:28.766 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.766 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.766 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.766 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:28.766 
02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:28.766 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.766 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.766 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.766 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:28.766 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.766 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.766 [2024-11-17 02:37:37.220811] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:28.766 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.025 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:29.025 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.025 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:29.025 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.025 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:29.025 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.025 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:29.025 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.025 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:29.025 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.025 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:29.025 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.025 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:29.025 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.025 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:29.025 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.025 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:29.025 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:29.025 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.025 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:29.025 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.025 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:29.025 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.025 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:29.025 [2024-11-17 
02:37:37.268950] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:29.025 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.025 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:29.025 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.025 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:29.025 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.025 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:29.025 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.025 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:29.025 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.025 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:29.025 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.025 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:29.025 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.025 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:29.025 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.025 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:29.025 
02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.025 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:17:29.025 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.025 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:29.025 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.025 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:17:29.025 "tick_rate": 2700000000, 00:17:29.025 "poll_groups": [ 00:17:29.025 { 00:17:29.025 "name": "nvmf_tgt_poll_group_000", 00:17:29.025 "admin_qpairs": 2, 00:17:29.025 "io_qpairs": 84, 00:17:29.025 "current_admin_qpairs": 0, 00:17:29.025 "current_io_qpairs": 0, 00:17:29.025 "pending_bdev_io": 0, 00:17:29.025 "completed_nvme_io": 184, 00:17:29.025 "transports": [ 00:17:29.025 { 00:17:29.025 "trtype": "TCP" 00:17:29.025 } 00:17:29.025 ] 00:17:29.025 }, 00:17:29.025 { 00:17:29.025 "name": "nvmf_tgt_poll_group_001", 00:17:29.025 "admin_qpairs": 2, 00:17:29.025 "io_qpairs": 84, 00:17:29.025 "current_admin_qpairs": 0, 00:17:29.025 "current_io_qpairs": 0, 00:17:29.025 "pending_bdev_io": 0, 00:17:29.025 "completed_nvme_io": 184, 00:17:29.025 "transports": [ 00:17:29.025 { 00:17:29.025 "trtype": "TCP" 00:17:29.025 } 00:17:29.025 ] 00:17:29.025 }, 00:17:29.025 { 00:17:29.025 "name": "nvmf_tgt_poll_group_002", 00:17:29.025 "admin_qpairs": 1, 00:17:29.025 "io_qpairs": 84, 00:17:29.025 "current_admin_qpairs": 0, 00:17:29.025 "current_io_qpairs": 0, 00:17:29.025 "pending_bdev_io": 0, 00:17:29.025 "completed_nvme_io": 134, 00:17:29.025 "transports": [ 00:17:29.025 { 00:17:29.025 "trtype": "TCP" 00:17:29.025 } 00:17:29.025 ] 00:17:29.025 }, 00:17:29.025 { 00:17:29.025 "name": "nvmf_tgt_poll_group_003", 00:17:29.025 "admin_qpairs": 2, 00:17:29.025 "io_qpairs": 84, 
00:17:29.025 "current_admin_qpairs": 0, 00:17:29.025 "current_io_qpairs": 0, 00:17:29.025 "pending_bdev_io": 0, 00:17:29.025 "completed_nvme_io": 184, 00:17:29.025 "transports": [ 00:17:29.025 { 00:17:29.026 "trtype": "TCP" 00:17:29.026 } 00:17:29.026 ] 00:17:29.026 } 00:17:29.026 ] 00:17:29.026 }' 00:17:29.026 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:17:29.026 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:17:29.026 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:17:29.026 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:29.026 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:17:29.026 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:17:29.026 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:17:29.026 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:17:29.026 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:29.026 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:17:29.026 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:17:29.026 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:17:29.026 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:17:29.026 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:29.026 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:17:29.026 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:29.026 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:17:29.026 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:29.026 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:29.026 rmmod nvme_tcp 00:17:29.026 rmmod nvme_fabrics 00:17:29.026 rmmod nvme_keyring 00:17:29.026 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:29.026 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:17:29.026 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:17:29.026 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 2941226 ']' 00:17:29.026 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 2941226 00:17:29.026 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 2941226 ']' 00:17:29.026 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 2941226 00:17:29.026 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:17:29.026 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:29.026 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2941226 00:17:29.026 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:29.026 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:29.026 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2941226' 00:17:29.026 killing process with pid 2941226 00:17:29.026 02:37:37 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 2941226 00:17:29.026 02:37:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 2941226 00:17:30.399 02:37:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:30.399 02:37:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:30.399 02:37:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:30.399 02:37:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:17:30.399 02:37:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:17:30.399 02:37:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:30.399 02:37:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:17:30.399 02:37:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:30.399 02:37:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:30.399 02:37:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:30.399 02:37:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:30.399 02:37:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:32.934 02:37:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:32.934 00:17:32.934 real 0m27.692s 00:17:32.934 user 1m29.301s 00:17:32.934 sys 0m4.552s 00:17:32.934 02:37:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:32.934 02:37:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:32.934 ************************************ 00:17:32.934 END TEST 
nvmf_rpc 00:17:32.934 ************************************ 00:17:32.934 02:37:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:17:32.934 02:37:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:32.934 02:37:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:32.934 02:37:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:32.934 ************************************ 00:17:32.934 START TEST nvmf_invalid 00:17:32.934 ************************************ 00:17:32.934 02:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:17:32.934 * Looking for test storage... 00:17:32.934 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:32.934 02:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:32.934 02:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:17:32.934 02:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:32.934 02:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:32.934 02:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:32.934 02:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:32.934 02:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:32.934 02:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:17:32.934 02:37:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- scripts/common.sh@336 -- # read -ra ver1 00:17:32.934 02:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:17:32.934 02:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:17:32.934 02:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:17:32.934 02:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:17:32.934 02:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:17:32.934 02:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:32.934 02:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:17:32.934 02:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:17:32.934 02:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:32.934 02:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:32.934 02:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:17:32.934 02:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:17:32.934 02:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:32.934 02:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:17:32.934 02:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:17:32.934 02:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:17:32.934 02:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:17:32.934 02:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:32.934 02:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:17:32.934 02:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:17:32.934 02:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:32.934 02:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:32.934 02:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:17:32.934 02:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:32.934 02:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:32.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:32.934 --rc genhtml_branch_coverage=1 00:17:32.934 --rc genhtml_function_coverage=1 00:17:32.934 --rc genhtml_legend=1 00:17:32.934 --rc geninfo_all_blocks=1 00:17:32.934 --rc geninfo_unexecuted_blocks=1 00:17:32.934 00:17:32.934 ' 
00:17:32.934 02:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:32.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:32.934 --rc genhtml_branch_coverage=1 00:17:32.934 --rc genhtml_function_coverage=1 00:17:32.934 --rc genhtml_legend=1 00:17:32.934 --rc geninfo_all_blocks=1 00:17:32.934 --rc geninfo_unexecuted_blocks=1 00:17:32.934 00:17:32.934 ' 00:17:32.934 02:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:32.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:32.934 --rc genhtml_branch_coverage=1 00:17:32.934 --rc genhtml_function_coverage=1 00:17:32.934 --rc genhtml_legend=1 00:17:32.934 --rc geninfo_all_blocks=1 00:17:32.934 --rc geninfo_unexecuted_blocks=1 00:17:32.934 00:17:32.934 ' 00:17:32.934 02:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:32.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:32.934 --rc genhtml_branch_coverage=1 00:17:32.934 --rc genhtml_function_coverage=1 00:17:32.934 --rc genhtml_legend=1 00:17:32.934 --rc geninfo_all_blocks=1 00:17:32.934 --rc geninfo_unexecuted_blocks=1 00:17:32.934 00:17:32.934 ' 00:17:32.934 02:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:32.934 02:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:17:32.934 02:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:32.934 02:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:32.934 02:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:32.934 02:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:32.934 02:37:41 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:32.934 02:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:32.934 02:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:32.934 02:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:32.934 02:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:32.934 02:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:32.934 02:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:32.934 02:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:32.934 02:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:32.934 02:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:32.934 02:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:32.934 02:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:32.934 02:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:32.934 02:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:17:32.934 02:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:32.934 02:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:32.934 
02:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:32.935 02:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.935 02:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.935 02:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.935 02:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:17:32.935 02:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.935 02:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:17:32.935 02:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:32.935 02:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:32.935 02:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:32.935 02:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:32.935 02:37:41 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:32.935 02:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:32.935 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:32.935 02:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:32.935 02:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:32.935 02:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:32.935 02:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:17:32.935 02:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:32.935 02:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:17:32.935 02:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:17:32.935 02:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:17:32.935 02:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:17:32.935 02:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:32.935 02:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:32.935 02:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:32.935 02:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:32.935 02:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:32.935 02:37:41 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:32.935 02:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:32.935 02:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:32.935 02:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:32.935 02:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:32.935 02:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:17:32.935 02:37:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:34.842 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:34.842 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:17:34.842 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:34.842 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:34.842 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:34.842 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:34.842 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:34.842 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:17:34.842 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:34.842 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:17:34.842 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:17:34.842 02:37:43 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:17:34.842 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:17:34.842 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:17:34.842 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:17:34.842 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:34.842 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:34.842 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:34.842 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:34.842 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:34.842 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:34.842 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:34.842 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:34.842 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:34.842 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:34.842 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:34.842 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:34.842 02:37:43 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:34.842 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:34.842 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:34.842 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:34.842 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:34.842 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:34.842 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:34.842 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:34.842 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:34.842 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:34.842 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:34.842 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:34.842 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:34.842 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:34.842 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:34.842 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:34.842 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:34.842 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:34.842 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:17:34.842 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:34.842 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:34.842 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:34.842 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:34.842 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:34.842 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:34.842 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:34.842 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:34.842 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:34.842 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:34.842 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:34.842 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:34.842 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:34.842 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:34.842 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:34.842 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:34.842 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:34.842 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:34.843 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:34.843 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:34.843 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:34.843 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:34.843 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:34.843 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:34.843 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:34.843 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:34.843 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:34.843 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:17:34.843 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:34.843 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:34.843 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:34.843 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:34.843 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:34.843 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:34.843 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:34.843 02:37:43 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:34.843 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:34.843 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:34.843 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:34.843 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:34.843 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:34.843 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:34.843 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:34.843 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:34.843 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:34.843 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:34.843 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:34.843 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:34.843 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:34.843 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:34.843 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:34.843 02:37:43 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:34.843 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:34.843 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:34.843 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:34.843 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:17:34.843 00:17:34.843 --- 10.0.0.2 ping statistics --- 00:17:34.843 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:34.843 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:17:34.843 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:34.843 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:34.843 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:17:34.843 00:17:34.843 --- 10.0.0.1 ping statistics --- 00:17:34.843 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:34.843 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:17:34.843 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:34.843 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:17:34.843 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:34.843 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:34.843 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:34.843 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:34.843 02:37:43 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:34.843 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:34.843 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:34.843 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:17:34.843 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:34.843 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:34.843 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:34.843 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=2946046 00:17:34.843 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:34.843 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 2946046 00:17:34.843 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 2946046 ']' 00:17:34.843 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:34.843 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:34.843 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:34.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:34.843 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:34.843 02:37:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:35.102 [2024-11-17 02:37:43.379405] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:17:35.102 [2024-11-17 02:37:43.379534] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:35.102 [2024-11-17 02:37:43.532233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:35.361 [2024-11-17 02:37:43.678230] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:35.361 [2024-11-17 02:37:43.678311] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:35.361 [2024-11-17 02:37:43.678341] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:35.361 [2024-11-17 02:37:43.678367] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:35.361 [2024-11-17 02:37:43.678386] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:35.361 [2024-11-17 02:37:43.681157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:35.361 [2024-11-17 02:37:43.681194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:35.361 [2024-11-17 02:37:43.681244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:35.361 [2024-11-17 02:37:43.681251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:35.927 02:37:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:35.927 02:37:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:17:35.927 02:37:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:35.927 02:37:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:35.927 02:37:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:35.927 02:37:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:35.927 02:37:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:35.927 02:37:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode3010 00:17:36.185 [2024-11-17 02:37:44.620814] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:17:36.185 02:37:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:17:36.185 { 00:17:36.185 "nqn": "nqn.2016-06.io.spdk:cnode3010", 00:17:36.185 "tgt_name": "foobar", 00:17:36.185 "method": "nvmf_create_subsystem", 00:17:36.185 "req_id": 1 00:17:36.185 } 00:17:36.185 Got JSON-RPC error 
response 00:17:36.185 response: 00:17:36.185 { 00:17:36.185 "code": -32603, 00:17:36.185 "message": "Unable to find target foobar" 00:17:36.185 }' 00:17:36.185 02:37:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:17:36.185 { 00:17:36.185 "nqn": "nqn.2016-06.io.spdk:cnode3010", 00:17:36.185 "tgt_name": "foobar", 00:17:36.185 "method": "nvmf_create_subsystem", 00:17:36.185 "req_id": 1 00:17:36.185 } 00:17:36.185 Got JSON-RPC error response 00:17:36.185 response: 00:17:36.185 { 00:17:36.185 "code": -32603, 00:17:36.185 "message": "Unable to find target foobar" 00:17:36.185 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:17:36.443 02:37:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:17:36.443 02:37:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode1687 00:17:36.443 [2024-11-17 02:37:44.885787] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1687: invalid serial number 'SPDKISFASTANDAWESOME' 00:17:36.702 02:37:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:17:36.702 { 00:17:36.702 "nqn": "nqn.2016-06.io.spdk:cnode1687", 00:17:36.702 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:36.702 "method": "nvmf_create_subsystem", 00:17:36.702 "req_id": 1 00:17:36.702 } 00:17:36.702 Got JSON-RPC error response 00:17:36.702 response: 00:17:36.702 { 00:17:36.702 "code": -32602, 00:17:36.702 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:36.702 }' 00:17:36.702 02:37:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:17:36.702 { 00:17:36.702 "nqn": "nqn.2016-06.io.spdk:cnode1687", 00:17:36.702 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:36.702 "method": "nvmf_create_subsystem", 00:17:36.702 
"req_id": 1 00:17:36.702 } 00:17:36.702 Got JSON-RPC error response 00:17:36.702 response: 00:17:36.702 { 00:17:36.702 "code": -32602, 00:17:36.702 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:36.702 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:36.702 02:37:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:17:36.702 02:37:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode15948 00:17:36.702 [2024-11-17 02:37:45.150715] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15948: invalid model number 'SPDK_Controller' 00:17:36.961 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:17:36.961 { 00:17:36.961 "nqn": "nqn.2016-06.io.spdk:cnode15948", 00:17:36.961 "model_number": "SPDK_Controller\u001f", 00:17:36.961 "method": "nvmf_create_subsystem", 00:17:36.961 "req_id": 1 00:17:36.961 } 00:17:36.961 Got JSON-RPC error response 00:17:36.961 response: 00:17:36.961 { 00:17:36.961 "code": -32602, 00:17:36.961 "message": "Invalid MN SPDK_Controller\u001f" 00:17:36.961 }' 00:17:36.961 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:17:36.961 { 00:17:36.961 "nqn": "nqn.2016-06.io.spdk:cnode15948", 00:17:36.961 "model_number": "SPDK_Controller\u001f", 00:17:36.961 "method": "nvmf_create_subsystem", 00:17:36.961 "req_id": 1 00:17:36.961 } 00:17:36.961 Got JSON-RPC error response 00:17:36.961 response: 00:17:36.961 { 00:17:36.961 "code": -32602, 00:17:36.961 "message": "Invalid MN SPDK_Controller\u001f" 00:17:36.961 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:36.961 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:17:36.961 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 
00:17:36.961 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:36.961 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:36.961 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:36.961 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:36.961 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:36.961 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:17:36.961 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:17:36.961 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:17:36.961 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:36.961 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:36.961 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:17:36.961 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:17:36.961 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:17:36.961 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:36.961 02:37:45 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:36.961 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:17:36.961 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:17:36.961 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:17:36.961 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:36.961 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:36.961 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:17:36.961 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:17:36.961 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:17:36.961 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:36.961 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:36.961 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:17:36.961 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:17:36.961 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:17:36.961 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:36.961 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:36.961 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:17:36.961 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:17:36.961 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:17:36.961 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:36.961 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:36.961 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:17:36.961 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:17:36.961 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:17:36.961 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:36.961 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:36.961 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:17:36.961 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:17:36.961 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:17:36.961 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:36.961 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:36.961 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:17:36.961 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:17:36.961 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:17:36.961 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:36.961 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:36.961 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:17:36.961 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 
00:17:36.961 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:17:36.961 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:36.961 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:36.961 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:17:36.961 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:17:36.961 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:17:36.961 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:36.961 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:36.961 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:17:36.961 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:17:36.961 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:17:36.961 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:36.961 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:36.961 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:17:36.961 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:17:36.961 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:17:36.961 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:36.961 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:36.961 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:17:36.961 
02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:17:36.961 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:17:36.961 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:36.961 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:36.961 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:17:36.961 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:17:36.961 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:17:36.961 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:36.961 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:36.962 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:17:36.962 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:17:36.962 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:17:36.962 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:36.962 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:36.962 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:17:36.962 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:17:36.962 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:17:36.962 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:36.962 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:36.962 02:37:45 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:17:36.962 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:17:36.962 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:17:36.962 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:36.962 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:36.962 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:17:36.962 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:17:36.962 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:17:36.962 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:36.962 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:36.962 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:17:36.962 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:17:36.962 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:17:36.962 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:36.962 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:36.962 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:17:36.962 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:17:36.962 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:17:36.962 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:36.962 02:37:45 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:36.962 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ " == \- ]] 00:17:36.962 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '"~U%p?oBCP!vM5,R2|[8' 00:17:36.962 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '"~U%p?oBCP!vM5,R2|[8' nqn.2016-06.io.spdk:cnode13285 00:17:37.221 [2024-11-17 02:37:45.479831] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13285: invalid serial number '"~U%p?oBCP!vM5,R2|[8' 00:17:37.221 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:17:37.221 { 00:17:37.221 "nqn": "nqn.2016-06.io.spdk:cnode13285", 00:17:37.221 "serial_number": "\"~U%p?oBCP!vM5,R2\u007f|[8", 00:17:37.221 "method": "nvmf_create_subsystem", 00:17:37.221 "req_id": 1 00:17:37.221 } 00:17:37.221 Got JSON-RPC error response 00:17:37.221 response: 00:17:37.221 { 00:17:37.221 "code": -32602, 00:17:37.221 "message": "Invalid SN \"~U%p?oBCP!vM5,R2\u007f|[8" 00:17:37.221 }' 00:17:37.221 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:17:37.221 { 00:17:37.221 "nqn": "nqn.2016-06.io.spdk:cnode13285", 00:17:37.221 "serial_number": "\"~U%p?oBCP!vM5,R2\u007f|[8", 00:17:37.221 "method": "nvmf_create_subsystem", 00:17:37.221 "req_id": 1 00:17:37.221 } 00:17:37.221 Got JSON-RPC error response 00:17:37.221 response: 00:17:37.221 { 00:17:37.221 "code": -32602, 00:17:37.221 "message": "Invalid SN \"~U%p?oBCP!vM5,R2\u007f|[8" 00:17:37.221 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:37.221 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:17:37.221 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 
00:17:37.221 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:37.221 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:37.221 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:37.221 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:37.221 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:37.221 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:17:37.221 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:17:37.221 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:17:37.221 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:37.221 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:37.221 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:17:37.221 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:17:37.221 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:17:37.221 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:37.222 02:37:45 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:17:37.222 02:37:45 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:17:37.222 02:37:45 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:17:37.222 02:37:45 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:37.222 02:37:45 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:37.222 02:37:45 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:17:37.222 02:37:45 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:17:37.222 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:17:37.223 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:37.223 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:37.223 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:17:37.223 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:17:37.223 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:17:37.223 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:37.223 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:37.223 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:17:37.223 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:17:37.223 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:17:37.223 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:37.223 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:37.223 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:17:37.223 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:17:37.223 02:37:45 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:17:37.223 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:37.223 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:37.223 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:17:37.223 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:17:37.223 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:17:37.223 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:37.223 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:37.223 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:17:37.223 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:17:37.223 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:17:37.223 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:37.223 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:37.223 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:17:37.223 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:17:37.223 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:17:37.223 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:37.223 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:37.223 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:17:37.223 02:37:45 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:17:37.223 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:17:37.223 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:37.223 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:37.223 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:17:37.223 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:17:37.223 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:17:37.223 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:37.223 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:37.223 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:17:37.223 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:17:37.223 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:17:37.223 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:37.223 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:37.223 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:17:37.223 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:17:37.223 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:17:37.223 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:37.223 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:37.223 02:37:45 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:17:37.223 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:17:37.223 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:17:37.223 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:37.223 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:37.223 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:17:37.223 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:17:37.223 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:17:37.223 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:37.223 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:37.223 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:17:37.223 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:17:37.223 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:17:37.223 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:37.223 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:37.223 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:17:37.223 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:17:37.223 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:17:37.223 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:37.223 02:37:45 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:37.223 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:17:37.223 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:17:37.223 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:17:37.223 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:37.223 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:37.223 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ 4 == \- ]] 00:17:37.223 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '4zLs-v\(PE#DmT^ n:'\''+vRJf;7q1/:PH}Tjb~^[mv' 00:17:37.223 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '4zLs-v\(PE#DmT^ n:'\''+vRJf;7q1/:PH}Tjb~^[mv' nqn.2016-06.io.spdk:cnode25264 00:17:37.481 [2024-11-17 02:37:45.869212] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25264: invalid model number '4zLs-v\(PE#DmT^ n:'+vRJf;7q1/:PH}Tjb~^[mv' 00:17:37.481 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:17:37.481 { 00:17:37.481 "nqn": "nqn.2016-06.io.spdk:cnode25264", 00:17:37.481 "model_number": "4zLs-v\\(PE#DmT^ n:'\''+vRJf;7q1/:PH}Tjb~^[mv", 00:17:37.481 "method": "nvmf_create_subsystem", 00:17:37.481 "req_id": 1 00:17:37.481 } 00:17:37.481 Got JSON-RPC error response 00:17:37.481 response: 00:17:37.481 { 00:17:37.481 "code": -32602, 00:17:37.481 "message": "Invalid MN 4zLs-v\\(PE#DmT^ n:'\''+vRJf;7q1/:PH}Tjb~^[mv" 00:17:37.481 }' 00:17:37.481 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:17:37.481 { 00:17:37.481 
"nqn": "nqn.2016-06.io.spdk:cnode25264", 00:17:37.481 "model_number": "4zLs-v\\(PE#DmT^ n:'+vRJf;7q1/:PH}Tjb~^[mv", 00:17:37.481 "method": "nvmf_create_subsystem", 00:17:37.481 "req_id": 1 00:17:37.481 } 00:17:37.481 Got JSON-RPC error response 00:17:37.481 response: 00:17:37.481 { 00:17:37.481 "code": -32602, 00:17:37.481 "message": "Invalid MN 4zLs-v\\(PE#DmT^ n:'+vRJf;7q1/:PH}Tjb~^[mv" 00:17:37.481 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:37.481 02:37:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:17:37.739 [2024-11-17 02:37:46.134190] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:37.739 02:37:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:17:37.998 02:37:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:17:37.998 02:37:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:17:37.998 02:37:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:17:37.998 02:37:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:17:37.998 02:37:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:17:38.256 [2024-11-17 02:37:46.690858] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:17:38.256 02:37:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:17:38.256 { 00:17:38.256 "nqn": "nqn.2016-06.io.spdk:cnode", 00:17:38.256 "listen_address": { 00:17:38.256 "trtype": "tcp", 00:17:38.256 "traddr": "", 00:17:38.256 "trsvcid": 
"4421" 00:17:38.256 }, 00:17:38.256 "method": "nvmf_subsystem_remove_listener", 00:17:38.256 "req_id": 1 00:17:38.256 } 00:17:38.256 Got JSON-RPC error response 00:17:38.256 response: 00:17:38.256 { 00:17:38.256 "code": -32602, 00:17:38.256 "message": "Invalid parameters" 00:17:38.256 }' 00:17:38.256 02:37:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:17:38.256 { 00:17:38.256 "nqn": "nqn.2016-06.io.spdk:cnode", 00:17:38.256 "listen_address": { 00:17:38.256 "trtype": "tcp", 00:17:38.256 "traddr": "", 00:17:38.256 "trsvcid": "4421" 00:17:38.256 }, 00:17:38.256 "method": "nvmf_subsystem_remove_listener", 00:17:38.256 "req_id": 1 00:17:38.256 } 00:17:38.256 Got JSON-RPC error response 00:17:38.256 response: 00:17:38.256 { 00:17:38.256 "code": -32602, 00:17:38.256 "message": "Invalid parameters" 00:17:38.256 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:17:38.256 02:37:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17517 -i 0 00:17:38.820 [2024-11-17 02:37:46.975762] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17517: invalid cntlid range [0-65519] 00:17:38.820 02:37:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:17:38.820 { 00:17:38.820 "nqn": "nqn.2016-06.io.spdk:cnode17517", 00:17:38.820 "min_cntlid": 0, 00:17:38.820 "method": "nvmf_create_subsystem", 00:17:38.820 "req_id": 1 00:17:38.820 } 00:17:38.820 Got JSON-RPC error response 00:17:38.820 response: 00:17:38.820 { 00:17:38.820 "code": -32602, 00:17:38.820 "message": "Invalid cntlid range [0-65519]" 00:17:38.820 }' 00:17:38.820 02:37:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:17:38.820 { 00:17:38.820 "nqn": "nqn.2016-06.io.spdk:cnode17517", 00:17:38.820 "min_cntlid": 0, 00:17:38.820 "method": 
"nvmf_create_subsystem", 00:17:38.820 "req_id": 1 00:17:38.820 } 00:17:38.820 Got JSON-RPC error response 00:17:38.820 response: 00:17:38.820 { 00:17:38.820 "code": -32602, 00:17:38.820 "message": "Invalid cntlid range [0-65519]" 00:17:38.820 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:38.820 02:37:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4501 -i 65520 00:17:38.820 [2024-11-17 02:37:47.244634] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4501: invalid cntlid range [65520-65519] 00:17:38.820 02:37:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:17:38.820 { 00:17:38.821 "nqn": "nqn.2016-06.io.spdk:cnode4501", 00:17:38.821 "min_cntlid": 65520, 00:17:38.821 "method": "nvmf_create_subsystem", 00:17:38.821 "req_id": 1 00:17:38.821 } 00:17:38.821 Got JSON-RPC error response 00:17:38.821 response: 00:17:38.821 { 00:17:38.821 "code": -32602, 00:17:38.821 "message": "Invalid cntlid range [65520-65519]" 00:17:38.821 }' 00:17:38.821 02:37:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:17:38.821 { 00:17:38.821 "nqn": "nqn.2016-06.io.spdk:cnode4501", 00:17:38.821 "min_cntlid": 65520, 00:17:38.821 "method": "nvmf_create_subsystem", 00:17:38.821 "req_id": 1 00:17:38.821 } 00:17:38.821 Got JSON-RPC error response 00:17:38.821 response: 00:17:38.821 { 00:17:38.821 "code": -32602, 00:17:38.821 "message": "Invalid cntlid range [65520-65519]" 00:17:38.821 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:38.821 02:37:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9803 -I 0 00:17:39.093 [2024-11-17 02:37:47.501537] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: 
Subsystem nqn.2016-06.io.spdk:cnode9803: invalid cntlid range [1-0] 00:17:39.093 02:37:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:17:39.093 { 00:17:39.093 "nqn": "nqn.2016-06.io.spdk:cnode9803", 00:17:39.093 "max_cntlid": 0, 00:17:39.093 "method": "nvmf_create_subsystem", 00:17:39.093 "req_id": 1 00:17:39.093 } 00:17:39.093 Got JSON-RPC error response 00:17:39.093 response: 00:17:39.093 { 00:17:39.093 "code": -32602, 00:17:39.093 "message": "Invalid cntlid range [1-0]" 00:17:39.093 }' 00:17:39.093 02:37:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:17:39.093 { 00:17:39.093 "nqn": "nqn.2016-06.io.spdk:cnode9803", 00:17:39.093 "max_cntlid": 0, 00:17:39.093 "method": "nvmf_create_subsystem", 00:17:39.093 "req_id": 1 00:17:39.093 } 00:17:39.093 Got JSON-RPC error response 00:17:39.093 response: 00:17:39.093 { 00:17:39.093 "code": -32602, 00:17:39.093 "message": "Invalid cntlid range [1-0]" 00:17:39.093 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:39.093 02:37:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode18235 -I 65520 00:17:39.358 [2024-11-17 02:37:47.770539] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18235: invalid cntlid range [1-65520] 00:17:39.358 02:37:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:17:39.358 { 00:17:39.358 "nqn": "nqn.2016-06.io.spdk:cnode18235", 00:17:39.358 "max_cntlid": 65520, 00:17:39.358 "method": "nvmf_create_subsystem", 00:17:39.358 "req_id": 1 00:17:39.358 } 00:17:39.358 Got JSON-RPC error response 00:17:39.358 response: 00:17:39.358 { 00:17:39.358 "code": -32602, 00:17:39.358 "message": "Invalid cntlid range [1-65520]" 00:17:39.358 }' 00:17:39.358 02:37:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 
-- # [[ request: 00:17:39.358 { 00:17:39.358 "nqn": "nqn.2016-06.io.spdk:cnode18235", 00:17:39.358 "max_cntlid": 65520, 00:17:39.358 "method": "nvmf_create_subsystem", 00:17:39.358 "req_id": 1 00:17:39.358 } 00:17:39.358 Got JSON-RPC error response 00:17:39.358 response: 00:17:39.358 { 00:17:39.358 "code": -32602, 00:17:39.358 "message": "Invalid cntlid range [1-65520]" 00:17:39.358 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:39.358 02:37:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode30290 -i 6 -I 5 00:17:39.615 [2024-11-17 02:37:48.047516] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30290: invalid cntlid range [6-5] 00:17:39.615 02:37:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:17:39.615 { 00:17:39.615 "nqn": "nqn.2016-06.io.spdk:cnode30290", 00:17:39.615 "min_cntlid": 6, 00:17:39.615 "max_cntlid": 5, 00:17:39.616 "method": "nvmf_create_subsystem", 00:17:39.616 "req_id": 1 00:17:39.616 } 00:17:39.616 Got JSON-RPC error response 00:17:39.616 response: 00:17:39.616 { 00:17:39.616 "code": -32602, 00:17:39.616 "message": "Invalid cntlid range [6-5]" 00:17:39.616 }' 00:17:39.616 02:37:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:17:39.616 { 00:17:39.616 "nqn": "nqn.2016-06.io.spdk:cnode30290", 00:17:39.616 "min_cntlid": 6, 00:17:39.616 "max_cntlid": 5, 00:17:39.616 "method": "nvmf_create_subsystem", 00:17:39.616 "req_id": 1 00:17:39.616 } 00:17:39.616 Got JSON-RPC error response 00:17:39.616 response: 00:17:39.616 { 00:17:39.616 "code": -32602, 00:17:39.616 "message": "Invalid cntlid range [6-5]" 00:17:39.616 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:39.616 02:37:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:17:39.874 02:37:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:17:39.874 { 00:17:39.874 "name": "foobar", 00:17:39.874 "method": "nvmf_delete_target", 00:17:39.874 "req_id": 1 00:17:39.874 } 00:17:39.874 Got JSON-RPC error response 00:17:39.874 response: 00:17:39.874 { 00:17:39.874 "code": -32602, 00:17:39.874 "message": "The specified target doesn'\''t exist, cannot delete it." 00:17:39.874 }' 00:17:39.874 02:37:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:17:39.874 { 00:17:39.874 "name": "foobar", 00:17:39.874 "method": "nvmf_delete_target", 00:17:39.874 "req_id": 1 00:17:39.874 } 00:17:39.874 Got JSON-RPC error response 00:17:39.874 response: 00:17:39.874 { 00:17:39.874 "code": -32602, 00:17:39.874 "message": "The specified target doesn't exist, cannot delete it." 00:17:39.874 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:17:39.874 02:37:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:17:39.874 02:37:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:17:39.874 02:37:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:39.874 02:37:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:17:39.874 02:37:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:39.874 02:37:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:17:39.874 02:37:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:39.874 02:37:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:39.874 rmmod nvme_tcp 00:17:39.874 
rmmod nvme_fabrics 00:17:39.874 rmmod nvme_keyring 00:17:39.874 02:37:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:39.874 02:37:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:17:39.874 02:37:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:17:39.874 02:37:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 2946046 ']' 00:17:39.874 02:37:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 2946046 00:17:39.874 02:37:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 2946046 ']' 00:17:39.874 02:37:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 2946046 00:17:39.874 02:37:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:17:39.874 02:37:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:39.874 02:37:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2946046 00:17:39.874 02:37:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:39.874 02:37:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:39.874 02:37:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2946046' 00:17:39.874 killing process with pid 2946046 00:17:39.874 02:37:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 2946046 00:17:39.874 02:37:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 2946046 00:17:41.251 02:37:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:41.251 02:37:49 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:41.251 02:37:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:41.251 02:37:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:17:41.251 02:37:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:17:41.251 02:37:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:41.251 02:37:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:17:41.251 02:37:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:41.251 02:37:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:41.251 02:37:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:41.251 02:37:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:41.251 02:37:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:43.156 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:43.156 00:17:43.156 real 0m10.553s 00:17:43.156 user 0m26.158s 00:17:43.156 sys 0m2.692s 00:17:43.156 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:43.156 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:43.156 ************************************ 00:17:43.156 END TEST nvmf_invalid 00:17:43.156 ************************************ 00:17:43.156 02:37:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh 
--transport=tcp 00:17:43.156 02:37:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:43.156 02:37:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:43.156 02:37:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:43.156 ************************************ 00:17:43.156 START TEST nvmf_connect_stress 00:17:43.156 ************************************ 00:17:43.156 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:43.156 * Looking for test storage... 00:17:43.156 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:43.156 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:43.156 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:17:43.156 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:43.156 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:43.156 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:43.156 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:43.156 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:43.156 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:17:43.156 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:17:43.156 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 
00:17:43.156 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:17:43.156 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:17:43.156 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:17:43.156 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:17:43.156 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:43.156 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:17:43.156 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:17:43.156 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:43.156 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:43.156 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:17:43.156 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:17:43.156 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:43.156 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:17:43.156 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:17:43.156 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:17:43.156 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:17:43.156 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:43.157 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:17:43.157 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:17:43.157 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:43.157 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:43.157 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:17:43.157 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:43.157 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:43.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:43.157 --rc genhtml_branch_coverage=1 00:17:43.157 --rc genhtml_function_coverage=1 00:17:43.157 --rc genhtml_legend=1 00:17:43.157 --rc 
geninfo_all_blocks=1 00:17:43.157 --rc geninfo_unexecuted_blocks=1 00:17:43.157 00:17:43.157 ' 00:17:43.157 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:43.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:43.157 --rc genhtml_branch_coverage=1 00:17:43.157 --rc genhtml_function_coverage=1 00:17:43.157 --rc genhtml_legend=1 00:17:43.157 --rc geninfo_all_blocks=1 00:17:43.157 --rc geninfo_unexecuted_blocks=1 00:17:43.157 00:17:43.157 ' 00:17:43.157 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:43.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:43.157 --rc genhtml_branch_coverage=1 00:17:43.157 --rc genhtml_function_coverage=1 00:17:43.157 --rc genhtml_legend=1 00:17:43.157 --rc geninfo_all_blocks=1 00:17:43.157 --rc geninfo_unexecuted_blocks=1 00:17:43.157 00:17:43.157 ' 00:17:43.157 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:43.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:43.157 --rc genhtml_branch_coverage=1 00:17:43.157 --rc genhtml_function_coverage=1 00:17:43.157 --rc genhtml_legend=1 00:17:43.157 --rc geninfo_all_blocks=1 00:17:43.157 --rc geninfo_unexecuted_blocks=1 00:17:43.157 00:17:43.157 ' 00:17:43.157 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:43.157 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:17:43.157 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:43.157 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:43.157 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:17:43.157 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:43.157 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:43.157 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:43.157 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:43.157 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:43.157 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:43.157 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:43.157 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:43.157 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:43.157 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:43.157 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:43.157 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:43.157 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:43.157 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:43.157 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:17:43.157 
02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:43.157 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:43.157 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:43.157 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.157 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.157 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.157 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:17:43.157 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.157 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:17:43.157 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:43.157 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:43.157 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:43.157 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:17:43.157 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:43.157 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:43.157 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:43.157 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:43.157 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:43.157 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:43.157 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:17:43.157 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:43.157 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:43.157 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:43.157 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:43.157 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:43.157 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:43.157 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:43.157 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:43.416 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:43.416 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 
-- # gather_supported_nvmf_pci_devs 00:17:43.416 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:17:43.416 02:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:45.318 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:45.318 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:17:45.318 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:45.318 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:45.318 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:45.318 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:45.318 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:45.318 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:17:45.318 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:45.318 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:17:45.318 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:17:45.318 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:17:45.318 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:17:45.318 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:17:45.318 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:17:45.318 02:37:53 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:45.318 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:45.318 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:45.318 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:45.318 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:45.318 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:45.318 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:45.318 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:45.318 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:45.318 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:45.318 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:45.318 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:45.318 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:45.318 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:45.318 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 
]] 00:17:45.318 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:45.318 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:45.318 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:45.318 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:45.318 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:45.318 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:45.318 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:45.318 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:45.318 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:45.318 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:45.319 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:45.319 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:45.319 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:45.319 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:45.319 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:45.319 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:45.319 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:45.319 02:37:53 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:45.319 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:45.319 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:45.319 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:45.319 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:45.319 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:45.319 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:45.319 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:45.319 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:45.319 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:45.319 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:45.319 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:45.319 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:45.319 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:45.319 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:45.319 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:45.319 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:45.319 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:45.319 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:45.319 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:45.319 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:45.319 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:45.319 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:45.319 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:45.319 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:45.319 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:45.319 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:17:45.319 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:45.319 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:45.319 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:45.319 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:45.319 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:45.319 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:45.319 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:45.319 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:45.319 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:45.319 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:45.319 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:45.319 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:45.319 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:45.319 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:45.319 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:45.319 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:45.319 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:45.319 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:45.319 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:45.319 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:45.319 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:45.319 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk 
ip link set cvl_0_0 up 00:17:45.319 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:45.319 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:45.319 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:45.319 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:45.319 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:45.319 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.335 ms 00:17:45.319 00:17:45.319 --- 10.0.0.2 ping statistics --- 00:17:45.319 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:45.319 rtt min/avg/max/mdev = 0.335/0.335/0.335/0.000 ms 00:17:45.319 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:45.319 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:45.319 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:17:45.319 00:17:45.319 --- 10.0.0.1 ping statistics --- 00:17:45.319 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:45.319 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:17:45.319 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:45.319 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:17:45.319 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:45.319 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:45.319 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:45.319 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:45.319 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:45.319 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:45.319 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:45.319 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:17:45.319 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:45.319 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:45.319 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:45.319 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=2948878 00:17:45.319 02:37:53 
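The `nvmf_tcp_init` passage just traced moves the target NIC into its own network namespace so the initiator and target can exchange real TCP traffic on a single host, then verifies reachability with the two pings. The sketch below condenses that wiring; the commands are collected and printed rather than executed, since they require root and the actual `cvl_0_*` interfaces.

```shell
#!/usr/bin/env bash
# Condensed sketch of the namespace wiring from nvmf_tcp_init (@250-@291).
# Printed, not executed: the real commands need root and e810 netdevs.
set -euo pipefail

target_if=cvl_0_0 initiator_if=cvl_0_1 ns=cvl_0_0_ns_spdk
setup=(
  "ip netns add $ns"                                  # private namespace for the target
  "ip link set $target_if netns $ns"                  # move the target NIC into it
  "ip addr add 10.0.0.1/24 dev $initiator_if"         # initiator side stays in the root ns
  "ip netns exec $ns ip addr add 10.0.0.2/24 dev $target_if"
  "ip link set $initiator_if up"
  "ip netns exec $ns ip link set $target_if up"
  "iptables -I INPUT 1 -i $initiator_if -p tcp --dport 4420 -j ACCEPT"  # NVMe/TCP port
)
printf '%s\n' "${setup[@]}"
```

Once both sides are up, `ping -c 1 10.0.0.2` from the root namespace and `ip netns exec $ns ping -c 1 10.0.0.1` confirm the link, exactly as the log shows.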
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:45.319 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 2948878 00:17:45.319 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 2948878 ']' 00:17:45.319 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:45.319 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:45.319 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:45.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:45.319 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:45.319 02:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:45.578 [2024-11-17 02:37:53.826513] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:17:45.578 [2024-11-17 02:37:53.826679] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:45.578 [2024-11-17 02:37:54.015869] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:45.836 [2024-11-17 02:37:54.157073] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:17:45.836 [2024-11-17 02:37:54.157185] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:45.836 [2024-11-17 02:37:54.157208] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:45.836 [2024-11-17 02:37:54.157228] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:45.836 [2024-11-17 02:37:54.157245] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:45.836 [2024-11-17 02:37:54.159609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:45.836 [2024-11-17 02:37:54.159658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:45.836 [2024-11-17 02:37:54.159664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:46.771 02:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:46.771 02:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:17:46.771 02:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:46.771 02:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:46.771 02:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:46.771 02:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:46.771 02:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:46.771 02:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.771 02:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:17:46.771 [2024-11-17 02:37:54.915910] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:46.771 02:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.771 02:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:46.771 02:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.771 02:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:46.771 02:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.771 02:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:46.771 02:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.771 02:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:46.771 [2024-11-17 02:37:54.936190] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:46.771 02:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.771 02:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:46.771 02:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.771 02:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:46.771 NULL1 00:17:46.771 02:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
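The four `rpc_cmd` calls traced above (`target/connect_stress.sh@15-@18`) configure the freshly started target: a TCP transport, a subsystem, a listener on 10.0.0.2:4420, and the `NULL1` backing bdev. Here is that sequence with a stub `rpc_cmd` that merely records each call so the sketch runs standalone; the real helper forwards the arguments to `scripts/rpc.py` over `/var/tmp/spdk.sock`.

```shell
#!/usr/bin/env bash
# The RPC setup sequence from connect_stress.sh, with rpc_cmd stubbed out.
set -euo pipefail

calls=()
rpc_cmd() { calls+=("$*"); }   # stub; the real rpc_cmd talks to the running nvmf_tgt

rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd bdev_null_create NULL1 1000 512   # per the log: null bdev, 512-byte blocks

printf '%s\n' "${calls[@]}"
```

The `-m 10` cap on the subsystem's queue count is what the stress tool pushes against when it opens connections.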
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.771 02:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2949032 00:17:46.771 02:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:46.771 02:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:46.771 02:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:17:46.771 02:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:17:46.771 02:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:46.771 02:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:46.771 02:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:46.771 02:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:46.771 02:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:46.771 02:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:46.771 02:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:46.771 02:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:46.771 02:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # 
for i in $(seq 1 20) 00:17:46.772 02:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:46.772 02:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:46.772 02:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:46.772 02:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:46.772 02:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:46.772 02:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:46.772 02:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:46.772 02:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:46.772 02:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:46.772 02:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:46.772 02:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:46.772 02:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:46.772 02:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:46.772 02:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:46.772 02:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:46.772 02:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:46.772 02:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:17:46.772 02:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:46.772 02:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:46.772 02:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:46.772 02:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:46.772 02:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:46.772 02:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:46.772 02:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:46.772 02:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:46.772 02:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:46.772 02:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:46.772 02:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:46.772 02:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:46.772 02:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:46.772 02:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:46.772 02:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2949032 00:17:46.772 02:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:46.772 02:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
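From this point the log is dominated by one pattern: `connect_stress` runs in the background as `PERF_PID=2949032`, and the script repeatedly checks `kill -0 $PERF_PID` (signal 0 performs only an existence check, delivering nothing) before firing another batch of RPCs at the target. A runnable sketch of that supervision loop, with a short `sleep` standing in for the stress binary:

```shell
#!/usr/bin/env bash
# The kill -0 supervision loop repeated throughout the rest of the log.
set -euo pipefail

sleep 2 &                 # stand-in for: connect_stress ... -t 10
PERF_PID=$!

polls=0
while kill -0 "$PERF_PID" 2>/dev/null; do
    polls=$((polls + 1))  # the real script runs its rpc_cmd batch here
    sleep 0.5
done
wait "$PERF_PID"          # reap; a non-zero exit would fail the stress test
echo "process exited after $polls polls"
```

Using the loop condition for `kill -0` keeps `set -e` from aborting when the process finally exits between polls.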
common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.772 02:37:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:47.030 02:37:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.030 02:37:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2949032 00:17:47.030 02:37:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:47.030 02:37:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.030 02:37:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:47.288 02:37:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.288 02:37:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2949032 00:17:47.288 02:37:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:47.288 02:37:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.288 02:37:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:47.547 02:37:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.547 02:37:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2949032 00:17:47.547 02:37:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:47.547 02:37:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.547 02:37:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:48.112 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.112 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2949032 00:17:48.112 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:48.112 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.112 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:48.371 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.371 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2949032 00:17:48.371 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:48.371 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.371 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:48.629 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.629 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2949032 00:17:48.629 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:48.629 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.629 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:48.887 02:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.887 02:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2949032 00:17:48.887 02:37:57 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:48.887 02:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.887 02:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:49.145 02:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.145 02:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2949032 00:17:49.145 02:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:49.145 02:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.145 02:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:49.711 02:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.711 02:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2949032 00:17:49.711 02:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:49.711 02:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.711 02:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:49.970 02:37:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.970 02:37:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2949032 00:17:49.970 02:37:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:49.970 02:37:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.970 
02:37:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:50.228 02:37:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.228 02:37:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2949032 00:17:50.228 02:37:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:50.228 02:37:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.228 02:37:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:50.486 02:37:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.486 02:37:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2949032 00:17:50.486 02:37:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:50.486 02:37:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.486 02:37:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:51.052 02:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.052 02:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2949032 00:17:51.052 02:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:51.052 02:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.052 02:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:51.309 02:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.309 
02:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2949032 00:17:51.309 02:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:51.309 02:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.309 02:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:51.573 02:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.573 02:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2949032 00:17:51.573 02:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:51.573 02:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.573 02:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:51.891 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.891 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2949032 00:17:51.891 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:51.891 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.891 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:52.173 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.173 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2949032 00:17:52.173 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 
00:17:52.173 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.173 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:52.432 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.432 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2949032 00:17:52.432 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:52.432 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.432 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:52.997 02:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.997 02:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2949032 00:17:52.997 02:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:52.997 02:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.998 02:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:53.255 02:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.255 02:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2949032 00:17:53.255 02:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:53.255 02:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.255 02:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set 
+x 00:17:53.514 02:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.514 02:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2949032 00:17:53.514 02:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:53.514 02:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.514 02:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:53.773 02:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.773 02:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2949032 00:17:53.773 02:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:53.773 02:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.773 02:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:54.031 02:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.031 02:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2949032 00:17:54.031 02:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:54.031 02:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.031 02:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:54.596 02:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.597 02:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill 
-0 2949032 00:17:54.597 02:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:54.597 02:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.597 02:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:54.854 02:38:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.854 02:38:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2949032 00:17:54.854 02:38:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:54.854 02:38:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.854 02:38:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:55.113 02:38:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.113 02:38:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2949032 00:17:55.113 02:38:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:55.113 02:38:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.113 02:38:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:55.371 02:38:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.371 02:38:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2949032 00:17:55.371 02:38:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:55.371 02:38:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:55.371 02:38:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:55.629 02:38:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.629 02:38:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2949032 00:17:55.629 02:38:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:55.629 02:38:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.629 02:38:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:56.194 02:38:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.194 02:38:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2949032 00:17:56.194 02:38:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:56.194 02:38:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.194 02:38:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:56.452 02:38:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.452 02:38:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2949032 00:17:56.452 02:38:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:56.452 02:38:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.452 02:38:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:56.709 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:17:56.709 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2949032 00:17:56.709 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:56.709 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.709 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:56.709 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:56.967 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.967 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2949032 00:17:56.967 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2949032) - No such process 00:17:56.967 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2949032 00:17:56.967 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:56.967 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:56.968 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:17:56.968 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:56.968 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:17:56.968 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:56.968 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:17:56.968 02:38:05 
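The repeated `kill -0 2949032` probes above are the test's liveness poll: `kill -0` delivers no signal, it only reports whether the PID still exists, and the loop ends once the stress worker has exited (the "No such process" record). A minimal standalone sketch of that pattern, with an illustrative worker command and interval that are not taken from this run:

```shell
#!/usr/bin/env bash
# Liveness polling with kill -0: loop while the background worker is alive.
sleep 1 &                  # stand-in for the connect_stress worker process
worker_pid=$!

while kill -0 "$worker_pid" 2>/dev/null; do
    # the real test issues an rpc_cmd against the target on each pass
    sleep 0.2
done

wait "$worker_pid"         # reap the worker and collect its exit status
echo "worker $worker_pid is gone"
```

Once `wait` has reaped the child, `kill -0` on that PID fails, which is exactly the condition connect_stress.sh line 34 hits above.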
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:56.968 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:56.968 rmmod nvme_tcp 00:17:56.968 rmmod nvme_fabrics 00:17:56.968 rmmod nvme_keyring 00:17:57.225 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:57.225 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:17:57.225 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:17:57.225 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 2948878 ']' 00:17:57.225 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 2948878 00:17:57.225 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 2948878 ']' 00:17:57.225 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 2948878 00:17:57.225 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:17:57.225 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:57.225 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2948878 00:17:57.225 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:57.225 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:57.225 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2948878' 00:17:57.225 killing process with pid 2948878 00:17:57.225 02:38:05 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 2948878 00:17:57.225 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 2948878 00:17:58.160 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:58.160 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:58.160 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:58.160 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:17:58.160 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:17:58.160 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:58.160 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:17:58.160 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:58.160 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:58.160 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:58.160 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:58.160 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:00.695 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:00.695 00:18:00.695 real 0m17.116s 00:18:00.695 user 0m43.113s 00:18:00.695 sys 0m5.951s 00:18:00.695 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:18:00.695 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:00.695 ************************************ 00:18:00.695 END TEST nvmf_connect_stress 00:18:00.695 ************************************ 00:18:00.695 02:38:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:18:00.695 02:38:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:00.695 02:38:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:00.695 02:38:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:00.695 ************************************ 00:18:00.695 START TEST nvmf_fused_ordering 00:18:00.695 ************************************ 00:18:00.695 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:18:00.695 * Looking for test storage... 
00:18:00.695 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:00.695 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:00.695 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:18:00.695 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:00.695 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:00.695 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:00.695 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:00.695 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:00.695 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:18:00.695 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:18:00.696 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:18:00.696 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:18:00.696 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:18:00.696 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:18:00.696 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:18:00.696 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:00.696 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:18:00.696 02:38:08 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:18:00.696 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:00.696 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:00.696 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:18:00.696 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:18:00.696 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:00.696 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:18:00.696 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:18:00.696 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:18:00.696 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:18:00.696 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:00.696 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:18:00.696 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:18:00.696 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:00.696 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:00.696 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:18:00.696 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:00.696 02:38:08 
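The `cmp_versions` trace above (`lt 1.15 2` for the lcov check) splits each version string on dots and compares the fields numerically, so `1.9 < 1.15` holds, unlike a plain string compare. A hedged re-implementation of that idea; the function name `ver_lt` is ours, not SPDK's, and this is a simplified sketch rather than the full `.-:` separator handling in scripts/common.sh:

```shell
#!/usr/bin/env bash
# Numeric, field-by-field version comparison: returns 0 if $1 < $2.
ver_lt() {
    local IFS=.                 # split version strings on dots
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}   # missing fields compare as 0
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1                    # equal versions are not "less than"
}

ver_lt 1.15 2 && echo "1.15 < 2"   # the comparison exercised in the trace
```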
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:00.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.696 --rc genhtml_branch_coverage=1 00:18:00.696 --rc genhtml_function_coverage=1 00:18:00.696 --rc genhtml_legend=1 00:18:00.696 --rc geninfo_all_blocks=1 00:18:00.696 --rc geninfo_unexecuted_blocks=1 00:18:00.696 00:18:00.696 ' 00:18:00.696 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:00.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.696 --rc genhtml_branch_coverage=1 00:18:00.696 --rc genhtml_function_coverage=1 00:18:00.696 --rc genhtml_legend=1 00:18:00.696 --rc geninfo_all_blocks=1 00:18:00.696 --rc geninfo_unexecuted_blocks=1 00:18:00.696 00:18:00.696 ' 00:18:00.696 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:00.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.696 --rc genhtml_branch_coverage=1 00:18:00.696 --rc genhtml_function_coverage=1 00:18:00.696 --rc genhtml_legend=1 00:18:00.696 --rc geninfo_all_blocks=1 00:18:00.696 --rc geninfo_unexecuted_blocks=1 00:18:00.696 00:18:00.696 ' 00:18:00.696 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:00.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.696 --rc genhtml_branch_coverage=1 00:18:00.696 --rc genhtml_function_coverage=1 00:18:00.696 --rc genhtml_legend=1 00:18:00.696 --rc geninfo_all_blocks=1 00:18:00.696 --rc geninfo_unexecuted_blocks=1 00:18:00.696 00:18:00.696 ' 00:18:00.696 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:00.696 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 
00:18:00.696 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:00.696 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:00.696 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:00.696 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:00.696 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:00.696 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:00.696 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:00.696 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:00.696 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:00.696 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:00.696 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:00.696 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:00.696 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:00.696 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:00.696 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:00.696 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:00.696 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:00.696 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:18:00.696 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:00.696 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:00.696 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:00.696 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.696 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.696 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.696 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:18:00.696 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.696 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:18:00.696 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:00.696 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:00.696 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:00.696 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:00.696 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:00.696 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:00.696 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:00.696 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:00.696 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:00.696 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:00.696 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
00:18:00.696 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:00.696 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:00.696 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:00.696 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:00.696 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:00.696 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:00.696 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:00.696 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:00.697 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:00.697 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:00.697 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:18:00.697 02:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:02.607 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:02.607 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:18:02.607 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:02.607 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:02.607 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:02.607 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:02.607 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:02.607 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:18:02.607 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:02.607 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:18:02.607 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:18:02.607 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:18:02.607 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:18:02.607 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:18:02.607 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:18:02.607 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:02.607 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:02.607 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:02.607 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:02.607 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:02.607 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:02.607 02:38:10 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:02.607 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:02.607 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:02.607 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:02.607 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:02.607 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:02.607 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:02.607 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:02.607 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:02.607 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:02.607 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:02.607 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:02.607 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:02.607 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:02.607 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:02.607 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:02.607 02:38:10 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:02.607 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:02.607 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:02.607 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:02.607 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:02.607 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:02.607 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:02.607 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:02.607 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:02.607 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:02.607 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:02.607 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:02.607 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:02.607 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:02.607 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:02.607 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:02.607 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:02.607 02:38:10 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:02.607 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:02.607 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:02.607 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:02.607 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:02.607 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:02.607 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:02.607 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:02.607 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:02.607 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:02.607 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:02.607 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:02.607 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:02.607 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:02.607 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:02.607 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:02.607 Found net devices under 0000:0a:00.1: cvl_0_1 
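The device-discovery loop above (common.sh@410-429) maps each PCI address to its kernel network interface by globbing `/sys/bus/pci/devices/<bdf>/net/`. A minimal standalone sketch of that logic, using the two E810 port BDFs from this log (adjust `pci_devs` for another host; on a machine without these devices the glob simply stays unexpanded):

```shell
#!/usr/bin/env bash
# Sketch of the PCI -> netdev discovery done in nvmf/common.sh.
# Each NIC port's interface name is read from /sys/bus/pci/devices/<bdf>/net/.
set -euo pipefail

pci_devs=(0000:0a:00.0 0000:0a:00.1)   # the two E810 ports seen in the log
net_devs=()
for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    # strip the directory prefix, keeping only the interface names
    pci_net_devs=("${pci_net_devs[@]##*/}")
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done
```

On this test host the loop resolves the two ports to `cvl_0_0` and `cvl_0_1`, which then become `TCP_INTERFACE_LIST`.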
00:18:02.607 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:02.607 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:02.607 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:18:02.607 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:02.607 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:02.608 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:02.608 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:02.608 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:02.608 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:02.608 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:02.608 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:02.608 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:02.608 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:02.608 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:02.608 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:02.608 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:02.608 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:02.608 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:02.608 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:02.608 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:02.608 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:02.608 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:02.608 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:02.608 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:02.608 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:02.608 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:02.608 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:02.608 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:02.608 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:02.608 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:02.608 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:18:02.608 00:18:02.608 --- 10.0.0.2 ping statistics --- 00:18:02.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:02.608 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:18:02.608 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:02.608 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:02.608 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:18:02.608 00:18:02.608 --- 10.0.0.1 ping statistics --- 00:18:02.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:02.608 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:18:02.608 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:02.608 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:18:02.608 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:02.608 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:02.608 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:02.608 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:02.608 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:02.608 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:02.608 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:02.608 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:18:02.608 02:38:10 
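The `nvmf_tcp_init` sequence above moves one port into a network namespace, addresses both ends, opens the NVMe/TCP port in the firewall, and verifies reachability with `ping`. The same steps can be sketched as a standalone script; the interface names and 10.0.0.0/24 addresses are taken from this log, and the `run()` wrapper is a hypothetical helper that defaults to a dry run (the real commands require root):

```shell
#!/usr/bin/env bash
# Sketch of nvmf_tcp_init from nvmf/common.sh: split one NIC port into a
# target namespace and leave the other as the initiator interface.
# DRY_RUN=1 (the default here) only prints the commands; set DRY_RUN=0
# and run as root to execute them for real.
set -euo pipefail

TARGET_IF=cvl_0_0                 # moved into the target namespace
INITIATOR_IF=cvl_0_1              # stays in the default namespace
NETNS=${TARGET_IF}_ns_spdk

run() { if [ "${DRY_RUN:-1}" = 1 ]; then echo "+ $*"; else "$@"; fi; }

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NETNS"
run ip link set "$TARGET_IF" netns "$NETNS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NETNS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NETNS" ip link set "$TARGET_IF" up
run ip netns exec "$NETNS" ip link set lo up
# the harness's ipts wrapper also tags this rule with an SPDK_NVMF comment
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
# reachability check in both directions, as in common.sh@290-291
run ping -c 1 10.0.0.2
run ip netns exec "$NETNS" ping -c 1 10.0.0.1
```

The namespace split is what lets a single dual-port NIC act as both target and initiator on one machine: traffic between 10.0.0.1 and 10.0.0.2 leaves one port and arrives on the other rather than being short-circuited by the local stack.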
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:02.608 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:02.608 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:02.608 02:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=2952311 00:18:02.608 02:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:02.608 02:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 2952311 00:18:02.608 02:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 2952311 ']' 00:18:02.608 02:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:02.608 02:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:02.608 02:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:02.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:02.608 02:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:02.608 02:38:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:02.866 [2024-11-17 02:38:11.097797] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:18:02.866 [2024-11-17 02:38:11.097957] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:02.866 [2024-11-17 02:38:11.244917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:03.125 [2024-11-17 02:38:11.366747] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:03.125 [2024-11-17 02:38:11.366828] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:03.125 [2024-11-17 02:38:11.366859] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:03.125 [2024-11-17 02:38:11.366888] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:03.125 [2024-11-17 02:38:11.366912] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:03.125 [2024-11-17 02:38:11.368459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:03.691 02:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:03.691 02:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:18:03.691 02:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:03.691 02:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:03.691 02:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:03.691 02:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:03.691 02:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:03.691 02:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.691 02:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:03.691 [2024-11-17 02:38:12.146002] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:03.691 02:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.691 02:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:03.691 02:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.691 02:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:03.949 02:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.949 02:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:03.949 02:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.949 02:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:03.949 [2024-11-17 02:38:12.162307] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:03.949 02:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.949 02:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:18:03.949 02:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.949 02:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:03.949 NULL1 00:18:03.949 02:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.949 02:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:18:03.949 02:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.949 02:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:03.949 02:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.949 02:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:18:03.949 02:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
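The `rpc_cmd` calls above (fused_ordering.sh@15-20) configure the running `nvmf_tgt` over `/var/tmp/spdk.sock`: create the TCP transport, a subsystem, a listener on 10.0.0.2:4420, a null bdev, and attach it as namespace 1. A dry-run sketch of the same sequence using SPDK's `scripts/rpc.py` (the `RPC` variable defaulting to `echo` is an assumption for illustration; drop the `echo` against a live target):

```shell
#!/usr/bin/env bash
# Sketch of the fused_ordering.sh target setup, replayed via scripts/rpc.py.
# Assumes nvmf_tgt is already running and listening on /var/tmp/spdk.sock.
set -euo pipefail

RPC="${RPC:-echo scripts/rpc.py}"   # remove the echo to issue real RPCs
NQN=nqn.2016-06.io.spdk:cnode1

$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_null_create NULL1 1000 512   # 1000 MiB null bdev, 512-byte blocks
$RPC bdev_wait_for_examine
$RPC nvmf_subsystem_add_ns "$NQN" NULL1
```

Once the namespace is attached, the `fused_ordering` test binary connects with `trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420` and issues the fused compare-and-write pairs counted in the `fused_ordering(N)` output that follows.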
common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.949 02:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:03.949 02:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.949 02:38:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:18:03.949 [2024-11-17 02:38:12.231643] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:18:03.949 [2024-11-17 02:38:12.231755] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2952465 ] 00:18:04.516 Attached to nqn.2016-06.io.spdk:cnode1 00:18:04.516 Namespace ID: 1 size: 1GB 00:18:04.516 fused_ordering(0) 00:18:04.516 fused_ordering(1) 00:18:04.516 fused_ordering(2) 00:18:04.516 fused_ordering(3) 00:18:04.516 fused_ordering(4) 00:18:04.516 fused_ordering(5) 00:18:04.516 fused_ordering(6) 00:18:04.516 fused_ordering(7) 00:18:04.516 fused_ordering(8) 00:18:04.516 fused_ordering(9) 00:18:04.516 fused_ordering(10) 00:18:04.516 fused_ordering(11) 00:18:04.516 fused_ordering(12) 00:18:04.516 fused_ordering(13) 00:18:04.516 fused_ordering(14) 00:18:04.516 fused_ordering(15) 00:18:04.516 fused_ordering(16) 00:18:04.516 fused_ordering(17) 00:18:04.516 fused_ordering(18) 00:18:04.516 fused_ordering(19) 00:18:04.516 fused_ordering(20) 00:18:04.516 fused_ordering(21) 00:18:04.516 fused_ordering(22) 00:18:04.516 fused_ordering(23) 00:18:04.516 fused_ordering(24) 00:18:04.516 fused_ordering(25) 00:18:04.516 fused_ordering(26) 00:18:04.516 fused_ordering(27) 00:18:04.516 
fused_ordering(28) 00:18:04.516 fused_ordering(29) 00:18:04.516 fused_ordering(30) 00:18:04.516 fused_ordering(31) 00:18:04.516 fused_ordering(32) 00:18:04.516 fused_ordering(33) 00:18:04.516 fused_ordering(34) 00:18:04.516 fused_ordering(35) 00:18:04.516 fused_ordering(36) 00:18:04.516 fused_ordering(37) 00:18:04.516 fused_ordering(38) 00:18:04.516 fused_ordering(39) 00:18:04.516 fused_ordering(40) 00:18:04.516 fused_ordering(41) 00:18:04.516 fused_ordering(42) 00:18:04.516 fused_ordering(43) 00:18:04.516 fused_ordering(44) 00:18:04.516 fused_ordering(45) 00:18:04.516 fused_ordering(46) 00:18:04.516 fused_ordering(47) 00:18:04.516 fused_ordering(48) 00:18:04.516 fused_ordering(49) 00:18:04.516 fused_ordering(50) 00:18:04.516 fused_ordering(51) 00:18:04.516 fused_ordering(52) 00:18:04.516 fused_ordering(53) 00:18:04.516 fused_ordering(54) 00:18:04.516 fused_ordering(55) 00:18:04.516 fused_ordering(56) 00:18:04.516 fused_ordering(57) 00:18:04.516 fused_ordering(58) 00:18:04.516 fused_ordering(59) 00:18:04.516 fused_ordering(60) 00:18:04.516 fused_ordering(61) 00:18:04.516 fused_ordering(62) 00:18:04.516 fused_ordering(63) 00:18:04.516 fused_ordering(64) 00:18:04.516 fused_ordering(65) 00:18:04.516 fused_ordering(66) 00:18:04.516 fused_ordering(67) 00:18:04.516 fused_ordering(68) 00:18:04.516 fused_ordering(69) 00:18:04.516 fused_ordering(70) 00:18:04.516 fused_ordering(71) 00:18:04.516 fused_ordering(72) 00:18:04.516 fused_ordering(73) 00:18:04.516 fused_ordering(74) 00:18:04.516 fused_ordering(75) 00:18:04.516 fused_ordering(76) 00:18:04.516 fused_ordering(77) 00:18:04.516 fused_ordering(78) 00:18:04.516 fused_ordering(79) 00:18:04.516 fused_ordering(80) 00:18:04.516 fused_ordering(81) 00:18:04.516 fused_ordering(82) 00:18:04.516 fused_ordering(83) 00:18:04.516 fused_ordering(84) 00:18:04.516 fused_ordering(85) 00:18:04.516 fused_ordering(86) 00:18:04.516 fused_ordering(87) 00:18:04.516 fused_ordering(88) 00:18:04.516 fused_ordering(89) 00:18:04.516 
fused_ordering(90) 00:18:04.516 fused_ordering(91) 00:18:04.516 fused_ordering(92) 00:18:04.516 fused_ordering(93) 00:18:04.516 fused_ordering(94) 00:18:04.516 fused_ordering(95) 00:18:04.516 fused_ordering(96) 00:18:04.516 fused_ordering(97) 00:18:04.516 fused_ordering(98) 00:18:04.516 fused_ordering(99) 00:18:04.516 fused_ordering(100) 00:18:04.516 fused_ordering(101) 00:18:04.516 fused_ordering(102) 00:18:04.516 fused_ordering(103) 00:18:04.516 fused_ordering(104) 00:18:04.516 fused_ordering(105) 00:18:04.516 fused_ordering(106) 00:18:04.516 fused_ordering(107) 00:18:04.516 fused_ordering(108) 00:18:04.516 fused_ordering(109) 00:18:04.516 fused_ordering(110) 00:18:04.516 fused_ordering(111) 00:18:04.516 fused_ordering(112) 00:18:04.516 fused_ordering(113) 00:18:04.516 fused_ordering(114) 00:18:04.516 fused_ordering(115) 00:18:04.516 fused_ordering(116) 00:18:04.516 fused_ordering(117) 00:18:04.516 fused_ordering(118) 00:18:04.516 fused_ordering(119) 00:18:04.516 fused_ordering(120) 00:18:04.516 fused_ordering(121) 00:18:04.516 fused_ordering(122) 00:18:04.516 fused_ordering(123) 00:18:04.516 fused_ordering(124) 00:18:04.516 fused_ordering(125) 00:18:04.516 fused_ordering(126) 00:18:04.516 fused_ordering(127) 00:18:04.516 fused_ordering(128) 00:18:04.516 fused_ordering(129) 00:18:04.516 fused_ordering(130) 00:18:04.516 fused_ordering(131) 00:18:04.516 fused_ordering(132) 00:18:04.516 fused_ordering(133) 00:18:04.516 fused_ordering(134) 00:18:04.516 fused_ordering(135) 00:18:04.516 fused_ordering(136) 00:18:04.516 fused_ordering(137) 00:18:04.516 fused_ordering(138) 00:18:04.516 fused_ordering(139) 00:18:04.516 fused_ordering(140) 00:18:04.516 fused_ordering(141) 00:18:04.516 fused_ordering(142) 00:18:04.516 fused_ordering(143) 00:18:04.516 fused_ordering(144) 00:18:04.516 fused_ordering(145) 00:18:04.516 fused_ordering(146) 00:18:04.516 fused_ordering(147) 00:18:04.516 fused_ordering(148) 00:18:04.516 fused_ordering(149) 00:18:04.516 fused_ordering(150) 
00:18:04.516 fused_ordering(151) 00:18:04.516 fused_ordering(152) 00:18:04.516 fused_ordering(153) 00:18:04.516 fused_ordering(154) 00:18:04.516 fused_ordering(155) 00:18:04.517 fused_ordering(156) 00:18:04.517 fused_ordering(157) 00:18:04.517 fused_ordering(158) 00:18:04.517 fused_ordering(159) 00:18:04.517 fused_ordering(160) 00:18:04.517 fused_ordering(161) 00:18:04.517 fused_ordering(162) 00:18:04.517 fused_ordering(163) 00:18:04.517 fused_ordering(164) 00:18:04.517 fused_ordering(165) 00:18:04.517 fused_ordering(166) 00:18:04.517 fused_ordering(167) 00:18:04.517 fused_ordering(168) 00:18:04.517 fused_ordering(169) 00:18:04.517 fused_ordering(170) 00:18:04.517 fused_ordering(171) 00:18:04.517 fused_ordering(172) 00:18:04.517 fused_ordering(173) 00:18:04.517 fused_ordering(174) 00:18:04.517 fused_ordering(175) 00:18:04.517 fused_ordering(176) 00:18:04.517 fused_ordering(177) 00:18:04.517 fused_ordering(178) 00:18:04.517 fused_ordering(179) 00:18:04.517 fused_ordering(180) 00:18:04.517 fused_ordering(181) 00:18:04.517 fused_ordering(182) 00:18:04.517 fused_ordering(183) 00:18:04.517 fused_ordering(184) 00:18:04.517 fused_ordering(185) 00:18:04.517 fused_ordering(186) 00:18:04.517 fused_ordering(187) 00:18:04.517 fused_ordering(188) 00:18:04.517 fused_ordering(189) 00:18:04.517 fused_ordering(190) 00:18:04.517 fused_ordering(191) 00:18:04.517 fused_ordering(192) 00:18:04.517 fused_ordering(193) 00:18:04.517 fused_ordering(194) 00:18:04.517 fused_ordering(195) 00:18:04.517 fused_ordering(196) 00:18:04.517 fused_ordering(197) 00:18:04.517 fused_ordering(198) 00:18:04.517 fused_ordering(199) 00:18:04.517 fused_ordering(200) 00:18:04.517 fused_ordering(201) 00:18:04.517 fused_ordering(202) 00:18:04.517 fused_ordering(203) 00:18:04.517 fused_ordering(204) 00:18:04.517 fused_ordering(205) 00:18:05.083 fused_ordering(206) 00:18:05.083 fused_ordering(207) 00:18:05.083 fused_ordering(208) 00:18:05.083 fused_ordering(209) 00:18:05.083 fused_ordering(210) 00:18:05.083 
fused_ordering(211) 00:18:05.083 fused_ordering(212) 00:18:05.083 fused_ordering(213) 00:18:05.083 fused_ordering(214) 00:18:05.083 fused_ordering(215) 00:18:05.083 fused_ordering(216) 00:18:05.083 fused_ordering(217) 00:18:05.083 fused_ordering(218) 00:18:05.083 fused_ordering(219) 00:18:05.083 fused_ordering(220) 00:18:05.083 fused_ordering(221) 00:18:05.083 fused_ordering(222) 00:18:05.083 fused_ordering(223) 00:18:05.083 fused_ordering(224) 00:18:05.083 fused_ordering(225) 00:18:05.083 fused_ordering(226) 00:18:05.083 fused_ordering(227) 00:18:05.083 fused_ordering(228) 00:18:05.083 fused_ordering(229) 00:18:05.083 fused_ordering(230) 00:18:05.083 fused_ordering(231) 00:18:05.083 fused_ordering(232) 00:18:05.083 fused_ordering(233) 00:18:05.083 fused_ordering(234) 00:18:05.083 fused_ordering(235) 00:18:05.083 fused_ordering(236) 00:18:05.083 fused_ordering(237) 00:18:05.083 fused_ordering(238) 00:18:05.083 fused_ordering(239) 00:18:05.083 fused_ordering(240) 00:18:05.083 fused_ordering(241) 00:18:05.083 fused_ordering(242) 00:18:05.083 fused_ordering(243) 00:18:05.083 fused_ordering(244) 00:18:05.083 fused_ordering(245) 00:18:05.083 fused_ordering(246) 00:18:05.083 fused_ordering(247) 00:18:05.083 fused_ordering(248) 00:18:05.083 fused_ordering(249) 00:18:05.083 fused_ordering(250) 00:18:05.083 fused_ordering(251) 00:18:05.083 fused_ordering(252) 00:18:05.083 fused_ordering(253) 00:18:05.083 fused_ordering(254) 00:18:05.083 fused_ordering(255) 00:18:05.083 fused_ordering(256) 00:18:05.083 fused_ordering(257) 00:18:05.083 fused_ordering(258) 00:18:05.083 fused_ordering(259) 00:18:05.083 fused_ordering(260) 00:18:05.083 fused_ordering(261) 00:18:05.083 fused_ordering(262) 00:18:05.083 fused_ordering(263) 00:18:05.083 fused_ordering(264) 00:18:05.083 fused_ordering(265) 00:18:05.083 fused_ordering(266) 00:18:05.083 fused_ordering(267) 00:18:05.083 fused_ordering(268) 00:18:05.083 fused_ordering(269) 00:18:05.083 fused_ordering(270) 00:18:05.083 fused_ordering(271) 
00:18:05.083 fused_ordering(272) 00:18:05.083 fused_ordering(273) 00:18:05.083 fused_ordering(274) 00:18:05.083 fused_ordering(275) 00:18:05.083 fused_ordering(276) 00:18:05.083 fused_ordering(277) 00:18:05.083 fused_ordering(278) 00:18:05.083 fused_ordering(279) 00:18:05.083 fused_ordering(280) 00:18:05.083 fused_ordering(281) 00:18:05.083 fused_ordering(282) 00:18:05.083 fused_ordering(283) 00:18:05.083 fused_ordering(284) 00:18:05.083 fused_ordering(285) 00:18:05.083 fused_ordering(286) 00:18:05.083 fused_ordering(287) 00:18:05.083 fused_ordering(288) 00:18:05.083 fused_ordering(289) 00:18:05.083 fused_ordering(290) 00:18:05.083 fused_ordering(291) 00:18:05.083 fused_ordering(292) 00:18:05.083 fused_ordering(293) 00:18:05.083 fused_ordering(294) 00:18:05.083 fused_ordering(295) 00:18:05.083 fused_ordering(296) 00:18:05.083 fused_ordering(297) 00:18:05.083 fused_ordering(298) 00:18:05.083 fused_ordering(299) 00:18:05.083 fused_ordering(300) 00:18:05.083 fused_ordering(301) 00:18:05.083 fused_ordering(302) 00:18:05.083 fused_ordering(303) 00:18:05.083 fused_ordering(304) 00:18:05.083 fused_ordering(305) 00:18:05.083 fused_ordering(306) 00:18:05.083 fused_ordering(307) 00:18:05.083 fused_ordering(308) 00:18:05.083 fused_ordering(309) 00:18:05.083 fused_ordering(310) 00:18:05.083 fused_ordering(311) 00:18:05.083 fused_ordering(312) 00:18:05.083 fused_ordering(313) 00:18:05.083 fused_ordering(314) 00:18:05.083 fused_ordering(315) 00:18:05.083 fused_ordering(316) 00:18:05.083 fused_ordering(317) 00:18:05.083 fused_ordering(318) 00:18:05.083 fused_ordering(319) 00:18:05.083 fused_ordering(320) 00:18:05.083 fused_ordering(321) 00:18:05.083 fused_ordering(322) 00:18:05.083 fused_ordering(323) 00:18:05.083 fused_ordering(324) 00:18:05.083 fused_ordering(325) 00:18:05.083 fused_ordering(326) 00:18:05.083 fused_ordering(327) 00:18:05.083 fused_ordering(328) 00:18:05.083 fused_ordering(329) 00:18:05.083 fused_ordering(330) 00:18:05.083 fused_ordering(331) 00:18:05.083 
fused_ordering(332) 00:18:05.083 fused_ordering(333) 00:18:05.083 fused_ordering(334) 00:18:05.083 fused_ordering(335) 00:18:05.083 fused_ordering(336) 00:18:05.083 fused_ordering(337) 00:18:05.083 fused_ordering(338) 00:18:05.083 fused_ordering(339) 00:18:05.083 fused_ordering(340) 00:18:05.084 fused_ordering(341) 00:18:05.084 fused_ordering(342) 00:18:05.084 fused_ordering(343) 00:18:05.084 fused_ordering(344) 00:18:05.084 fused_ordering(345) 00:18:05.084 fused_ordering(346) 00:18:05.084 fused_ordering(347) 00:18:05.084 fused_ordering(348) 00:18:05.084 fused_ordering(349) 00:18:05.084 fused_ordering(350) 00:18:05.084 fused_ordering(351) 00:18:05.084 fused_ordering(352) 00:18:05.084 fused_ordering(353) 00:18:05.084 fused_ordering(354) 00:18:05.084 fused_ordering(355) 00:18:05.084 fused_ordering(356) 00:18:05.084 fused_ordering(357) 00:18:05.084 fused_ordering(358) 00:18:05.084 fused_ordering(359) 00:18:05.084 fused_ordering(360) 00:18:05.084 fused_ordering(361) 00:18:05.084 fused_ordering(362) 00:18:05.084 fused_ordering(363) 00:18:05.084 fused_ordering(364) 00:18:05.084 fused_ordering(365) 00:18:05.084 fused_ordering(366) 00:18:05.084 fused_ordering(367) 00:18:05.084 fused_ordering(368) 00:18:05.084 fused_ordering(369) 00:18:05.084 fused_ordering(370) 00:18:05.084 fused_ordering(371) 00:18:05.084 fused_ordering(372) 00:18:05.084 fused_ordering(373) 00:18:05.084 fused_ordering(374) 00:18:05.084 fused_ordering(375) 00:18:05.084 fused_ordering(376) 00:18:05.084 fused_ordering(377) 00:18:05.084 fused_ordering(378) 00:18:05.084 fused_ordering(379) 00:18:05.084 fused_ordering(380) 00:18:05.084 fused_ordering(381) 00:18:05.084 fused_ordering(382) 00:18:05.084 fused_ordering(383) 00:18:05.084 fused_ordering(384) 00:18:05.084 fused_ordering(385) 00:18:05.084 fused_ordering(386) 00:18:05.084 fused_ordering(387) 00:18:05.084 fused_ordering(388) 00:18:05.084 fused_ordering(389) 00:18:05.084 fused_ordering(390) 00:18:05.084 fused_ordering(391) 00:18:05.084 fused_ordering(392) 
00:18:05.084 fused_ordering(393) 00:18:05.084 fused_ordering(394) 00:18:05.084 fused_ordering(395) 00:18:05.084 fused_ordering(396) 00:18:05.084 fused_ordering(397) 00:18:05.084 fused_ordering(398) 00:18:05.084 fused_ordering(399) 00:18:05.084 fused_ordering(400) 00:18:05.084 fused_ordering(401) 00:18:05.084 fused_ordering(402) 00:18:05.084 fused_ordering(403) 00:18:05.084 fused_ordering(404) 00:18:05.084 fused_ordering(405) 00:18:05.084 fused_ordering(406) 00:18:05.084 fused_ordering(407) 00:18:05.084 fused_ordering(408) 00:18:05.084 fused_ordering(409) 00:18:05.084 fused_ordering(410) 00:18:05.650 fused_ordering(411) 00:18:05.650 fused_ordering(412) 00:18:05.650 fused_ordering(413) 00:18:05.650 fused_ordering(414) 00:18:05.650 fused_ordering(415) 00:18:05.650 fused_ordering(416) 00:18:05.650 fused_ordering(417) 00:18:05.650 fused_ordering(418) 00:18:05.650 fused_ordering(419) 00:18:05.650 fused_ordering(420) 00:18:05.650 fused_ordering(421) 00:18:05.650 fused_ordering(422) 00:18:05.650 fused_ordering(423) 00:18:05.650 fused_ordering(424) 00:18:05.650 fused_ordering(425) 00:18:05.650 fused_ordering(426) 00:18:05.650 fused_ordering(427) 00:18:05.650 fused_ordering(428) 00:18:05.650 fused_ordering(429) 00:18:05.650 fused_ordering(430) 00:18:05.650 fused_ordering(431) 00:18:05.650 fused_ordering(432) 00:18:05.650 fused_ordering(433) 00:18:05.650 fused_ordering(434) 00:18:05.650 fused_ordering(435) 00:18:05.650 fused_ordering(436) 00:18:05.650 fused_ordering(437) 00:18:05.650 fused_ordering(438) 00:18:05.650 fused_ordering(439) 00:18:05.650 fused_ordering(440) 00:18:05.650 fused_ordering(441) 00:18:05.650 fused_ordering(442) 00:18:05.650 fused_ordering(443) 00:18:05.650 fused_ordering(444) 00:18:05.650 fused_ordering(445) 00:18:05.650 fused_ordering(446) 00:18:05.650 fused_ordering(447) 00:18:05.650 fused_ordering(448) 00:18:05.650 fused_ordering(449) 00:18:05.650 fused_ordering(450) 00:18:05.650 fused_ordering(451) 00:18:05.650 fused_ordering(452) 00:18:05.650 
00:18:05.650 fused_ordering(453) ... 00:18:07.154 fused_ordering(997) [repetitive per-iteration counter output, fused_ordering(453) through fused_ordering(997), elided]
00:18:07.154 fused_ordering(998) 00:18:07.154 fused_ordering(999) 00:18:07.154 fused_ordering(1000) 00:18:07.154 fused_ordering(1001) 00:18:07.154 fused_ordering(1002) 00:18:07.154 fused_ordering(1003) 00:18:07.154 fused_ordering(1004) 00:18:07.154 fused_ordering(1005) 00:18:07.154 fused_ordering(1006) 00:18:07.154 fused_ordering(1007) 00:18:07.154 fused_ordering(1008) 00:18:07.154 fused_ordering(1009) 00:18:07.154 fused_ordering(1010) 00:18:07.154 fused_ordering(1011) 00:18:07.154 fused_ordering(1012) 00:18:07.154 fused_ordering(1013) 00:18:07.154 fused_ordering(1014) 00:18:07.154 fused_ordering(1015) 00:18:07.154 fused_ordering(1016) 00:18:07.154 fused_ordering(1017) 00:18:07.154 fused_ordering(1018) 00:18:07.154 fused_ordering(1019) 00:18:07.154 fused_ordering(1020) 00:18:07.154 fused_ordering(1021) 00:18:07.154 fused_ordering(1022) 00:18:07.154 fused_ordering(1023) 00:18:07.154 02:38:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:18:07.154 02:38:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:18:07.154 02:38:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:07.154 02:38:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:18:07.154 02:38:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:07.154 02:38:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:18:07.154 02:38:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:07.154 02:38:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:07.154 rmmod nvme_tcp 00:18:07.154 rmmod nvme_fabrics 00:18:07.154 rmmod nvme_keyring 00:18:07.413 02:38:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:18:07.413 02:38:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:18:07.413 02:38:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:18:07.413 02:38:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 2952311 ']' 00:18:07.413 02:38:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 2952311 00:18:07.413 02:38:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 2952311 ']' 00:18:07.413 02:38:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 2952311 00:18:07.413 02:38:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:18:07.413 02:38:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:07.413 02:38:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2952311 00:18:07.413 02:38:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:07.413 02:38:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:07.413 02:38:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2952311' 00:18:07.413 killing process with pid 2952311 00:18:07.413 02:38:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 2952311 00:18:07.413 02:38:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 2952311 00:18:08.789 02:38:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:08.789 02:38:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == 
\t\c\p ]] 00:18:08.789 02:38:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:08.789 02:38:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:18:08.789 02:38:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:18:08.789 02:38:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:08.789 02:38:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:18:08.789 02:38:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:08.789 02:38:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:08.789 02:38:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:08.789 02:38:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:08.789 02:38:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:10.694 02:38:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:10.694 00:18:10.694 real 0m10.255s 00:18:10.694 user 0m8.722s 00:18:10.694 sys 0m3.561s 00:18:10.694 02:38:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:10.694 02:38:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:10.694 ************************************ 00:18:10.694 END TEST nvmf_fused_ordering 00:18:10.694 ************************************ 00:18:10.694 02:38:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:18:10.694 02:38:18 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:10.694 02:38:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:10.694 02:38:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:10.694 ************************************ 00:18:10.694 START TEST nvmf_ns_masking 00:18:10.694 ************************************ 00:18:10.694 02:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:18:10.694 * Looking for test storage... 00:18:10.694 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:10.694 02:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:10.694 02:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:18:10.694 02:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:10.694 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:10.694 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:10.694 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:10.694 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:10.694 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:18:10.694 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:18:10.694 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:18:10.694 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:18:10.694 02:38:19 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:18:10.694 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:18:10.694 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:18:10.694 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:10.694 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:18:10.694 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:18:10.694 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:10.694 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:10.694 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:18:10.694 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:18:10.694 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:10.694 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:18:10.694 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:18:10.694 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:18:10.694 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:18:10.694 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:10.694 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:18:10.694 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:18:10.694 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:10.694 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:10.694 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:18:10.694 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:10.694 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:10.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:10.694 --rc genhtml_branch_coverage=1 00:18:10.694 --rc genhtml_function_coverage=1 00:18:10.694 --rc genhtml_legend=1 00:18:10.694 --rc geninfo_all_blocks=1 00:18:10.694 --rc geninfo_unexecuted_blocks=1 00:18:10.694 00:18:10.694 ' 00:18:10.694 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:10.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:10.694 --rc genhtml_branch_coverage=1 00:18:10.694 --rc genhtml_function_coverage=1 00:18:10.694 --rc genhtml_legend=1 00:18:10.694 --rc geninfo_all_blocks=1 00:18:10.694 --rc geninfo_unexecuted_blocks=1 00:18:10.694 00:18:10.694 ' 00:18:10.694 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:10.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:10.694 --rc genhtml_branch_coverage=1 00:18:10.694 --rc genhtml_function_coverage=1 00:18:10.694 --rc genhtml_legend=1 00:18:10.694 --rc geninfo_all_blocks=1 00:18:10.694 --rc geninfo_unexecuted_blocks=1 00:18:10.694 00:18:10.694 ' 00:18:10.694 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:10.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:10.694 --rc genhtml_branch_coverage=1 00:18:10.694 --rc 
genhtml_function_coverage=1 00:18:10.694 --rc genhtml_legend=1 00:18:10.694 --rc geninfo_all_blocks=1 00:18:10.694 --rc geninfo_unexecuted_blocks=1 00:18:10.694 00:18:10.694 ' 00:18:10.694 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:10.694 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:18:10.694 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:10.694 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:10.694 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:10.694 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:10.694 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:10.694 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:10.694 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:10.694 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:10.694 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:10.694 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:10.694 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:10.694 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:10.694 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:10.694 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:10.694 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:10.695 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:10.695 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:10.695 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:18:10.695 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:10.695 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:10.695 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:10.695 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:10.695 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:10.695 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:10.695 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:18:10.695 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:10.695 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:18:10.695 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:10.695 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:10.695 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:10.695 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:10.695 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:10.695 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:10.695 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:10.695 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:10.695 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:10.695 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:10.695 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:10.695 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:18:10.695 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:18:10.695 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:18:10.695 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=2f975a4f-cda3-4c63-b704-a2a05f574c17 00:18:10.695 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:18:10.695 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=92123c2e-4b88-4100-9743-3a9fa34e61b8 00:18:10.695 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:18:10.695 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:18:10.695 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:18:10.695 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:18:10.695 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=a7ddb9e9-da89-4380-8ae6-eb79bd752863 00:18:10.695 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:18:10.695 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:10.695 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:10.695 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:10.695 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g 
is_hw=no 00:18:10.695 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:10.695 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:10.695 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:10.695 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:10.695 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:10.695 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:10.695 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:18:10.695 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:13.237 02:38:21 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:13.237 02:38:21 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:13.237 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:13.237 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: 
cvl_0_0' 00:18:13.237 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:13.237 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:13.237 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:13.237 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:18:13.237 00:18:13.237 --- 10.0.0.2 ping statistics --- 00:18:13.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:13.237 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:18:13.237 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:13.237 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:13.237 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:18:13.237 00:18:13.238 --- 10.0.0.1 ping statistics --- 00:18:13.238 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:13.238 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:18:13.238 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:13.238 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:18:13.238 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:13.238 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:13.238 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:13.238 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:13.238 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:13.238 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:13.238 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:13.238 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:18:13.238 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:13.238 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:13.238 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:13.238 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=2954935 00:18:13.238 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:13.238 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 2954935 00:18:13.238 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2954935 ']' 00:18:13.238 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:13.238 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:13.238 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:13.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:13.238 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:13.238 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:13.238 [2024-11-17 02:38:21.411497] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:18:13.238 [2024-11-17 02:38:21.411645] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:13.238 [2024-11-17 02:38:21.561640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.496 [2024-11-17 02:38:21.699679] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:13.496 [2024-11-17 02:38:21.699742] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:13.496 [2024-11-17 02:38:21.699767] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:13.496 [2024-11-17 02:38:21.699790] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:13.496 [2024-11-17 02:38:21.699810] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:13.496 [2024-11-17 02:38:21.701321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:14.061 02:38:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:14.061 02:38:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:18:14.061 02:38:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:14.061 02:38:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:14.061 02:38:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:14.061 02:38:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:14.061 02:38:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:14.319 [2024-11-17 02:38:22.670540] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:14.319 02:38:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:18:14.319 02:38:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:18:14.319 02:38:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 
00:18:14.884 Malloc1 00:18:14.884 02:38:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:15.142 Malloc2 00:18:15.142 02:38:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:15.400 02:38:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:18:15.658 02:38:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:15.916 [2024-11-17 02:38:24.245538] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:15.916 02:38:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:18:15.916 02:38:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a7ddb9e9-da89-4380-8ae6-eb79bd752863 -a 10.0.0.2 -s 4420 -i 4 00:18:16.174 02:38:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:18:16.174 02:38:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:18:16.174 02:38:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:16.174 02:38:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:16.174 02:38:24 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:18:18.073 02:38:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:18.073 02:38:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:18.073 02:38:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:18.073 02:38:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:18.073 02:38:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:18.073 02:38:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:18:18.073 02:38:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:18.073 02:38:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:18.331 02:38:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:18.331 02:38:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:18.331 02:38:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:18:18.331 02:38:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:18.331 02:38:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:18.331 [ 0]:0x1 00:18:18.331 02:38:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:18.332 02:38:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:18.332 
02:38:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=325024e3081f4f93b067f18e8b01ce8d 00:18:18.332 02:38:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 325024e3081f4f93b067f18e8b01ce8d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:18.332 02:38:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:18:18.590 02:38:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:18:18.590 02:38:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:18.590 02:38:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:18.590 [ 0]:0x1 00:18:18.590 02:38:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:18.590 02:38:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:18.590 02:38:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=325024e3081f4f93b067f18e8b01ce8d 00:18:18.590 02:38:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 325024e3081f4f93b067f18e8b01ce8d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:18.590 02:38:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:18:18.590 02:38:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:18.590 02:38:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:18.590 [ 1]:0x2 00:18:18.590 02:38:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 
00:18:18.590 02:38:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:18.590 02:38:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=28d74bfed1bd45d9941f52c24e641f2b 00:18:18.590 02:38:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 28d74bfed1bd45d9941f52c24e641f2b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:18.590 02:38:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:18:18.590 02:38:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:18.848 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:18.848 02:38:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:19.106 02:38:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:18:19.364 02:38:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:18:19.364 02:38:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a7ddb9e9-da89-4380-8ae6-eb79bd752863 -a 10.0.0.2 -s 4420 -i 4 00:18:19.623 02:38:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:18:19.623 02:38:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:18:19.623 02:38:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:19.623 02:38:27 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:18:19.623 02:38:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:18:19.623 02:38:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:18:21.522 02:38:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:21.522 02:38:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:21.522 02:38:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:21.522 02:38:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:21.522 02:38:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:21.522 02:38:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:18:21.522 02:38:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:21.522 02:38:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:21.522 02:38:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:21.522 02:38:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:21.522 02:38:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:18:21.522 02:38:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:21.522 02:38:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 
00:18:21.522 02:38:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:18:21.522 02:38:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:21.522 02:38:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:18:21.522 02:38:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:21.522 02:38:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:18:21.522 02:38:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:21.522 02:38:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:21.522 02:38:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:21.522 02:38:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:21.781 02:38:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:21.781 02:38:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:21.781 02:38:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:21.781 02:38:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:21.781 02:38:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:21.781 02:38:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:21.781 02:38:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- 
# ns_is_visible 0x2 00:18:21.781 02:38:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:21.781 02:38:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:21.781 [ 0]:0x2 00:18:21.781 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:21.781 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:21.781 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=28d74bfed1bd45d9941f52c24e641f2b 00:18:21.781 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 28d74bfed1bd45d9941f52c24e641f2b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:21.781 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:22.039 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:18:22.039 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:22.039 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:22.039 [ 0]:0x1 00:18:22.039 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:22.039 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:22.039 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=325024e3081f4f93b067f18e8b01ce8d 00:18:22.039 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 325024e3081f4f93b067f18e8b01ce8d != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:22.039 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:18:22.039 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:22.039 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:22.039 [ 1]:0x2 00:18:22.039 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:22.039 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:22.039 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=28d74bfed1bd45d9941f52c24e641f2b 00:18:22.039 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 28d74bfed1bd45d9941f52c24e641f2b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:22.039 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:22.606 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:18:22.606 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:22.606 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:18:22.606 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:18:22.606 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:22.606 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t 
ns_is_visible 00:18:22.606 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:22.606 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:18:22.606 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:22.606 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:22.606 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:22.606 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:22.606 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:22.606 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:22.606 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:22.606 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:22.606 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:22.606 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:22.606 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:18:22.606 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:22.606 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:22.606 [ 0]:0x2 00:18:22.606 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:22.606 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:22.606 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=28d74bfed1bd45d9941f52c24e641f2b 00:18:22.606 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 28d74bfed1bd45d9941f52c24e641f2b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:22.606 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:18:22.606 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:22.606 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:22.606 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:22.864 02:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:18:22.864 02:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a7ddb9e9-da89-4380-8ae6-eb79bd752863 -a 10.0.0.2 -s 4420 -i 4 00:18:23.123 02:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:18:23.123 02:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:18:23.123 02:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:23.123 02:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:18:23.123 02:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:18:23.123 02:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:18:25.023 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:25.023 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:25.023 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:25.023 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:18:25.023 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:25.023 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:18:25.023 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:25.023 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:25.023 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:25.023 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:25.023 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:18:25.023 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:25.023 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:25.023 [ 0]:0x1 00:18:25.023 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:25.023 02:38:33 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:25.023 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=325024e3081f4f93b067f18e8b01ce8d 00:18:25.023 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 325024e3081f4f93b067f18e8b01ce8d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:25.023 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:18:25.023 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:25.023 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:25.023 [ 1]:0x2 00:18:25.023 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:25.023 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:25.309 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=28d74bfed1bd45d9941f52c24e641f2b 00:18:25.309 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 28d74bfed1bd45d9941f52c24e641f2b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:25.309 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:25.592 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:18:25.592 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:25.592 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:18:25.592 
02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:18:25.592 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:25.592 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:18:25.592 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:25.592 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:18:25.592 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:25.592 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:25.592 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:25.592 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:25.592 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:25.592 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:25.592 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:25.592 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:25.592 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:25.592 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:25.592 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # 
ns_is_visible 0x2 00:18:25.592 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:25.592 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:25.592 [ 0]:0x2 00:18:25.592 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:25.592 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:25.592 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=28d74bfed1bd45d9941f52c24e641f2b 00:18:25.592 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 28d74bfed1bd45d9941f52c24e641f2b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:25.592 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:25.592 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:25.592 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:25.592 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:25.592 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:25.592 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:25.592 02:38:33 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:25.592 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:25.592 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:25.592 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:25.592 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:25.592 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:25.851 [2024-11-17 02:38:34.174741] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:18:25.851 request: 00:18:25.851 { 00:18:25.851 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:25.851 "nsid": 2, 00:18:25.851 "host": "nqn.2016-06.io.spdk:host1", 00:18:25.851 "method": "nvmf_ns_remove_host", 00:18:25.851 "req_id": 1 00:18:25.851 } 00:18:25.851 Got JSON-RPC error response 00:18:25.851 response: 00:18:25.851 { 00:18:25.851 "code": -32602, 00:18:25.851 "message": "Invalid parameters" 00:18:25.851 } 00:18:25.851 02:38:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:25.851 02:38:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:25.851 02:38:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:25.851 02:38:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:25.851 02:38:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:18:25.851 02:38:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:25.851 02:38:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:18:25.851 02:38:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:18:25.851 02:38:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:25.851 02:38:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:18:25.851 02:38:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:25.851 02:38:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:18:25.851 02:38:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:25.851 02:38:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:25.851 02:38:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:25.851 02:38:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:25.851 02:38:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:25.851 02:38:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:25.851 02:38:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:25.851 02:38:34 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:25.851 02:38:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:25.851 02:38:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:25.851 02:38:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:18:25.851 02:38:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:25.851 02:38:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:25.851 [ 0]:0x2 00:18:25.851 02:38:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:25.851 02:38:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:25.851 02:38:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=28d74bfed1bd45d9941f52c24e641f2b 00:18:25.851 02:38:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 28d74bfed1bd45d9941f52c24e641f2b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:25.851 02:38:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:18:25.851 02:38:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:26.110 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:26.110 02:38:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2956558 00:18:26.110 02:38:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:18:26.110 02:38:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:18:26.110 02:38:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2956558 /var/tmp/host.sock 00:18:26.110 02:38:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2956558 ']' 00:18:26.110 02:38:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:18:26.110 02:38:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:26.110 02:38:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:26.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:18:26.110 02:38:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:26.110 02:38:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:26.110 [2024-11-17 02:38:34.439163] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:18:26.110 [2024-11-17 02:38:34.439297] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2956558 ] 00:18:26.368 [2024-11-17 02:38:34.577727] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:26.368 [2024-11-17 02:38:34.701860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:27.303 02:38:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:27.303 02:38:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:18:27.303 02:38:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:27.561 02:38:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:18:27.819 02:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 2f975a4f-cda3-4c63-b704-a2a05f574c17 00:18:27.819 02:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:27.819 02:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 2F975A4FCDA34C63B704A2A05F574C17 -i 00:18:28.077 02:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 92123c2e-4b88-4100-9743-3a9fa34e61b8 00:18:28.077 02:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:28.077 02:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 92123C2E4B88410097433A9FA34E61B8 -i 00:18:28.642 02:38:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:28.642 02:38:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:18:28.900 02:38:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:18:28.900 02:38:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:18:29.466 nvme0n1 00:18:29.466 02:38:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:18:29.466 02:38:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:18:29.724 nvme1n2 00:18:29.724 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:18:29.724 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:18:29.724 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:18:29.724 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:18:29.724 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:18:29.982 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:18:29.982 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:18:29.982 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:18:29.982 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:18:30.240 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 2f975a4f-cda3-4c63-b704-a2a05f574c17 == \2\f\9\7\5\a\4\f\-\c\d\a\3\-\4\c\6\3\-\b\7\0\4\-\a\2\a\0\5\f\5\7\4\c\1\7 ]] 00:18:30.240 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:18:30.240 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:18:30.240 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:18:30.498 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 92123c2e-4b88-4100-9743-3a9fa34e61b8 == \9\2\1\2\3\c\2\e\-\4\b\8\8\-\4\1\0\0\-\9\7\4\3\-\3\a\9\f\a\3\4\e\6\1\b\8 ]] 00:18:30.498 02:38:38 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:30.756 02:38:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:18:31.014 02:38:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 2f975a4f-cda3-4c63-b704-a2a05f574c17 00:18:31.014 02:38:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:31.014 02:38:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 2F975A4FCDA34C63B704A2A05F574C17 00:18:31.014 02:38:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:31.014 02:38:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 2F975A4FCDA34C63B704A2A05F574C17 00:18:31.014 02:38:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:31.272 02:38:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:31.272 02:38:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:31.272 02:38:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:31.272 02:38:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:31.272 02:38:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:31.272 02:38:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:31.272 02:38:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:31.272 02:38:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 2F975A4FCDA34C63B704A2A05F574C17 00:18:31.272 [2024-11-17 02:38:39.726545] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:18:31.272 [2024-11-17 02:38:39.726623] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:18:31.272 [2024-11-17 02:38:39.726647] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.272 request: 00:18:31.272 { 00:18:31.272 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:31.272 "namespace": { 00:18:31.272 "bdev_name": "invalid", 00:18:31.272 "nsid": 1, 00:18:31.272 "nguid": "2F975A4FCDA34C63B704A2A05F574C17", 00:18:31.272 "no_auto_visible": false 00:18:31.272 }, 00:18:31.272 "method": "nvmf_subsystem_add_ns", 00:18:31.272 "req_id": 1 00:18:31.272 } 00:18:31.272 Got JSON-RPC error response 00:18:31.272 response: 00:18:31.272 { 00:18:31.272 "code": -32602, 00:18:31.272 "message": "Invalid parameters" 00:18:31.272 } 00:18:31.530 02:38:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:31.530 02:38:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:31.530 02:38:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:31.530 02:38:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:31.530 02:38:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 2f975a4f-cda3-4c63-b704-a2a05f574c17 00:18:31.530 02:38:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:31.530 02:38:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 2F975A4FCDA34C63B704A2A05F574C17 -i 00:18:31.788 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:18:33.686 02:38:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:18:33.686 02:38:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:18:33.686 02:38:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:18:33.944 02:38:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:18:33.944 02:38:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 2956558 00:18:33.944 02:38:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2956558 ']' 00:18:33.944 02:38:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2956558 00:18:33.944 02:38:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:18:33.944 02:38:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:33.944 02:38:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2956558 00:18:33.944 02:38:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:33.944 02:38:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:33.944 02:38:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2956558' 00:18:33.944 killing process with pid 2956558 00:18:33.944 02:38:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2956558 00:18:33.944 02:38:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2956558 00:18:36.473 02:38:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:36.473 02:38:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:18:36.474 02:38:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:18:36.474 02:38:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:36.474 02:38:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:18:36.474 02:38:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:36.474 02:38:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:18:36.474 02:38:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:36.474 02:38:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:36.474 rmmod nvme_tcp 00:18:36.474 rmmod 
nvme_fabrics 00:18:36.474 rmmod nvme_keyring 00:18:36.474 02:38:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:36.474 02:38:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:18:36.474 02:38:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:18:36.474 02:38:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 2954935 ']' 00:18:36.474 02:38:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 2954935 00:18:36.474 02:38:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2954935 ']' 00:18:36.474 02:38:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2954935 00:18:36.474 02:38:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:18:36.731 02:38:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:36.731 02:38:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2954935 00:18:36.731 02:38:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:36.731 02:38:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:36.731 02:38:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2954935' 00:18:36.731 killing process with pid 2954935 00:18:36.732 02:38:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2954935 00:18:36.732 02:38:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2954935 00:18:38.108 02:38:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:38.108 
02:38:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:38.108 02:38:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:38.108 02:38:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:18:38.108 02:38:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:18:38.108 02:38:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:38.108 02:38:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:18:38.108 02:38:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:38.108 02:38:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:38.108 02:38:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:38.108 02:38:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:38.108 02:38:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:40.013 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:40.013 00:18:40.013 real 0m29.531s 00:18:40.013 user 0m43.752s 00:18:40.013 sys 0m4.827s 00:18:40.013 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:40.013 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:40.013 ************************************ 00:18:40.013 END TEST nvmf_ns_masking 00:18:40.013 ************************************ 00:18:40.272 02:38:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:18:40.272 02:38:48 nvmf_tcp.nvmf_target_extra 
-- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:18:40.272 02:38:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:40.272 02:38:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:40.272 02:38:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:40.272 ************************************ 00:18:40.272 START TEST nvmf_nvme_cli 00:18:40.272 ************************************ 00:18:40.272 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:18:40.272 * Looking for test storage... 00:18:40.272 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:40.272 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:40.272 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:18:40.273 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:40.273 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:40.273 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:40.273 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:40.273 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:40.273 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:18:40.273 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:18:40.273 02:38:48 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:18:40.273 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:18:40.273 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:18:40.273 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:18:40.273 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:18:40.273 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:40.273 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:18:40.273 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:18:40.273 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:40.273 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:40.273 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:18:40.273 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:18:40.273 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:40.273 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:18:40.273 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:18:40.273 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:18:40.273 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:18:40.273 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:40.273 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:18:40.273 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:18:40.273 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:40.273 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:40.273 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:18:40.273 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:40.273 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:40.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.273 --rc genhtml_branch_coverage=1 00:18:40.273 --rc genhtml_function_coverage=1 00:18:40.273 --rc genhtml_legend=1 00:18:40.273 --rc geninfo_all_blocks=1 00:18:40.273 --rc geninfo_unexecuted_blocks=1 00:18:40.273 
00:18:40.273 ' 00:18:40.273 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:40.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.273 --rc genhtml_branch_coverage=1 00:18:40.273 --rc genhtml_function_coverage=1 00:18:40.273 --rc genhtml_legend=1 00:18:40.273 --rc geninfo_all_blocks=1 00:18:40.273 --rc geninfo_unexecuted_blocks=1 00:18:40.273 00:18:40.273 ' 00:18:40.273 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:40.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.273 --rc genhtml_branch_coverage=1 00:18:40.273 --rc genhtml_function_coverage=1 00:18:40.273 --rc genhtml_legend=1 00:18:40.273 --rc geninfo_all_blocks=1 00:18:40.273 --rc geninfo_unexecuted_blocks=1 00:18:40.273 00:18:40.273 ' 00:18:40.273 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:40.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.273 --rc genhtml_branch_coverage=1 00:18:40.273 --rc genhtml_function_coverage=1 00:18:40.273 --rc genhtml_legend=1 00:18:40.273 --rc geninfo_all_blocks=1 00:18:40.273 --rc geninfo_unexecuted_blocks=1 00:18:40.273 00:18:40.273 ' 00:18:40.273 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:40.273 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:18:40.273 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:40.273 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:40.273 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:40.273 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:18:40.273 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:40.273 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:40.273 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:40.273 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:40.273 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:40.273 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:40.273 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:40.273 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:40.273 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:40.273 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:40.273 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:40.273 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:40.273 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:40.273 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:18:40.273 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:40.273 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:40.273 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:40.273 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.273 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.273 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.273 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:18:40.273 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.273 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:18:40.273 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:40.273 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:40.273 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:40.273 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:40.273 02:38:48 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:40.273 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:40.273 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:40.273 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:40.273 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:40.274 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:40.274 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:40.274 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:40.274 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:18:40.274 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:18:40.274 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:40.274 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:40.274 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:40.274 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:40.274 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:40.274 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:40.274 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:40.274 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:18:40.274 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:40.274 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:40.274 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:18:40.274 02:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:42.808 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:42.808 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:18:42.808 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:42.808 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:42.808 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:42.808 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:42.808 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:42.808 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:18:42.808 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:42.808 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:18:42.808 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:18:42.808 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:18:42.808 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:18:42.808 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:18:42.808 02:38:50 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:18:42.808 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:42.808 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:42.808 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:42.808 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:42.808 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:42.808 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:42.808 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:42.808 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:42.808 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:42.808 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:42.808 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:42.808 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:42.808 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:42.808 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:42.808 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:18:42.808 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:42.808 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:42.808 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:42.808 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:42.808 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:42.808 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:42.808 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:42.808 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:42.808 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:42.808 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:42.808 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:42.808 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:42.808 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:42.808 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:42.808 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:42.808 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:42.808 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:42.809 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:42.809 02:38:50 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:42.809 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:42.809 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:42.809 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:42.809 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:42.809 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:42.809 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:42.809 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:42.809 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:42.809 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:42.809 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:42.809 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:42.809 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:42.809 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:42.809 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:42.809 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:42.809 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:42.809 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:42.809 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:42.809 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:42.809 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:42.809 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:42.809 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:42.809 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:42.809 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:42.809 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:18:42.809 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:42.809 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:42.809 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:42.809 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:42.809 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:42.809 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:42.809 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:42.809 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:42.809 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:42.809 02:38:50 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:42.809 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:42.809 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:42.809 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:42.809 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:42.809 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:42.809 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:42.809 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:42.809 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:42.809 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:42.809 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:42.809 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:42.809 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:42.809 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:42.809 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:42.809 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:42.809 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:42.809 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:42.809 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:18:42.809 00:18:42.809 --- 10.0.0.2 ping statistics --- 00:18:42.809 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:42.809 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:18:42.809 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:42.809 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:42.809 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:18:42.809 00:18:42.809 --- 10.0.0.1 ping statistics --- 00:18:42.809 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:42.809 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:18:42.809 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:42.809 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:18:42.809 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:42.809 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:42.809 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:42.809 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:42.809 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:42.809 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:42.809 02:38:50 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:42.809 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:18:42.809 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:42.809 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:42.809 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:42.809 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=2959977 00:18:42.809 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:42.809 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 2959977 00:18:42.809 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 2959977 ']' 00:18:42.809 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:42.809 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:42.809 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:42.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:42.809 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:42.809 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:42.809 [2024-11-17 02:38:51.018662] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:18:42.809 [2024-11-17 02:38:51.018790] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:42.809 [2024-11-17 02:38:51.163129] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:43.068 [2024-11-17 02:38:51.295338] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:43.068 [2024-11-17 02:38:51.295417] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:43.068 [2024-11-17 02:38:51.295442] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:43.068 [2024-11-17 02:38:51.295465] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:43.068 [2024-11-17 02:38:51.295485] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:43.068 [2024-11-17 02:38:51.298225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:43.068 [2024-11-17 02:38:51.298284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:43.068 [2024-11-17 02:38:51.298331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:43.068 [2024-11-17 02:38:51.298341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:43.634 02:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:43.634 02:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:18:43.635 02:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:43.635 02:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:43.635 02:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:43.635 02:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:43.635 02:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:43.635 02:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.635 02:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:43.635 [2024-11-17 02:38:52.091915] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:43.893 02:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.893 02:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:43.893 02:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:43.893 02:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:43.893 Malloc0 00:18:43.893 02:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.893 02:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:43.893 02:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.893 02:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:43.893 Malloc1 00:18:43.893 02:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.893 02:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:18:43.893 02:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.893 02:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:43.893 02:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.893 02:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:43.893 02:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.893 02:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:43.893 02:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.893 02:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:43.893 02:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.893 02:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:43.893 02:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.893 02:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:43.893 02:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.893 02:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:43.893 [2024-11-17 02:38:52.301389] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:43.893 02:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.893 02:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:43.893 02:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.893 02:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:43.893 02:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.893 02:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:18:44.152 00:18:44.152 Discovery Log Number of Records 2, Generation counter 2 00:18:44.152 =====Discovery Log Entry 0====== 00:18:44.152 trtype: tcp 00:18:44.152 adrfam: ipv4 00:18:44.152 subtype: current discovery subsystem 00:18:44.152 treq: not required 00:18:44.152 portid: 0 00:18:44.152 trsvcid: 4420 
00:18:44.152 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:18:44.152 traddr: 10.0.0.2 00:18:44.152 eflags: explicit discovery connections, duplicate discovery information 00:18:44.152 sectype: none 00:18:44.152 =====Discovery Log Entry 1====== 00:18:44.152 trtype: tcp 00:18:44.152 adrfam: ipv4 00:18:44.152 subtype: nvme subsystem 00:18:44.152 treq: not required 00:18:44.152 portid: 0 00:18:44.152 trsvcid: 4420 00:18:44.152 subnqn: nqn.2016-06.io.spdk:cnode1 00:18:44.152 traddr: 10.0.0.2 00:18:44.152 eflags: none 00:18:44.152 sectype: none 00:18:44.152 02:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:18:44.152 02:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:18:44.152 02:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:18:44.152 02:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:44.152 02:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:18:44.152 02:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:18:44.152 02:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:44.152 02:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:18:44.152 02:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:44.152 02:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:18:44.152 02:38:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:45.087 02:38:53 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:18:45.087 02:38:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:18:45.087 02:38:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:45.087 02:38:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:18:45.087 02:38:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:18:45.087 02:38:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:18:46.987 02:38:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:46.987 02:38:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:46.987 02:38:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:46.987 02:38:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:18:46.987 02:38:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:46.987 02:38:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:18:46.987 02:38:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:18:46.987 02:38:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:18:46.987 02:38:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:46.987 02:38:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:18:46.987 02:38:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:18:46.987 
02:38:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:46.987 02:38:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:18:46.987 02:38:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:46.987 02:38:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:46.987 02:38:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:18:46.987 02:38:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:46.987 02:38:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:46.987 02:38:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:18:46.987 02:38:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:46.987 02:38:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:18:46.987 /dev/nvme0n2 ]] 00:18:46.987 02:38:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:18:46.987 02:38:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:18:46.987 02:38:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:18:46.987 02:38:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:46.987 02:38:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:18:47.246 02:38:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:18:47.246 02:38:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:47.246 02:38:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ 
--------------------- == /dev/nvme* ]] 00:18:47.246 02:38:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:47.246 02:38:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:47.246 02:38:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:18:47.246 02:38:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:47.246 02:38:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:47.246 02:38:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:18:47.246 02:38:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:47.246 02:38:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:18:47.246 02:38:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:47.505 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:47.505 02:38:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:47.505 02:38:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:18:47.505 02:38:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:47.505 02:38:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:47.505 02:38:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:47.505 02:38:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:47.505 02:38:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # 
return 0 00:18:47.505 02:38:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:18:47.505 02:38:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:47.505 02:38:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.505 02:38:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:47.505 02:38:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.505 02:38:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:18:47.505 02:38:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:18:47.505 02:38:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:47.505 02:38:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:18:47.505 02:38:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:47.505 02:38:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:18:47.505 02:38:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:47.505 02:38:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:47.505 rmmod nvme_tcp 00:18:47.505 rmmod nvme_fabrics 00:18:47.505 rmmod nvme_keyring 00:18:47.505 02:38:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:47.505 02:38:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:18:47.505 02:38:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:18:47.505 02:38:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 2959977 ']' 
00:18:47.505 02:38:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 2959977 00:18:47.505 02:38:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 2959977 ']' 00:18:47.505 02:38:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 2959977 00:18:47.505 02:38:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:18:47.505 02:38:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:47.505 02:38:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2959977 00:18:47.505 02:38:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:47.505 02:38:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:47.505 02:38:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2959977' 00:18:47.505 killing process with pid 2959977 00:18:47.505 02:38:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 2959977 00:18:47.505 02:38:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 2959977 00:18:49.409 02:38:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:49.409 02:38:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:49.409 02:38:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:49.409 02:38:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:18:49.409 02:38:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:18:49.409 02:38:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v 
SPDK_NVMF 00:18:49.409 02:38:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:18:49.409 02:38:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:49.409 02:38:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:49.409 02:38:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:49.409 02:38:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:49.409 02:38:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:51.317 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:51.317 00:18:51.317 real 0m10.897s 00:18:51.317 user 0m23.812s 00:18:51.317 sys 0m2.529s 00:18:51.317 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:51.317 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:51.317 ************************************ 00:18:51.317 END TEST nvmf_nvme_cli 00:18:51.317 ************************************ 00:18:51.317 02:38:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 0 -eq 1 ]] 00:18:51.317 02:38:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:51.317 02:38:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:51.317 02:38:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:51.317 02:38:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:51.317 ************************************ 00:18:51.317 START TEST 
nvmf_auth_target 00:18:51.317 ************************************ 00:18:51.317 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:51.317 * Looking for test storage... 00:18:51.317 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:51.317 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:51.317 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:18:51.317 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:51.317 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:51.317 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:51.317 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:51.317 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:51.317 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:18:51.317 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:18:51.317 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:18:51.317 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:18:51.317 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:18:51.317 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:18:51.317 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:18:51.317 
02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:51.317 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:18:51.317 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:18:51.317 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:51.317 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:51.317 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:18:51.317 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:18:51.317 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:51.317 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:18:51.317 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:18:51.317 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:18:51.317 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:18:51.317 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:51.317 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:18:51.317 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:18:51.317 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:51.317 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:51.317 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:18:51.317 
02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:51.317 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:51.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:51.318 --rc genhtml_branch_coverage=1 00:18:51.318 --rc genhtml_function_coverage=1 00:18:51.318 --rc genhtml_legend=1 00:18:51.318 --rc geninfo_all_blocks=1 00:18:51.318 --rc geninfo_unexecuted_blocks=1 00:18:51.318 00:18:51.318 ' 00:18:51.318 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:51.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:51.318 --rc genhtml_branch_coverage=1 00:18:51.318 --rc genhtml_function_coverage=1 00:18:51.318 --rc genhtml_legend=1 00:18:51.318 --rc geninfo_all_blocks=1 00:18:51.318 --rc geninfo_unexecuted_blocks=1 00:18:51.318 00:18:51.318 ' 00:18:51.318 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:51.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:51.318 --rc genhtml_branch_coverage=1 00:18:51.318 --rc genhtml_function_coverage=1 00:18:51.318 --rc genhtml_legend=1 00:18:51.318 --rc geninfo_all_blocks=1 00:18:51.318 --rc geninfo_unexecuted_blocks=1 00:18:51.318 00:18:51.318 ' 00:18:51.318 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:51.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:51.318 --rc genhtml_branch_coverage=1 00:18:51.318 --rc genhtml_function_coverage=1 00:18:51.318 --rc genhtml_legend=1 00:18:51.318 --rc geninfo_all_blocks=1 00:18:51.318 --rc geninfo_unexecuted_blocks=1 00:18:51.318 00:18:51.318 ' 00:18:51.318 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:51.318 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:18:51.318 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:51.318 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:51.318 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:51.318 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:51.318 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:51.318 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:51.318 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:51.318 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:51.318 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:51.318 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:51.318 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:51.318 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:51.318 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:51.318 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:51.318 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # 
NET_TYPE=phy 00:18:51.318 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:51.318 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:51.318 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:18:51.318 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:51.318 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:51.318 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:51.318 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:51.318 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:51.318 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:51.318 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:18:51.318 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:51.318 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:18:51.318 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:51.318 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:51.318 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:51.318 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:51.318 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:51.318 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:51.318 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:51.318 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:51.318 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:51.318 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:51.318 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:18:51.318 02:38:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:18:51.318 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:18:51.318 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:51.318 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:18:51.318 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:18:51.318 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:18:51.318 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:18:51.318 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:51.318 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:51.318 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:51.318 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:51.318 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:51.318 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:51.318 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:51.318 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:51.318 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:51.318 02:38:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:51.318 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:18:51.318 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:18:53.850 02:39:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:53.850 02:39:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:53.850 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:53.850 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:53.850 
02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:53.850 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:53.850 
02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:53.850 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:53.850 02:39:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:53.850 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:53.850 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.297 ms 00:18:53.850 00:18:53.850 --- 10.0.0.2 ping statistics --- 00:18:53.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:53.850 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:18:53.850 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:53.851 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:53.851 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:18:53.851 00:18:53.851 --- 10.0.0.1 ping statistics --- 00:18:53.851 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:53.851 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:18:53.851 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:53.851 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:18:53.851 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:53.851 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:53.851 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:53.851 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:53.851 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:53.851 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:53.851 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:53.851 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:18:53.851 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:53.851 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:53.851 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.851 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2962747 00:18:53.851 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:18:53.851 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2962747 00:18:53.851 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2962747 ']' 00:18:53.851 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:53.851 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:53.851 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:53.851 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:53.851 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.786 02:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:54.786 02:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:54.786 02:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:54.786 02:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:54.786 02:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.786 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:54.786 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=2962902 00:18:54.786 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:18:54.786 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:54.786 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:18:54.786 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:54.786 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@754 -- # digest=null 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b31e1a0d376cf8dd4994405971d1c16695126e6e2c115349 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.DVw 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key b31e1a0d376cf8dd4994405971d1c16695126e6e2c115349 0 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 b31e1a0d376cf8dd4994405971d1c16695126e6e2c115349 0 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b31e1a0d376cf8dd4994405971d1c16695126e6e2c115349 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.DVw 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.DVw 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.DVw 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=d462a033d13e190553a8fc3c1e30ff340b666a7677949ae9170a7f8a36cf513c 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.WK9 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key d462a033d13e190553a8fc3c1e30ff340b666a7677949ae9170a7f8a36cf513c 3 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 d462a033d13e190553a8fc3c1e30ff340b666a7677949ae9170a7f8a36cf513c 3 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=d462a033d13e190553a8fc3c1e30ff340b666a7677949ae9170a7f8a36cf513c 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # digest=3 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.WK9 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.WK9 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.WK9 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=7df250c88bc8ddb5979f2e139c9cefcf 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.fXn 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 7df250c88bc8ddb5979f2e139c9cefcf 1 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 
7df250c88bc8ddb5979f2e139c9cefcf 1 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=7df250c88bc8ddb5979f2e139c9cefcf 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.fXn 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.fXn 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.fXn 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=e84981fdf93a6bea4d6a89caad8d9863dd1480710dba12b4 00:18:54.787 02:39:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.1E1 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key e84981fdf93a6bea4d6a89caad8d9863dd1480710dba12b4 2 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 e84981fdf93a6bea4d6a89caad8d9863dd1480710dba12b4 2 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=e84981fdf93a6bea4d6a89caad8d9863dd1480710dba12b4 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.1E1 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.1E1 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.1E1 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A 
digests 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=283fbf76d4467bf3954b0d4523025a35a50d2b15af531b2f 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.rT0 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 283fbf76d4467bf3954b0d4523025a35a50d2b15af531b2f 2 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 283fbf76d4467bf3954b0d4523025a35a50d2b15af531b2f 2 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=283fbf76d4467bf3954b0d4523025a35a50d2b15af531b2f 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:18:54.787 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:55.046 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.rT0 00:18:55.046 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.rT0 00:18:55.046 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.rT0 00:18:55.047 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:18:55.047 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:55.047 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:55.047 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:55.047 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:18:55.047 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:18:55.047 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:55.047 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=742ad98f3af1577352d3d4a851bf85ed 00:18:55.047 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:18:55.047 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.UT2 00:18:55.047 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 742ad98f3af1577352d3d4a851bf85ed 1 00:18:55.047 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 742ad98f3af1577352d3d4a851bf85ed 1 00:18:55.047 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:55.047 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:55.047 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=742ad98f3af1577352d3d4a851bf85ed 00:18:55.047 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 
00:18:55.047 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:55.047 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.UT2 00:18:55.047 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.UT2 00:18:55.047 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.UT2 00:18:55.047 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:18:55.047 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:55.047 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:55.047 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:55.047 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:18:55.047 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:18:55.047 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:55.047 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=2fde68e8c36519c22385747b2670f43b4a8e3b75470669e627b3adb863a6ac21 00:18:55.047 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:18:55.047 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.A01 00:18:55.047 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 2fde68e8c36519c22385747b2670f43b4a8e3b75470669e627b3adb863a6ac21 3 00:18:55.047 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # 
format_key DHHC-1 2fde68e8c36519c22385747b2670f43b4a8e3b75470669e627b3adb863a6ac21 3 00:18:55.047 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:55.047 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:55.047 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=2fde68e8c36519c22385747b2670f43b4a8e3b75470669e627b3adb863a6ac21 00:18:55.047 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:18:55.047 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:55.047 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.A01 00:18:55.047 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.A01 00:18:55.047 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.A01 00:18:55.047 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:18:55.047 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 2962747 00:18:55.047 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2962747 ']' 00:18:55.047 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:55.047 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:55.047 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:55.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
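The `gen_dhchap_key`/`format_key` trace above builds each secret by reading random bytes with `xxd -p` and wrapping the hex string in SPDK's DHHC-1 container (the `python -` step in the trace). A minimal sketch of that wrapping is below; the exact layout — ASCII hex key plus a CRC32 trailer (4 bytes, little-endian), base64-encoded between `DHHC-1:0<digest>:` and a trailing `:` — is an assumption inferred from the `--dhchap-secret DHHC-1:00:YjMxZTFhMGQz...` strings that show up later in the `nvme connect` call, not an authoritative description of SPDK's implementation:

```python
import base64
import struct
import zlib

def format_dhchap_key(hex_key: str, digest: int) -> str:
    """Wrap a hex key string in a DHHC-1 container.

    Assumed layout (inferred from the trace, not authoritative):
    the ASCII hex string plus its CRC32 (4 bytes, little-endian),
    base64-encoded, between a "DHHC-1:0<digest>:" prefix and a
    trailing ":". digest follows the trace's mapping
    (0=null, 1=sha256, 2=sha384, 3=sha512).
    """
    payload = hex_key.encode("ascii")
    payload += struct.pack("<I", zlib.crc32(payload))
    return f"DHHC-1:{digest:02x}:{base64.b64encode(payload).decode()}:"

# The 48-char null-digest key generated above (keys[0]):
secret = format_dhchap_key(
    "b31e1a0d376cf8dd4994405971d1c16695126e6e2c115349", 0)
```

The base64 body produced this way matches the `DHHC-1:00:YjMxZTFhMGQz...` secret passed to `nvme connect` further down in the trace, up to the CRC trailer bytes.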
00:18:55.047 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:55.047 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.305 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:55.305 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:55.305 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 2962902 /var/tmp/host.sock 00:18:55.305 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2962902 ']' 00:18:55.305 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:18:55.305 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:55.306 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:55.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:18:55.306 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:55.306 02:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.872 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:55.872 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:55.872 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:18:55.872 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.872 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.130 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.130 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:56.130 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.DVw 00:18:56.130 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.130 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.130 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.130 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.DVw 00:18:56.130 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.DVw 00:18:56.388 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.WK9 ]] 00:18:56.388 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.WK9 00:18:56.388 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.388 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.388 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.388 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.WK9 00:18:56.388 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.WK9 00:18:56.646 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:56.646 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.fXn 00:18:56.646 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.646 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.646 02:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.646 02:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.fXn 00:18:56.646 02:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.fXn 00:18:56.904 02:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.1E1 ]] 00:18:56.904 02:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.1E1 00:18:56.904 02:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.904 02:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.904 02:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.904 02:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.1E1 00:18:56.904 02:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.1E1 00:18:57.162 02:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:57.162 02:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.rT0 00:18:57.162 02:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.162 02:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.162 02:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.162 02:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.rT0 00:18:57.162 02:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.rT0 00:18:57.420 02:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.UT2 ]] 00:18:57.420 02:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.UT2 00:18:57.420 02:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.679 02:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.679 02:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.679 02:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.UT2 00:18:57.679 02:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.UT2 00:18:57.937 02:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:57.937 02:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.A01 00:18:57.937 02:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.937 02:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.937 02:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.937 02:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.A01 00:18:57.937 02:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.A01 00:18:58.195 02:39:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:18:58.195 02:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:18:58.195 02:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:58.195 02:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:58.195 02:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:58.195 02:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:58.453 02:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:18:58.453 02:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:58.453 02:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:58.453 02:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:58.453 02:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:58.453 02:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:58.453 02:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:58.453 02:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.453 02:39:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.453 02:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.453 02:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:58.453 02:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:58.453 02:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:58.712 00:18:58.712 02:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:58.712 02:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:58.712 02:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:58.970 02:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.970 02:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:58.970 02:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.970 02:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:58.970 02:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.970 02:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:58.970 { 00:18:58.970 "cntlid": 1, 00:18:58.970 "qid": 0, 00:18:58.970 "state": "enabled", 00:18:58.970 "thread": "nvmf_tgt_poll_group_000", 00:18:58.970 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:58.970 "listen_address": { 00:18:58.970 "trtype": "TCP", 00:18:58.970 "adrfam": "IPv4", 00:18:58.970 "traddr": "10.0.0.2", 00:18:58.970 "trsvcid": "4420" 00:18:58.970 }, 00:18:58.970 "peer_address": { 00:18:58.970 "trtype": "TCP", 00:18:58.970 "adrfam": "IPv4", 00:18:58.970 "traddr": "10.0.0.1", 00:18:58.970 "trsvcid": "58260" 00:18:58.970 }, 00:18:58.970 "auth": { 00:18:58.970 "state": "completed", 00:18:58.970 "digest": "sha256", 00:18:58.970 "dhgroup": "null" 00:18:58.970 } 00:18:58.970 } 00:18:58.970 ]' 00:18:58.970 02:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:59.228 02:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:59.228 02:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:59.228 02:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:59.229 02:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:59.229 02:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:59.229 02:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:59.229 02:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:59.513 02:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjMxZTFhMGQzNzZjZjhkZDQ5OTQ0MDU5NzFkMWMxNjY5NTEyNmU2ZTJjMTE1MzQ5sTdJQw==: --dhchap-ctrl-secret DHHC-1:03:ZDQ2MmEwMzNkMTNlMTkwNTUzYThmYzNjMWUzMGZmMzQwYjY2NmE3Njc3OTQ5YWU5MTcwYTdmOGEzNmNmNTEzY+Io4Fg=: 00:18:59.513 02:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YjMxZTFhMGQzNzZjZjhkZDQ5OTQ0MDU5NzFkMWMxNjY5NTEyNmU2ZTJjMTE1MzQ5sTdJQw==: --dhchap-ctrl-secret DHHC-1:03:ZDQ2MmEwMzNkMTNlMTkwNTUzYThmYzNjMWUzMGZmMzQwYjY2NmE3Njc3OTQ5YWU5MTcwYTdmOGEzNmNmNTEzY+Io4Fg=: 00:19:00.469 02:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:00.469 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:00.469 02:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:00.469 02:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.470 02:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.470 02:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.470 02:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:00.470 02:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:19:00.470 02:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:00.728 02:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:19:00.728 02:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:00.728 02:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:00.728 02:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:00.728 02:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:00.728 02:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:00.728 02:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:00.728 02:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.728 02:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.728 02:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.728 02:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:00.729 02:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:00.729 02:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:00.986 00:19:00.986 02:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:00.986 02:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:00.986 02:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:01.244 02:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.244 02:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:01.244 02:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.244 02:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.244 02:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.244 02:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:01.244 { 00:19:01.244 "cntlid": 3, 00:19:01.244 "qid": 0, 00:19:01.244 "state": "enabled", 00:19:01.244 "thread": "nvmf_tgt_poll_group_000", 00:19:01.244 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:01.244 "listen_address": { 00:19:01.244 "trtype": "TCP", 00:19:01.244 "adrfam": "IPv4", 00:19:01.244 
"traddr": "10.0.0.2", 00:19:01.244 "trsvcid": "4420" 00:19:01.244 }, 00:19:01.244 "peer_address": { 00:19:01.244 "trtype": "TCP", 00:19:01.244 "adrfam": "IPv4", 00:19:01.244 "traddr": "10.0.0.1", 00:19:01.245 "trsvcid": "58294" 00:19:01.245 }, 00:19:01.245 "auth": { 00:19:01.245 "state": "completed", 00:19:01.245 "digest": "sha256", 00:19:01.245 "dhgroup": "null" 00:19:01.245 } 00:19:01.245 } 00:19:01.245 ]' 00:19:01.245 02:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:01.503 02:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:01.503 02:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:01.503 02:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:01.503 02:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:01.503 02:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:01.503 02:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:01.503 02:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:01.761 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2RmMjUwYzg4YmM4ZGRiNTk3OWYyZTEzOWM5Y2VmY2bnNpH/: --dhchap-ctrl-secret DHHC-1:02:ZTg0OTgxZmRmOTNhNmJlYTRkNmE4OWNhYWQ4ZDk4NjNkZDE0ODA3MTBkYmExMmI0tM1ZcA==: 00:19:01.761 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:N2RmMjUwYzg4YmM4ZGRiNTk3OWYyZTEzOWM5Y2VmY2bnNpH/: --dhchap-ctrl-secret DHHC-1:02:ZTg0OTgxZmRmOTNhNmJlYTRkNmE4OWNhYWQ4ZDk4NjNkZDE0ODA3MTBkYmExMmI0tM1ZcA==: 00:19:02.695 02:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:02.695 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:02.695 02:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:02.695 02:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.695 02:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.695 02:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.695 02:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:02.695 02:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:02.695 02:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:02.953 02:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:19:02.953 02:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:02.953 02:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:02.953 02:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:19:02.953 02:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:02.953 02:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:02.953 02:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:02.953 02:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.953 02:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.953 02:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.953 02:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:02.953 02:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:02.953 02:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:03.212 00:19:03.469 02:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:03.469 02:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:03.469 
02:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:03.727 02:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.727 02:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:03.727 02:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.727 02:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.727 02:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.727 02:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:03.727 { 00:19:03.727 "cntlid": 5, 00:19:03.727 "qid": 0, 00:19:03.727 "state": "enabled", 00:19:03.727 "thread": "nvmf_tgt_poll_group_000", 00:19:03.727 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:03.727 "listen_address": { 00:19:03.727 "trtype": "TCP", 00:19:03.727 "adrfam": "IPv4", 00:19:03.727 "traddr": "10.0.0.2", 00:19:03.727 "trsvcid": "4420" 00:19:03.727 }, 00:19:03.727 "peer_address": { 00:19:03.727 "trtype": "TCP", 00:19:03.727 "adrfam": "IPv4", 00:19:03.728 "traddr": "10.0.0.1", 00:19:03.728 "trsvcid": "58316" 00:19:03.728 }, 00:19:03.728 "auth": { 00:19:03.728 "state": "completed", 00:19:03.728 "digest": "sha256", 00:19:03.728 "dhgroup": "null" 00:19:03.728 } 00:19:03.728 } 00:19:03.728 ]' 00:19:03.728 02:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:03.728 02:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:03.728 02:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 
-- # jq -r '.[0].auth.dhgroup' 00:19:03.728 02:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:03.728 02:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:03.728 02:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:03.728 02:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:03.728 02:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:03.986 02:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjgzZmJmNzZkNDQ2N2JmMzk1NGIwZDQ1MjMwMjVhMzVhNTBkMmIxNWFmNTMxYjJmEq20Pg==: --dhchap-ctrl-secret DHHC-1:01:NzQyYWQ5OGYzYWYxNTc3MzUyZDNkNGE4NTFiZjg1ZWQG4h4Q: 00:19:03.986 02:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MjgzZmJmNzZkNDQ2N2JmMzk1NGIwZDQ1MjMwMjVhMzVhNTBkMmIxNWFmNTMxYjJmEq20Pg==: --dhchap-ctrl-secret DHHC-1:01:NzQyYWQ5OGYzYWYxNTc3MzUyZDNkNGE4NTFiZjg1ZWQG4h4Q: 00:19:04.921 02:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:04.921 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:04.921 02:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:04.921 02:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.921 02:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.921 02:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.921 02:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:04.921 02:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:04.921 02:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:05.488 02:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:19:05.488 02:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:05.488 02:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:05.488 02:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:05.488 02:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:05.488 02:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:05.488 02:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:05.488 02:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.488 02:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:19:05.488 02:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.488 02:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:05.488 02:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:05.488 02:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:05.746 00:19:05.746 02:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:05.746 02:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:05.746 02:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.005 02:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.005 02:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:06.005 02:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.005 02:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.005 02:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.005 
02:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:06.005 { 00:19:06.005 "cntlid": 7, 00:19:06.005 "qid": 0, 00:19:06.005 "state": "enabled", 00:19:06.005 "thread": "nvmf_tgt_poll_group_000", 00:19:06.005 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:06.005 "listen_address": { 00:19:06.005 "trtype": "TCP", 00:19:06.005 "adrfam": "IPv4", 00:19:06.005 "traddr": "10.0.0.2", 00:19:06.005 "trsvcid": "4420" 00:19:06.005 }, 00:19:06.005 "peer_address": { 00:19:06.005 "trtype": "TCP", 00:19:06.005 "adrfam": "IPv4", 00:19:06.005 "traddr": "10.0.0.1", 00:19:06.005 "trsvcid": "58334" 00:19:06.005 }, 00:19:06.005 "auth": { 00:19:06.005 "state": "completed", 00:19:06.005 "digest": "sha256", 00:19:06.005 "dhgroup": "null" 00:19:06.005 } 00:19:06.005 } 00:19:06.005 ]' 00:19:06.005 02:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:06.005 02:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:06.005 02:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:06.005 02:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:06.005 02:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:06.263 02:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:06.263 02:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:06.263 02:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:06.521 02:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmZkZTY4ZThjMzY1MTljMjIzODU3NDdiMjY3MGY0M2I0YThlM2I3NTQ3MDY2OWU2MjdiM2FkYjg2M2E2YWMyMcK4Uoo=: 00:19:06.521 02:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MmZkZTY4ZThjMzY1MTljMjIzODU3NDdiMjY3MGY0M2I0YThlM2I3NTQ3MDY2OWU2MjdiM2FkYjg2M2E2YWMyMcK4Uoo=: 00:19:07.455 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:07.455 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:07.455 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:07.455 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.455 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.455 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.455 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:07.455 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:07.455 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:07.455 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:19:07.714 02:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:19:07.714 02:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:07.714 02:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:07.714 02:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:07.714 02:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:07.714 02:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:07.714 02:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:07.714 02:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.714 02:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.714 02:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.714 02:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:07.714 02:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:07.714 02:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:08.279 00:19:08.279 02:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:08.280 02:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:08.280 02:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:08.538 02:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.538 02:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:08.538 02:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.538 02:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.538 02:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.538 02:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:08.538 { 00:19:08.538 "cntlid": 9, 00:19:08.538 "qid": 0, 00:19:08.538 "state": "enabled", 00:19:08.538 "thread": "nvmf_tgt_poll_group_000", 00:19:08.538 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:08.538 "listen_address": { 00:19:08.538 "trtype": "TCP", 00:19:08.538 "adrfam": "IPv4", 00:19:08.538 "traddr": "10.0.0.2", 00:19:08.539 "trsvcid": "4420" 00:19:08.539 }, 00:19:08.539 "peer_address": { 00:19:08.539 "trtype": "TCP", 00:19:08.539 "adrfam": "IPv4", 00:19:08.539 "traddr": "10.0.0.1", 00:19:08.539 "trsvcid": "58352" 00:19:08.539 
}, 00:19:08.539 "auth": { 00:19:08.539 "state": "completed", 00:19:08.539 "digest": "sha256", 00:19:08.539 "dhgroup": "ffdhe2048" 00:19:08.539 } 00:19:08.539 } 00:19:08.539 ]' 00:19:08.539 02:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:08.539 02:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:08.539 02:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:08.539 02:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:08.539 02:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:08.539 02:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:08.539 02:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:08.539 02:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:08.797 02:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjMxZTFhMGQzNzZjZjhkZDQ5OTQ0MDU5NzFkMWMxNjY5NTEyNmU2ZTJjMTE1MzQ5sTdJQw==: --dhchap-ctrl-secret DHHC-1:03:ZDQ2MmEwMzNkMTNlMTkwNTUzYThmYzNjMWUzMGZmMzQwYjY2NmE3Njc3OTQ5YWU5MTcwYTdmOGEzNmNmNTEzY+Io4Fg=: 00:19:08.797 02:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YjMxZTFhMGQzNzZjZjhkZDQ5OTQ0MDU5NzFkMWMxNjY5NTEyNmU2ZTJjMTE1MzQ5sTdJQw==: --dhchap-ctrl-secret 
DHHC-1:03:ZDQ2MmEwMzNkMTNlMTkwNTUzYThmYzNjMWUzMGZmMzQwYjY2NmE3Njc3OTQ5YWU5MTcwYTdmOGEzNmNmNTEzY+Io4Fg=: 00:19:09.731 02:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:09.987 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:09.987 02:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:09.988 02:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.988 02:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.988 02:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.988 02:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:09.988 02:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:09.988 02:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:10.245 02:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:19:10.245 02:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:10.245 02:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:10.245 02:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:10.245 02:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:19:10.245 02:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:10.245 02:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:10.245 02:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.245 02:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.245 02:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.245 02:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:10.245 02:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:10.245 02:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:10.503 00:19:10.503 02:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:10.503 02:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:10.503 02:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:10.761 02:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.761 02:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:10.761 02:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.761 02:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.761 02:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.761 02:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:10.761 { 00:19:10.761 "cntlid": 11, 00:19:10.761 "qid": 0, 00:19:10.761 "state": "enabled", 00:19:10.761 "thread": "nvmf_tgt_poll_group_000", 00:19:10.761 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:10.761 "listen_address": { 00:19:10.761 "trtype": "TCP", 00:19:10.761 "adrfam": "IPv4", 00:19:10.761 "traddr": "10.0.0.2", 00:19:10.761 "trsvcid": "4420" 00:19:10.761 }, 00:19:10.761 "peer_address": { 00:19:10.761 "trtype": "TCP", 00:19:10.761 "adrfam": "IPv4", 00:19:10.761 "traddr": "10.0.0.1", 00:19:10.761 "trsvcid": "59120" 00:19:10.761 }, 00:19:10.761 "auth": { 00:19:10.761 "state": "completed", 00:19:10.761 "digest": "sha256", 00:19:10.761 "dhgroup": "ffdhe2048" 00:19:10.761 } 00:19:10.761 } 00:19:10.761 ]' 00:19:10.761 02:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:10.761 02:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:10.761 02:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:10.761 02:39:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:10.761 02:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:10.761 02:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:10.761 02:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:10.761 02:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:11.325 02:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2RmMjUwYzg4YmM4ZGRiNTk3OWYyZTEzOWM5Y2VmY2bnNpH/: --dhchap-ctrl-secret DHHC-1:02:ZTg0OTgxZmRmOTNhNmJlYTRkNmE4OWNhYWQ4ZDk4NjNkZDE0ODA3MTBkYmExMmI0tM1ZcA==: 00:19:11.325 02:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:N2RmMjUwYzg4YmM4ZGRiNTk3OWYyZTEzOWM5Y2VmY2bnNpH/: --dhchap-ctrl-secret DHHC-1:02:ZTg0OTgxZmRmOTNhNmJlYTRkNmE4OWNhYWQ4ZDk4NjNkZDE0ODA3MTBkYmExMmI0tM1ZcA==: 00:19:12.258 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:12.258 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:12.258 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:12.258 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:12.258 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.258 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.258 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:12.258 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:12.258 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:12.516 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:19:12.516 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:12.516 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:12.516 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:12.516 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:12.516 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:12.516 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:12.516 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.516 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:19:12.516 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.516 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:12.516 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:12.516 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:12.774 00:19:12.774 02:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:12.774 02:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:12.774 02:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:13.032 02:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.032 02:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:13.032 02:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.032 02:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.032 02:39:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.032 02:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:13.032 { 00:19:13.032 "cntlid": 13, 00:19:13.032 "qid": 0, 00:19:13.032 "state": "enabled", 00:19:13.032 "thread": "nvmf_tgt_poll_group_000", 00:19:13.032 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:13.032 "listen_address": { 00:19:13.032 "trtype": "TCP", 00:19:13.032 "adrfam": "IPv4", 00:19:13.032 "traddr": "10.0.0.2", 00:19:13.032 "trsvcid": "4420" 00:19:13.032 }, 00:19:13.032 "peer_address": { 00:19:13.032 "trtype": "TCP", 00:19:13.032 "adrfam": "IPv4", 00:19:13.032 "traddr": "10.0.0.1", 00:19:13.033 "trsvcid": "59148" 00:19:13.033 }, 00:19:13.033 "auth": { 00:19:13.033 "state": "completed", 00:19:13.033 "digest": "sha256", 00:19:13.033 "dhgroup": "ffdhe2048" 00:19:13.033 } 00:19:13.033 } 00:19:13.033 ]' 00:19:13.033 02:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:13.033 02:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:13.033 02:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:13.033 02:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:13.033 02:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:13.033 02:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:13.033 02:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:13.033 02:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:13.599 02:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjgzZmJmNzZkNDQ2N2JmMzk1NGIwZDQ1MjMwMjVhMzVhNTBkMmIxNWFmNTMxYjJmEq20Pg==: --dhchap-ctrl-secret DHHC-1:01:NzQyYWQ5OGYzYWYxNTc3MzUyZDNkNGE4NTFiZjg1ZWQG4h4Q: 00:19:13.599 02:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MjgzZmJmNzZkNDQ2N2JmMzk1NGIwZDQ1MjMwMjVhMzVhNTBkMmIxNWFmNTMxYjJmEq20Pg==: --dhchap-ctrl-secret DHHC-1:01:NzQyYWQ5OGYzYWYxNTc3MzUyZDNkNGE4NTFiZjg1ZWQG4h4Q: 00:19:14.533 02:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:14.533 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:14.533 02:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:14.533 02:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.533 02:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.533 02:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.533 02:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:14.533 02:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:14.533 02:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:14.791 02:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:19:14.791 02:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:14.791 02:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:14.791 02:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:14.791 02:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:14.791 02:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:14.791 02:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:14.791 02:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.791 02:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.791 02:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.791 02:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:14.791 02:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:14.791 02:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:15.049 00:19:15.049 02:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:15.049 02:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:15.049 02:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.307 02:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.307 02:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:15.307 02:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.307 02:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.565 02:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.565 02:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:15.565 { 00:19:15.565 "cntlid": 15, 00:19:15.565 "qid": 0, 00:19:15.565 "state": "enabled", 00:19:15.565 "thread": "nvmf_tgt_poll_group_000", 00:19:15.565 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:15.565 "listen_address": { 00:19:15.565 "trtype": "TCP", 00:19:15.565 "adrfam": "IPv4", 00:19:15.565 "traddr": "10.0.0.2", 00:19:15.565 "trsvcid": "4420" 00:19:15.565 }, 00:19:15.565 "peer_address": { 00:19:15.565 "trtype": "TCP", 00:19:15.565 "adrfam": "IPv4", 00:19:15.565 "traddr": "10.0.0.1", 
00:19:15.565 "trsvcid": "59188" 00:19:15.565 }, 00:19:15.565 "auth": { 00:19:15.565 "state": "completed", 00:19:15.565 "digest": "sha256", 00:19:15.565 "dhgroup": "ffdhe2048" 00:19:15.565 } 00:19:15.565 } 00:19:15.565 ]' 00:19:15.565 02:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:15.565 02:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:15.565 02:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:15.565 02:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:15.565 02:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:15.565 02:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:15.565 02:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:15.565 02:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:15.823 02:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmZkZTY4ZThjMzY1MTljMjIzODU3NDdiMjY3MGY0M2I0YThlM2I3NTQ3MDY2OWU2MjdiM2FkYjg2M2E2YWMyMcK4Uoo=: 00:19:15.824 02:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MmZkZTY4ZThjMzY1MTljMjIzODU3NDdiMjY3MGY0M2I0YThlM2I3NTQ3MDY2OWU2MjdiM2FkYjg2M2E2YWMyMcK4Uoo=: 00:19:16.756 02:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:16.756 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:16.756 02:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:16.756 02:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.756 02:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.756 02:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.756 02:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:16.756 02:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:16.756 02:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:16.756 02:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:17.014 02:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:19:17.014 02:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:17.014 02:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:17.014 02:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:17.014 02:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:17.014 02:39:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:17.014 02:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:17.014 02:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.014 02:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.014 02:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.014 02:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:17.014 02:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:17.014 02:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:17.579 00:19:17.579 02:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:17.579 02:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:17.579 02:39:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:17.837 02:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.837 02:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:17.837 02:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.837 02:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.837 02:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.837 02:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:17.837 { 00:19:17.837 "cntlid": 17, 00:19:17.837 "qid": 0, 00:19:17.837 "state": "enabled", 00:19:17.837 "thread": "nvmf_tgt_poll_group_000", 00:19:17.837 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:17.837 "listen_address": { 00:19:17.837 "trtype": "TCP", 00:19:17.837 "adrfam": "IPv4", 00:19:17.837 "traddr": "10.0.0.2", 00:19:17.837 "trsvcid": "4420" 00:19:17.837 }, 00:19:17.837 "peer_address": { 00:19:17.837 "trtype": "TCP", 00:19:17.837 "adrfam": "IPv4", 00:19:17.837 "traddr": "10.0.0.1", 00:19:17.837 "trsvcid": "59210" 00:19:17.837 }, 00:19:17.837 "auth": { 00:19:17.837 "state": "completed", 00:19:17.837 "digest": "sha256", 00:19:17.837 "dhgroup": "ffdhe3072" 00:19:17.837 } 00:19:17.837 } 00:19:17.837 ]' 00:19:17.837 02:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:17.837 02:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:17.837 02:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:17.837 02:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:17.837 02:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:17.837 02:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:17.837 02:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:17.838 02:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:18.095 02:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjMxZTFhMGQzNzZjZjhkZDQ5OTQ0MDU5NzFkMWMxNjY5NTEyNmU2ZTJjMTE1MzQ5sTdJQw==: --dhchap-ctrl-secret DHHC-1:03:ZDQ2MmEwMzNkMTNlMTkwNTUzYThmYzNjMWUzMGZmMzQwYjY2NmE3Njc3OTQ5YWU5MTcwYTdmOGEzNmNmNTEzY+Io4Fg=: 00:19:18.095 02:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YjMxZTFhMGQzNzZjZjhkZDQ5OTQ0MDU5NzFkMWMxNjY5NTEyNmU2ZTJjMTE1MzQ5sTdJQw==: --dhchap-ctrl-secret DHHC-1:03:ZDQ2MmEwMzNkMTNlMTkwNTUzYThmYzNjMWUzMGZmMzQwYjY2NmE3Njc3OTQ5YWU5MTcwYTdmOGEzNmNmNTEzY+Io4Fg=: 00:19:19.029 02:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:19.287 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:19.287 02:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:19.287 02:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.287 02:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.287 02:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.287 02:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:19.287 02:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:19.287 02:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:19.546 02:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:19:19.546 02:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:19.546 02:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:19.546 02:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:19.546 02:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:19.546 02:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:19.546 02:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:19.546 02:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.546 02:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:19.546 02:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.546 02:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:19.546 02:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:19.546 02:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:19.804 00:19:19.804 02:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:19.804 02:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:19.804 02:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:20.062 02:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.062 02:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:20.062 02:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.062 02:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.062 
02:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.062 02:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:20.062 { 00:19:20.062 "cntlid": 19, 00:19:20.062 "qid": 0, 00:19:20.062 "state": "enabled", 00:19:20.062 "thread": "nvmf_tgt_poll_group_000", 00:19:20.062 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:20.062 "listen_address": { 00:19:20.062 "trtype": "TCP", 00:19:20.062 "adrfam": "IPv4", 00:19:20.062 "traddr": "10.0.0.2", 00:19:20.062 "trsvcid": "4420" 00:19:20.062 }, 00:19:20.062 "peer_address": { 00:19:20.062 "trtype": "TCP", 00:19:20.062 "adrfam": "IPv4", 00:19:20.062 "traddr": "10.0.0.1", 00:19:20.062 "trsvcid": "49432" 00:19:20.062 }, 00:19:20.062 "auth": { 00:19:20.062 "state": "completed", 00:19:20.062 "digest": "sha256", 00:19:20.062 "dhgroup": "ffdhe3072" 00:19:20.062 } 00:19:20.062 } 00:19:20.062 ]' 00:19:20.062 02:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:20.320 02:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:20.320 02:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:20.320 02:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:20.320 02:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:20.321 02:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:20.321 02:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:20.321 02:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:20.578 02:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2RmMjUwYzg4YmM4ZGRiNTk3OWYyZTEzOWM5Y2VmY2bnNpH/: --dhchap-ctrl-secret DHHC-1:02:ZTg0OTgxZmRmOTNhNmJlYTRkNmE4OWNhYWQ4ZDk4NjNkZDE0ODA3MTBkYmExMmI0tM1ZcA==: 00:19:20.578 02:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:N2RmMjUwYzg4YmM4ZGRiNTk3OWYyZTEzOWM5Y2VmY2bnNpH/: --dhchap-ctrl-secret DHHC-1:02:ZTg0OTgxZmRmOTNhNmJlYTRkNmE4OWNhYWQ4ZDk4NjNkZDE0ODA3MTBkYmExMmI0tM1ZcA==: 00:19:21.513 02:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:21.513 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:21.513 02:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:21.513 02:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.513 02:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.513 02:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.513 02:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:21.513 02:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:21.513 02:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:21.772 02:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:19:21.772 02:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:21.772 02:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:21.772 02:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:21.772 02:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:21.772 02:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:21.772 02:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:21.772 02:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.772 02:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.772 02:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.772 02:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:21.772 02:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:21.772 02:39:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:22.030 00:19:22.289 02:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:22.289 02:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:22.289 02:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:22.547 02:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.547 02:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:22.547 02:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.547 02:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.547 02:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.547 02:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:22.547 { 00:19:22.547 "cntlid": 21, 00:19:22.547 "qid": 0, 00:19:22.547 "state": "enabled", 00:19:22.547 "thread": "nvmf_tgt_poll_group_000", 00:19:22.547 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:22.547 "listen_address": { 00:19:22.547 "trtype": "TCP", 00:19:22.547 "adrfam": "IPv4", 00:19:22.547 "traddr": "10.0.0.2", 00:19:22.547 "trsvcid": "4420" 00:19:22.547 }, 00:19:22.547 "peer_address": { 
00:19:22.547 "trtype": "TCP", 00:19:22.547 "adrfam": "IPv4", 00:19:22.547 "traddr": "10.0.0.1", 00:19:22.547 "trsvcid": "49462" 00:19:22.547 }, 00:19:22.547 "auth": { 00:19:22.547 "state": "completed", 00:19:22.547 "digest": "sha256", 00:19:22.547 "dhgroup": "ffdhe3072" 00:19:22.547 } 00:19:22.547 } 00:19:22.547 ]' 00:19:22.547 02:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:22.547 02:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:22.547 02:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:22.547 02:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:22.547 02:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:22.548 02:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:22.548 02:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:22.548 02:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:22.806 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjgzZmJmNzZkNDQ2N2JmMzk1NGIwZDQ1MjMwMjVhMzVhNTBkMmIxNWFmNTMxYjJmEq20Pg==: --dhchap-ctrl-secret DHHC-1:01:NzQyYWQ5OGYzYWYxNTc3MzUyZDNkNGE4NTFiZjg1ZWQG4h4Q: 00:19:22.806 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret 
DHHC-1:02:MjgzZmJmNzZkNDQ2N2JmMzk1NGIwZDQ1MjMwMjVhMzVhNTBkMmIxNWFmNTMxYjJmEq20Pg==: --dhchap-ctrl-secret DHHC-1:01:NzQyYWQ5OGYzYWYxNTc3MzUyZDNkNGE4NTFiZjg1ZWQG4h4Q: 00:19:23.739 02:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:23.739 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:23.739 02:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:23.739 02:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.739 02:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.739 02:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.739 02:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:23.739 02:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:23.739 02:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:24.305 02:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:19:24.305 02:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:24.305 02:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:24.305 02:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:24.305 02:39:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:24.305 02:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:24.305 02:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:24.305 02:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.305 02:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.305 02:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.305 02:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:24.305 02:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:24.305 02:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:24.564 00:19:24.564 02:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:24.564 02:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:24.564 02:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:24.822 02:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.822 02:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:24.822 02:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.822 02:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.822 02:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.822 02:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:24.822 { 00:19:24.822 "cntlid": 23, 00:19:24.822 "qid": 0, 00:19:24.822 "state": "enabled", 00:19:24.822 "thread": "nvmf_tgt_poll_group_000", 00:19:24.822 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:24.822 "listen_address": { 00:19:24.822 "trtype": "TCP", 00:19:24.822 "adrfam": "IPv4", 00:19:24.822 "traddr": "10.0.0.2", 00:19:24.822 "trsvcid": "4420" 00:19:24.822 }, 00:19:24.822 "peer_address": { 00:19:24.822 "trtype": "TCP", 00:19:24.822 "adrfam": "IPv4", 00:19:24.822 "traddr": "10.0.0.1", 00:19:24.822 "trsvcid": "49490" 00:19:24.822 }, 00:19:24.822 "auth": { 00:19:24.822 "state": "completed", 00:19:24.822 "digest": "sha256", 00:19:24.822 "dhgroup": "ffdhe3072" 00:19:24.822 } 00:19:24.822 } 00:19:24.822 ]' 00:19:24.822 02:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:24.822 02:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:24.822 02:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:24.822 02:39:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:24.822 02:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:24.822 02:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:24.822 02:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:24.822 02:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:25.080 02:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmZkZTY4ZThjMzY1MTljMjIzODU3NDdiMjY3MGY0M2I0YThlM2I3NTQ3MDY2OWU2MjdiM2FkYjg2M2E2YWMyMcK4Uoo=: 00:19:25.080 02:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MmZkZTY4ZThjMzY1MTljMjIzODU3NDdiMjY3MGY0M2I0YThlM2I3NTQ3MDY2OWU2MjdiM2FkYjg2M2E2YWMyMcK4Uoo=: 00:19:26.014 02:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:26.014 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:26.014 02:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:26.014 02:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.014 02:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:19:26.014 02:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.014 02:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:26.014 02:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:26.014 02:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:26.014 02:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:26.579 02:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:19:26.579 02:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:26.579 02:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:26.579 02:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:26.579 02:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:26.579 02:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:26.579 02:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:26.579 02:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.579 02:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:19:26.579 02:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.579 02:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:26.579 02:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:26.580 02:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:26.837 00:19:26.837 02:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:26.837 02:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:26.837 02:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:27.095 02:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.095 02:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:27.095 02:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.095 02:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.095 02:39:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.095 02:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:27.095 { 00:19:27.095 "cntlid": 25, 00:19:27.095 "qid": 0, 00:19:27.095 "state": "enabled", 00:19:27.095 "thread": "nvmf_tgt_poll_group_000", 00:19:27.095 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:27.095 "listen_address": { 00:19:27.095 "trtype": "TCP", 00:19:27.095 "adrfam": "IPv4", 00:19:27.095 "traddr": "10.0.0.2", 00:19:27.095 "trsvcid": "4420" 00:19:27.095 }, 00:19:27.095 "peer_address": { 00:19:27.095 "trtype": "TCP", 00:19:27.095 "adrfam": "IPv4", 00:19:27.095 "traddr": "10.0.0.1", 00:19:27.095 "trsvcid": "49524" 00:19:27.095 }, 00:19:27.095 "auth": { 00:19:27.095 "state": "completed", 00:19:27.095 "digest": "sha256", 00:19:27.095 "dhgroup": "ffdhe4096" 00:19:27.095 } 00:19:27.095 } 00:19:27.095 ]' 00:19:27.095 02:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:27.095 02:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:27.095 02:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:27.095 02:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:27.095 02:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:27.354 02:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:27.354 02:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:27.354 02:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:27.612 02:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjMxZTFhMGQzNzZjZjhkZDQ5OTQ0MDU5NzFkMWMxNjY5NTEyNmU2ZTJjMTE1MzQ5sTdJQw==: --dhchap-ctrl-secret DHHC-1:03:ZDQ2MmEwMzNkMTNlMTkwNTUzYThmYzNjMWUzMGZmMzQwYjY2NmE3Njc3OTQ5YWU5MTcwYTdmOGEzNmNmNTEzY+Io4Fg=: 00:19:27.612 02:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YjMxZTFhMGQzNzZjZjhkZDQ5OTQ0MDU5NzFkMWMxNjY5NTEyNmU2ZTJjMTE1MzQ5sTdJQw==: --dhchap-ctrl-secret DHHC-1:03:ZDQ2MmEwMzNkMTNlMTkwNTUzYThmYzNjMWUzMGZmMzQwYjY2NmE3Njc3OTQ5YWU5MTcwYTdmOGEzNmNmNTEzY+Io4Fg=: 00:19:28.545 02:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:28.545 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:28.545 02:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:28.545 02:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.545 02:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.545 02:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.545 02:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:28.545 02:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:28.545 02:39:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:28.802 02:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:19:28.802 02:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:28.802 02:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:28.802 02:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:28.802 02:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:28.802 02:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:28.802 02:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:28.802 02:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.802 02:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.802 02:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.802 02:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:28.802 02:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:28.802 02:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:29.431 00:19:29.431 02:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:29.431 02:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:29.431 02:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.431 02:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.431 02:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:29.431 02:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.431 02:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.431 02:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.431 02:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:29.431 { 00:19:29.431 "cntlid": 27, 00:19:29.431 "qid": 0, 00:19:29.431 "state": "enabled", 00:19:29.431 "thread": "nvmf_tgt_poll_group_000", 00:19:29.431 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:29.431 "listen_address": { 00:19:29.431 "trtype": "TCP", 00:19:29.431 "adrfam": "IPv4", 00:19:29.431 "traddr": "10.0.0.2", 00:19:29.431 
"trsvcid": "4420" 00:19:29.431 }, 00:19:29.431 "peer_address": { 00:19:29.431 "trtype": "TCP", 00:19:29.431 "adrfam": "IPv4", 00:19:29.431 "traddr": "10.0.0.1", 00:19:29.431 "trsvcid": "57336" 00:19:29.431 }, 00:19:29.431 "auth": { 00:19:29.431 "state": "completed", 00:19:29.431 "digest": "sha256", 00:19:29.431 "dhgroup": "ffdhe4096" 00:19:29.431 } 00:19:29.431 } 00:19:29.431 ]' 00:19:29.431 02:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:29.431 02:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:29.431 02:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:29.717 02:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:29.717 02:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:29.717 02:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:29.717 02:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:29.717 02:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:29.975 02:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2RmMjUwYzg4YmM4ZGRiNTk3OWYyZTEzOWM5Y2VmY2bnNpH/: --dhchap-ctrl-secret DHHC-1:02:ZTg0OTgxZmRmOTNhNmJlYTRkNmE4OWNhYWQ4ZDk4NjNkZDE0ODA3MTBkYmExMmI0tM1ZcA==: 00:19:29.975 02:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:N2RmMjUwYzg4YmM4ZGRiNTk3OWYyZTEzOWM5Y2VmY2bnNpH/: --dhchap-ctrl-secret DHHC-1:02:ZTg0OTgxZmRmOTNhNmJlYTRkNmE4OWNhYWQ4ZDk4NjNkZDE0ODA3MTBkYmExMmI0tM1ZcA==: 00:19:30.908 02:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:30.908 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:30.908 02:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:30.908 02:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.908 02:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.908 02:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.908 02:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:30.908 02:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:30.908 02:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:31.166 02:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:19:31.166 02:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:31.166 02:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:31.166 02:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:31.166 02:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:31.166 02:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:31.166 02:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:31.166 02:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.166 02:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.166 02:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.166 02:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:31.166 02:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:31.166 02:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:31.732 00:19:31.732 02:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:31.732 02:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:19:31.732 02:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.990 02:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.990 02:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.990 02:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.990 02:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.990 02:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.990 02:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:31.990 { 00:19:31.990 "cntlid": 29, 00:19:31.990 "qid": 0, 00:19:31.990 "state": "enabled", 00:19:31.990 "thread": "nvmf_tgt_poll_group_000", 00:19:31.990 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:31.990 "listen_address": { 00:19:31.990 "trtype": "TCP", 00:19:31.990 "adrfam": "IPv4", 00:19:31.990 "traddr": "10.0.0.2", 00:19:31.990 "trsvcid": "4420" 00:19:31.990 }, 00:19:31.990 "peer_address": { 00:19:31.990 "trtype": "TCP", 00:19:31.990 "adrfam": "IPv4", 00:19:31.990 "traddr": "10.0.0.1", 00:19:31.990 "trsvcid": "57364" 00:19:31.990 }, 00:19:31.990 "auth": { 00:19:31.990 "state": "completed", 00:19:31.990 "digest": "sha256", 00:19:31.990 "dhgroup": "ffdhe4096" 00:19:31.990 } 00:19:31.990 } 00:19:31.990 ]' 00:19:31.990 02:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:31.990 02:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:31.990 02:39:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:31.990 02:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:31.991 02:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:31.991 02:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.991 02:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.991 02:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:32.249 02:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjgzZmJmNzZkNDQ2N2JmMzk1NGIwZDQ1MjMwMjVhMzVhNTBkMmIxNWFmNTMxYjJmEq20Pg==: --dhchap-ctrl-secret DHHC-1:01:NzQyYWQ5OGYzYWYxNTc3MzUyZDNkNGE4NTFiZjg1ZWQG4h4Q: 00:19:32.249 02:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MjgzZmJmNzZkNDQ2N2JmMzk1NGIwZDQ1MjMwMjVhMzVhNTBkMmIxNWFmNTMxYjJmEq20Pg==: --dhchap-ctrl-secret DHHC-1:01:NzQyYWQ5OGYzYWYxNTc3MzUyZDNkNGE4NTFiZjg1ZWQG4h4Q: 00:19:33.182 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:33.182 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:33.182 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:33.182 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.440 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.440 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.440 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:33.440 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:33.440 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:33.697 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:19:33.697 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:33.697 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:33.697 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:33.697 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:33.697 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:33.697 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:33.697 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.697 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.697 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.698 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:33.698 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:33.698 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:33.955 00:19:33.955 02:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:33.955 02:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:33.955 02:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:34.521 02:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.521 02:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:34.521 02:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.521 02:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:34.522 02:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.522 02:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:34.522 { 00:19:34.522 "cntlid": 31, 00:19:34.522 "qid": 0, 00:19:34.522 "state": "enabled", 00:19:34.522 "thread": "nvmf_tgt_poll_group_000", 00:19:34.522 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:34.522 "listen_address": { 00:19:34.522 "trtype": "TCP", 00:19:34.522 "adrfam": "IPv4", 00:19:34.522 "traddr": "10.0.0.2", 00:19:34.522 "trsvcid": "4420" 00:19:34.522 }, 00:19:34.522 "peer_address": { 00:19:34.522 "trtype": "TCP", 00:19:34.522 "adrfam": "IPv4", 00:19:34.522 "traddr": "10.0.0.1", 00:19:34.522 "trsvcid": "57390" 00:19:34.522 }, 00:19:34.522 "auth": { 00:19:34.522 "state": "completed", 00:19:34.522 "digest": "sha256", 00:19:34.522 "dhgroup": "ffdhe4096" 00:19:34.522 } 00:19:34.522 } 00:19:34.522 ]' 00:19:34.522 02:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:34.522 02:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:34.522 02:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:34.522 02:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:34.522 02:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:34.522 02:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:34.522 02:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:34.522 02:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:34.780 02:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmZkZTY4ZThjMzY1MTljMjIzODU3NDdiMjY3MGY0M2I0YThlM2I3NTQ3MDY2OWU2MjdiM2FkYjg2M2E2YWMyMcK4Uoo=: 00:19:34.780 02:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MmZkZTY4ZThjMzY1MTljMjIzODU3NDdiMjY3MGY0M2I0YThlM2I3NTQ3MDY2OWU2MjdiM2FkYjg2M2E2YWMyMcK4Uoo=: 00:19:35.713 02:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:35.713 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:35.713 02:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:35.713 02:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.713 02:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.713 02:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.713 02:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:35.713 02:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:35.713 02:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:35.713 02:39:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:35.971 02:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:19:35.971 02:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:35.971 02:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:35.971 02:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:35.971 02:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:35.971 02:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:35.971 02:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:35.971 02:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.971 02:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.971 02:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.971 02:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:35.971 02:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:35.971 02:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:36.537 00:19:36.537 02:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:36.537 02:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:36.537 02:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:36.795 02:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.795 02:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:36.795 02:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.795 02:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.795 02:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.795 02:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:36.795 { 00:19:36.795 "cntlid": 33, 00:19:36.795 "qid": 0, 00:19:36.795 "state": "enabled", 00:19:36.795 "thread": "nvmf_tgt_poll_group_000", 00:19:36.795 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:36.795 "listen_address": { 00:19:36.795 "trtype": "TCP", 00:19:36.795 "adrfam": "IPv4", 00:19:36.795 "traddr": "10.0.0.2", 00:19:36.795 
"trsvcid": "4420" 00:19:36.795 }, 00:19:36.795 "peer_address": { 00:19:36.795 "trtype": "TCP", 00:19:36.795 "adrfam": "IPv4", 00:19:36.795 "traddr": "10.0.0.1", 00:19:36.795 "trsvcid": "57424" 00:19:36.795 }, 00:19:36.795 "auth": { 00:19:36.795 "state": "completed", 00:19:36.795 "digest": "sha256", 00:19:36.795 "dhgroup": "ffdhe6144" 00:19:36.795 } 00:19:36.795 } 00:19:36.795 ]' 00:19:36.795 02:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:36.795 02:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:36.795 02:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:36.795 02:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:36.795 02:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:37.053 02:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:37.053 02:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:37.053 02:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:37.311 02:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjMxZTFhMGQzNzZjZjhkZDQ5OTQ0MDU5NzFkMWMxNjY5NTEyNmU2ZTJjMTE1MzQ5sTdJQw==: --dhchap-ctrl-secret DHHC-1:03:ZDQ2MmEwMzNkMTNlMTkwNTUzYThmYzNjMWUzMGZmMzQwYjY2NmE3Njc3OTQ5YWU5MTcwYTdmOGEzNmNmNTEzY+Io4Fg=: 00:19:37.311 02:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YjMxZTFhMGQzNzZjZjhkZDQ5OTQ0MDU5NzFkMWMxNjY5NTEyNmU2ZTJjMTE1MzQ5sTdJQw==: --dhchap-ctrl-secret DHHC-1:03:ZDQ2MmEwMzNkMTNlMTkwNTUzYThmYzNjMWUzMGZmMzQwYjY2NmE3Njc3OTQ5YWU5MTcwYTdmOGEzNmNmNTEzY+Io4Fg=: 00:19:38.242 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:38.242 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:38.242 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:38.242 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.242 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.242 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.242 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:38.242 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:38.242 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:38.500 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:19:38.500 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:38.500 02:39:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:38.500 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:38.500 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:38.500 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:38.500 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:38.500 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.500 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.500 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.500 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:38.500 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:38.500 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:39.066 00:19:39.066 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:39.066 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:39.066 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.324 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.324 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:39.324 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.324 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.324 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.324 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:39.324 { 00:19:39.324 "cntlid": 35, 00:19:39.324 "qid": 0, 00:19:39.324 "state": "enabled", 00:19:39.324 "thread": "nvmf_tgt_poll_group_000", 00:19:39.324 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:39.324 "listen_address": { 00:19:39.324 "trtype": "TCP", 00:19:39.324 "adrfam": "IPv4", 00:19:39.324 "traddr": "10.0.0.2", 00:19:39.324 "trsvcid": "4420" 00:19:39.324 }, 00:19:39.324 "peer_address": { 00:19:39.324 "trtype": "TCP", 00:19:39.324 "adrfam": "IPv4", 00:19:39.324 "traddr": "10.0.0.1", 00:19:39.324 "trsvcid": "44360" 00:19:39.324 }, 00:19:39.324 "auth": { 00:19:39.324 "state": "completed", 00:19:39.324 "digest": "sha256", 00:19:39.324 "dhgroup": "ffdhe6144" 00:19:39.324 } 00:19:39.324 } 00:19:39.324 ]' 00:19:39.324 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:39.324 02:39:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:39.324 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:39.324 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:39.324 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:39.325 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:39.325 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:39.325 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:39.890 02:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2RmMjUwYzg4YmM4ZGRiNTk3OWYyZTEzOWM5Y2VmY2bnNpH/: --dhchap-ctrl-secret DHHC-1:02:ZTg0OTgxZmRmOTNhNmJlYTRkNmE4OWNhYWQ4ZDk4NjNkZDE0ODA3MTBkYmExMmI0tM1ZcA==: 00:19:39.890 02:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:N2RmMjUwYzg4YmM4ZGRiNTk3OWYyZTEzOWM5Y2VmY2bnNpH/: --dhchap-ctrl-secret DHHC-1:02:ZTg0OTgxZmRmOTNhNmJlYTRkNmE4OWNhYWQ4ZDk4NjNkZDE0ODA3MTBkYmExMmI0tM1ZcA==: 00:19:40.824 02:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:40.824 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:40.824 02:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:40.824 02:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.824 02:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.824 02:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.824 02:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:40.824 02:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:40.824 02:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:41.081 02:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:19:41.081 02:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:41.081 02:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:41.081 02:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:41.081 02:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:41.081 02:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:41.081 02:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:19:41.081 02:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.081 02:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.081 02:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.081 02:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:41.081 02:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:41.081 02:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:41.646 00:19:41.646 02:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:41.646 02:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:41.646 02:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:41.904 02:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.904 02:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:41.904 02:39:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.904 02:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.904 02:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.904 02:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:41.904 { 00:19:41.904 "cntlid": 37, 00:19:41.904 "qid": 0, 00:19:41.904 "state": "enabled", 00:19:41.904 "thread": "nvmf_tgt_poll_group_000", 00:19:41.904 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:41.904 "listen_address": { 00:19:41.904 "trtype": "TCP", 00:19:41.904 "adrfam": "IPv4", 00:19:41.904 "traddr": "10.0.0.2", 00:19:41.904 "trsvcid": "4420" 00:19:41.904 }, 00:19:41.904 "peer_address": { 00:19:41.904 "trtype": "TCP", 00:19:41.904 "adrfam": "IPv4", 00:19:41.904 "traddr": "10.0.0.1", 00:19:41.904 "trsvcid": "44374" 00:19:41.904 }, 00:19:41.904 "auth": { 00:19:41.904 "state": "completed", 00:19:41.904 "digest": "sha256", 00:19:41.904 "dhgroup": "ffdhe6144" 00:19:41.904 } 00:19:41.904 } 00:19:41.904 ]' 00:19:41.904 02:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:41.904 02:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:41.904 02:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:41.904 02:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:41.904 02:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:42.162 02:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:42.162 02:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:42.162 02:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:42.420 02:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjgzZmJmNzZkNDQ2N2JmMzk1NGIwZDQ1MjMwMjVhMzVhNTBkMmIxNWFmNTMxYjJmEq20Pg==: --dhchap-ctrl-secret DHHC-1:01:NzQyYWQ5OGYzYWYxNTc3MzUyZDNkNGE4NTFiZjg1ZWQG4h4Q: 00:19:42.420 02:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MjgzZmJmNzZkNDQ2N2JmMzk1NGIwZDQ1MjMwMjVhMzVhNTBkMmIxNWFmNTMxYjJmEq20Pg==: --dhchap-ctrl-secret DHHC-1:01:NzQyYWQ5OGYzYWYxNTc3MzUyZDNkNGE4NTFiZjg1ZWQG4h4Q: 00:19:43.352 02:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:43.352 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:43.352 02:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:43.352 02:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.352 02:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.352 02:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.352 02:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:43.352 02:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:43.352 02:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:43.611 02:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:19:43.611 02:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:43.611 02:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:43.611 02:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:43.611 02:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:43.611 02:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:43.611 02:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:43.611 02:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.611 02:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.611 02:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.611 02:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:43.611 02:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:43.611 02:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:44.177 00:19:44.177 02:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:44.177 02:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:44.177 02:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.435 02:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.435 02:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:44.435 02:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.435 02:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.435 02:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.435 02:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:44.435 { 00:19:44.435 "cntlid": 39, 00:19:44.435 "qid": 0, 00:19:44.435 "state": "enabled", 00:19:44.435 "thread": "nvmf_tgt_poll_group_000", 00:19:44.435 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:44.435 "listen_address": { 00:19:44.435 "trtype": "TCP", 00:19:44.435 "adrfam": 
"IPv4", 00:19:44.435 "traddr": "10.0.0.2", 00:19:44.435 "trsvcid": "4420" 00:19:44.435 }, 00:19:44.435 "peer_address": { 00:19:44.435 "trtype": "TCP", 00:19:44.435 "adrfam": "IPv4", 00:19:44.435 "traddr": "10.0.0.1", 00:19:44.435 "trsvcid": "44404" 00:19:44.435 }, 00:19:44.435 "auth": { 00:19:44.435 "state": "completed", 00:19:44.435 "digest": "sha256", 00:19:44.435 "dhgroup": "ffdhe6144" 00:19:44.435 } 00:19:44.435 } 00:19:44.435 ]' 00:19:44.435 02:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:44.693 02:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:44.693 02:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:44.693 02:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:44.693 02:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:44.693 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:44.693 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:44.693 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:44.951 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmZkZTY4ZThjMzY1MTljMjIzODU3NDdiMjY3MGY0M2I0YThlM2I3NTQ3MDY2OWU2MjdiM2FkYjg2M2E2YWMyMcK4Uoo=: 00:19:44.951 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MmZkZTY4ZThjMzY1MTljMjIzODU3NDdiMjY3MGY0M2I0YThlM2I3NTQ3MDY2OWU2MjdiM2FkYjg2M2E2YWMyMcK4Uoo=: 00:19:45.883 02:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:45.883 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:45.884 02:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:45.884 02:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.884 02:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.884 02:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.884 02:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:45.884 02:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:45.884 02:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:45.884 02:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:46.141 02:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:19:46.141 02:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:46.141 02:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:46.141 
02:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:46.141 02:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:46.141 02:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:46.142 02:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:46.142 02:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.142 02:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.142 02:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.142 02:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:46.142 02:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:46.142 02:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:47.075 00:19:47.075 02:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:47.075 02:39:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:47.075 02:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:47.333 02:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.333 02:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:47.333 02:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.333 02:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.333 02:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.333 02:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:47.333 { 00:19:47.333 "cntlid": 41, 00:19:47.333 "qid": 0, 00:19:47.333 "state": "enabled", 00:19:47.333 "thread": "nvmf_tgt_poll_group_000", 00:19:47.333 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:47.333 "listen_address": { 00:19:47.333 "trtype": "TCP", 00:19:47.333 "adrfam": "IPv4", 00:19:47.333 "traddr": "10.0.0.2", 00:19:47.333 "trsvcid": "4420" 00:19:47.333 }, 00:19:47.333 "peer_address": { 00:19:47.333 "trtype": "TCP", 00:19:47.333 "adrfam": "IPv4", 00:19:47.333 "traddr": "10.0.0.1", 00:19:47.333 "trsvcid": "44428" 00:19:47.333 }, 00:19:47.333 "auth": { 00:19:47.333 "state": "completed", 00:19:47.333 "digest": "sha256", 00:19:47.333 "dhgroup": "ffdhe8192" 00:19:47.333 } 00:19:47.333 } 00:19:47.333 ]' 00:19:47.333 02:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:47.591 02:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:19:47.591 02:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:47.591 02:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:47.591 02:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:47.591 02:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:47.591 02:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:47.591 02:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.849 02:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjMxZTFhMGQzNzZjZjhkZDQ5OTQ0MDU5NzFkMWMxNjY5NTEyNmU2ZTJjMTE1MzQ5sTdJQw==: --dhchap-ctrl-secret DHHC-1:03:ZDQ2MmEwMzNkMTNlMTkwNTUzYThmYzNjMWUzMGZmMzQwYjY2NmE3Njc3OTQ5YWU5MTcwYTdmOGEzNmNmNTEzY+Io4Fg=: 00:19:47.849 02:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YjMxZTFhMGQzNzZjZjhkZDQ5OTQ0MDU5NzFkMWMxNjY5NTEyNmU2ZTJjMTE1MzQ5sTdJQw==: --dhchap-ctrl-secret DHHC-1:03:ZDQ2MmEwMzNkMTNlMTkwNTUzYThmYzNjMWUzMGZmMzQwYjY2NmE3Njc3OTQ5YWU5MTcwYTdmOGEzNmNmNTEzY+Io4Fg=: 00:19:48.782 02:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:48.782 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:48.782 02:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:48.782 02:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.782 02:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.782 02:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.782 02:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:48.782 02:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:48.782 02:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:49.039 02:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:19:49.039 02:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:49.039 02:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:49.039 02:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:49.039 02:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:49.039 02:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:49.039 02:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:19:49.039 02:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.039 02:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.039 02:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.039 02:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:49.039 02:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:49.039 02:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:49.971 00:19:49.971 02:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:49.971 02:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:49.971 02:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:50.229 02:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.229 02:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:50.229 02:39:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.229 02:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.229 02:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.229 02:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:50.229 { 00:19:50.229 "cntlid": 43, 00:19:50.229 "qid": 0, 00:19:50.229 "state": "enabled", 00:19:50.229 "thread": "nvmf_tgt_poll_group_000", 00:19:50.229 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:50.229 "listen_address": { 00:19:50.229 "trtype": "TCP", 00:19:50.229 "adrfam": "IPv4", 00:19:50.229 "traddr": "10.0.0.2", 00:19:50.229 "trsvcid": "4420" 00:19:50.229 }, 00:19:50.229 "peer_address": { 00:19:50.229 "trtype": "TCP", 00:19:50.229 "adrfam": "IPv4", 00:19:50.229 "traddr": "10.0.0.1", 00:19:50.229 "trsvcid": "33156" 00:19:50.229 }, 00:19:50.229 "auth": { 00:19:50.229 "state": "completed", 00:19:50.229 "digest": "sha256", 00:19:50.229 "dhgroup": "ffdhe8192" 00:19:50.229 } 00:19:50.229 } 00:19:50.229 ]' 00:19:50.229 02:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:50.229 02:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:50.229 02:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:50.229 02:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:50.229 02:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:50.229 02:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:50.229 02:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:50.229 02:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:50.795 02:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2RmMjUwYzg4YmM4ZGRiNTk3OWYyZTEzOWM5Y2VmY2bnNpH/: --dhchap-ctrl-secret DHHC-1:02:ZTg0OTgxZmRmOTNhNmJlYTRkNmE4OWNhYWQ4ZDk4NjNkZDE0ODA3MTBkYmExMmI0tM1ZcA==: 00:19:50.795 02:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:N2RmMjUwYzg4YmM4ZGRiNTk3OWYyZTEzOWM5Y2VmY2bnNpH/: --dhchap-ctrl-secret DHHC-1:02:ZTg0OTgxZmRmOTNhNmJlYTRkNmE4OWNhYWQ4ZDk4NjNkZDE0ODA3MTBkYmExMmI0tM1ZcA==: 00:19:51.727 02:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:51.727 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:51.727 02:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:51.727 02:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.727 02:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.727 02:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.727 02:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:51.727 02:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:51.728 02:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:51.985 02:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:19:51.985 02:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:51.985 02:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:51.985 02:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:51.985 02:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:51.985 02:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:51.985 02:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:51.985 02:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.985 02:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.985 02:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.985 02:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:51.985 02:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:51.985 02:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:52.919 00:19:52.919 02:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:52.919 02:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:52.919 02:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:53.176 02:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.176 02:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:53.176 02:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.176 02:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.176 02:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.176 02:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:53.176 { 00:19:53.176 "cntlid": 45, 00:19:53.176 "qid": 0, 00:19:53.176 "state": "enabled", 00:19:53.176 "thread": "nvmf_tgt_poll_group_000", 00:19:53.176 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:53.176 
"listen_address": { 00:19:53.176 "trtype": "TCP", 00:19:53.176 "adrfam": "IPv4", 00:19:53.176 "traddr": "10.0.0.2", 00:19:53.176 "trsvcid": "4420" 00:19:53.176 }, 00:19:53.176 "peer_address": { 00:19:53.176 "trtype": "TCP", 00:19:53.176 "adrfam": "IPv4", 00:19:53.177 "traddr": "10.0.0.1", 00:19:53.177 "trsvcid": "33180" 00:19:53.177 }, 00:19:53.177 "auth": { 00:19:53.177 "state": "completed", 00:19:53.177 "digest": "sha256", 00:19:53.177 "dhgroup": "ffdhe8192" 00:19:53.177 } 00:19:53.177 } 00:19:53.177 ]' 00:19:53.177 02:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:53.177 02:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:53.177 02:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:53.177 02:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:53.177 02:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:53.177 02:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:53.177 02:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:53.177 02:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:53.434 02:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjgzZmJmNzZkNDQ2N2JmMzk1NGIwZDQ1MjMwMjVhMzVhNTBkMmIxNWFmNTMxYjJmEq20Pg==: --dhchap-ctrl-secret DHHC-1:01:NzQyYWQ5OGYzYWYxNTc3MzUyZDNkNGE4NTFiZjg1ZWQG4h4Q: 00:19:53.434 02:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MjgzZmJmNzZkNDQ2N2JmMzk1NGIwZDQ1MjMwMjVhMzVhNTBkMmIxNWFmNTMxYjJmEq20Pg==: --dhchap-ctrl-secret DHHC-1:01:NzQyYWQ5OGYzYWYxNTc3MzUyZDNkNGE4NTFiZjg1ZWQG4h4Q: 00:19:54.807 02:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:54.807 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:54.807 02:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:54.807 02:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.807 02:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.807 02:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.807 02:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:54.807 02:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:54.807 02:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:54.807 02:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:19:54.807 02:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:54.807 02:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:19:54.807 02:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:54.807 02:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:54.807 02:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:54.807 02:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:54.807 02:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.807 02:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.807 02:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.807 02:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:54.807 02:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:54.807 02:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:55.741 00:19:55.741 02:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:55.741 02:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:19:55.741 02:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:55.999 02:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.999 02:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:55.999 02:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.999 02:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.999 02:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.999 02:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:55.999 { 00:19:55.999 "cntlid": 47, 00:19:55.999 "qid": 0, 00:19:55.999 "state": "enabled", 00:19:55.999 "thread": "nvmf_tgt_poll_group_000", 00:19:55.999 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:55.999 "listen_address": { 00:19:55.999 "trtype": "TCP", 00:19:55.999 "adrfam": "IPv4", 00:19:55.999 "traddr": "10.0.0.2", 00:19:55.999 "trsvcid": "4420" 00:19:55.999 }, 00:19:55.999 "peer_address": { 00:19:55.999 "trtype": "TCP", 00:19:55.999 "adrfam": "IPv4", 00:19:55.999 "traddr": "10.0.0.1", 00:19:55.999 "trsvcid": "33208" 00:19:55.999 }, 00:19:55.999 "auth": { 00:19:55.999 "state": "completed", 00:19:55.999 "digest": "sha256", 00:19:55.999 "dhgroup": "ffdhe8192" 00:19:55.999 } 00:19:55.999 } 00:19:55.999 ]' 00:19:55.999 02:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:55.999 02:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:55.999 02:40:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:55.999 02:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:55.999 02:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:56.257 02:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.257 02:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.257 02:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:56.515 02:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmZkZTY4ZThjMzY1MTljMjIzODU3NDdiMjY3MGY0M2I0YThlM2I3NTQ3MDY2OWU2MjdiM2FkYjg2M2E2YWMyMcK4Uoo=: 00:19:56.515 02:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MmZkZTY4ZThjMzY1MTljMjIzODU3NDdiMjY3MGY0M2I0YThlM2I3NTQ3MDY2OWU2MjdiM2FkYjg2M2E2YWMyMcK4Uoo=: 00:19:57.448 02:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:57.448 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:57.448 02:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:57.448 02:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:19:57.448 02:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.448 02:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.448 02:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:19:57.448 02:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:57.448 02:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:57.448 02:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:57.448 02:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:58.013 02:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:19:58.013 02:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:58.013 02:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:58.013 02:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:58.013 02:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:58.013 02:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:58.013 02:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:58.013 
02:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.013 02:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.013 02:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.013 02:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:58.013 02:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:58.013 02:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:58.272 00:19:58.272 02:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:58.272 02:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:58.272 02:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.530 02:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.531 02:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:58.531 02:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.531 02:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.531 02:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.531 02:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:58.531 { 00:19:58.531 "cntlid": 49, 00:19:58.531 "qid": 0, 00:19:58.531 "state": "enabled", 00:19:58.531 "thread": "nvmf_tgt_poll_group_000", 00:19:58.531 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:58.531 "listen_address": { 00:19:58.531 "trtype": "TCP", 00:19:58.531 "adrfam": "IPv4", 00:19:58.531 "traddr": "10.0.0.2", 00:19:58.531 "trsvcid": "4420" 00:19:58.531 }, 00:19:58.531 "peer_address": { 00:19:58.531 "trtype": "TCP", 00:19:58.531 "adrfam": "IPv4", 00:19:58.531 "traddr": "10.0.0.1", 00:19:58.531 "trsvcid": "33228" 00:19:58.531 }, 00:19:58.531 "auth": { 00:19:58.531 "state": "completed", 00:19:58.531 "digest": "sha384", 00:19:58.531 "dhgroup": "null" 00:19:58.531 } 00:19:58.531 } 00:19:58.531 ]' 00:19:58.531 02:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:58.531 02:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:58.531 02:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:58.531 02:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:58.531 02:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:58.531 02:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:58.531 02:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:19:58.531 02:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:58.789 02:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjMxZTFhMGQzNzZjZjhkZDQ5OTQ0MDU5NzFkMWMxNjY5NTEyNmU2ZTJjMTE1MzQ5sTdJQw==: --dhchap-ctrl-secret DHHC-1:03:ZDQ2MmEwMzNkMTNlMTkwNTUzYThmYzNjMWUzMGZmMzQwYjY2NmE3Njc3OTQ5YWU5MTcwYTdmOGEzNmNmNTEzY+Io4Fg=: 00:19:58.789 02:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YjMxZTFhMGQzNzZjZjhkZDQ5OTQ0MDU5NzFkMWMxNjY5NTEyNmU2ZTJjMTE1MzQ5sTdJQw==: --dhchap-ctrl-secret DHHC-1:03:ZDQ2MmEwMzNkMTNlMTkwNTUzYThmYzNjMWUzMGZmMzQwYjY2NmE3Njc3OTQ5YWU5MTcwYTdmOGEzNmNmNTEzY+Io4Fg=: 00:19:59.752 02:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.752 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:59.752 02:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:59.752 02:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.752 02:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.752 02:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.752 02:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:59.752 02:40:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:59.752 02:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:00.047 02:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:20:00.047 02:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:00.047 02:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:00.047 02:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:00.047 02:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:00.047 02:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:00.047 02:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.047 02:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.047 02:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.047 02:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.047 02:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.047 02:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.047 02:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.628 00:20:00.628 02:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:00.628 02:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:00.628 02:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.886 02:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.886 02:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.886 02:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.886 02:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.886 02:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.886 02:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:00.886 { 00:20:00.886 "cntlid": 51, 00:20:00.886 "qid": 0, 00:20:00.886 "state": "enabled", 00:20:00.886 "thread": "nvmf_tgt_poll_group_000", 00:20:00.886 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:00.886 "listen_address": { 00:20:00.886 "trtype": "TCP", 00:20:00.886 "adrfam": "IPv4", 00:20:00.886 "traddr": "10.0.0.2", 00:20:00.886 "trsvcid": "4420" 00:20:00.886 }, 00:20:00.886 "peer_address": { 00:20:00.886 "trtype": "TCP", 00:20:00.886 "adrfam": "IPv4", 00:20:00.886 "traddr": "10.0.0.1", 00:20:00.886 "trsvcid": "45710" 00:20:00.886 }, 00:20:00.886 "auth": { 00:20:00.886 "state": "completed", 00:20:00.886 "digest": "sha384", 00:20:00.886 "dhgroup": "null" 00:20:00.886 } 00:20:00.886 } 00:20:00.886 ]' 00:20:00.886 02:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:00.886 02:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:00.886 02:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:00.886 02:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:00.886 02:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:00.886 02:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.886 02:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.886 02:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:01.143 02:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2RmMjUwYzg4YmM4ZGRiNTk3OWYyZTEzOWM5Y2VmY2bnNpH/: --dhchap-ctrl-secret DHHC-1:02:ZTg0OTgxZmRmOTNhNmJlYTRkNmE4OWNhYWQ4ZDk4NjNkZDE0ODA3MTBkYmExMmI0tM1ZcA==: 00:20:01.143 02:40:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:N2RmMjUwYzg4YmM4ZGRiNTk3OWYyZTEzOWM5Y2VmY2bnNpH/: --dhchap-ctrl-secret DHHC-1:02:ZTg0OTgxZmRmOTNhNmJlYTRkNmE4OWNhYWQ4ZDk4NjNkZDE0ODA3MTBkYmExMmI0tM1ZcA==: 00:20:02.077 02:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:02.077 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:02.077 02:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:02.077 02:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.077 02:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.077 02:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.077 02:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:02.077 02:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:02.077 02:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:02.335 02:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:20:02.335 02:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:20:02.335 02:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:02.335 02:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:02.335 02:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:02.335 02:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:02.335 02:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:02.335 02:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.335 02:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.335 02:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.335 02:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:02.335 02:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:02.335 02:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:02.900 00:20:02.900 02:40:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:02.900 02:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:02.900 02:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:03.158 02:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.158 02:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:03.158 02:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.158 02:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.158 02:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.159 02:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:03.159 { 00:20:03.159 "cntlid": 53, 00:20:03.159 "qid": 0, 00:20:03.159 "state": "enabled", 00:20:03.159 "thread": "nvmf_tgt_poll_group_000", 00:20:03.159 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:03.159 "listen_address": { 00:20:03.159 "trtype": "TCP", 00:20:03.159 "adrfam": "IPv4", 00:20:03.159 "traddr": "10.0.0.2", 00:20:03.159 "trsvcid": "4420" 00:20:03.159 }, 00:20:03.159 "peer_address": { 00:20:03.159 "trtype": "TCP", 00:20:03.159 "adrfam": "IPv4", 00:20:03.159 "traddr": "10.0.0.1", 00:20:03.159 "trsvcid": "45740" 00:20:03.159 }, 00:20:03.159 "auth": { 00:20:03.159 "state": "completed", 00:20:03.159 "digest": "sha384", 00:20:03.159 "dhgroup": "null" 00:20:03.159 } 00:20:03.159 } 00:20:03.159 ]' 00:20:03.159 02:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:20:03.159 02:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:03.159 02:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:03.159 02:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:03.159 02:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:03.159 02:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:03.159 02:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:03.159 02:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:03.417 02:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjgzZmJmNzZkNDQ2N2JmMzk1NGIwZDQ1MjMwMjVhMzVhNTBkMmIxNWFmNTMxYjJmEq20Pg==: --dhchap-ctrl-secret DHHC-1:01:NzQyYWQ5OGYzYWYxNTc3MzUyZDNkNGE4NTFiZjg1ZWQG4h4Q: 00:20:03.417 02:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MjgzZmJmNzZkNDQ2N2JmMzk1NGIwZDQ1MjMwMjVhMzVhNTBkMmIxNWFmNTMxYjJmEq20Pg==: --dhchap-ctrl-secret DHHC-1:01:NzQyYWQ5OGYzYWYxNTc3MzUyZDNkNGE4NTFiZjg1ZWQG4h4Q: 00:20:04.351 02:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:04.351 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:04.351 02:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:04.351 02:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.351 02:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.351 02:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.351 02:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:04.351 02:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:04.351 02:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:04.609 02:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:20:04.609 02:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:04.609 02:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:04.609 02:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:04.609 02:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:04.609 02:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:04.867 02:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:04.867 
02:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.867 02:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.867 02:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.867 02:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:04.867 02:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:04.867 02:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:05.125 00:20:05.125 02:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:05.125 02:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:05.125 02:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:05.382 02:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.382 02:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:05.382 02:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.382 02:40:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.382 02:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.382 02:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:05.382 { 00:20:05.382 "cntlid": 55, 00:20:05.382 "qid": 0, 00:20:05.383 "state": "enabled", 00:20:05.383 "thread": "nvmf_tgt_poll_group_000", 00:20:05.383 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:05.383 "listen_address": { 00:20:05.383 "trtype": "TCP", 00:20:05.383 "adrfam": "IPv4", 00:20:05.383 "traddr": "10.0.0.2", 00:20:05.383 "trsvcid": "4420" 00:20:05.383 }, 00:20:05.383 "peer_address": { 00:20:05.383 "trtype": "TCP", 00:20:05.383 "adrfam": "IPv4", 00:20:05.383 "traddr": "10.0.0.1", 00:20:05.383 "trsvcid": "45772" 00:20:05.383 }, 00:20:05.383 "auth": { 00:20:05.383 "state": "completed", 00:20:05.383 "digest": "sha384", 00:20:05.383 "dhgroup": "null" 00:20:05.383 } 00:20:05.383 } 00:20:05.383 ]' 00:20:05.383 02:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:05.383 02:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:05.383 02:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:05.383 02:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:05.383 02:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:05.383 02:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:05.383 02:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:05.383 02:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:05.640 02:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmZkZTY4ZThjMzY1MTljMjIzODU3NDdiMjY3MGY0M2I0YThlM2I3NTQ3MDY2OWU2MjdiM2FkYjg2M2E2YWMyMcK4Uoo=: 00:20:05.640 02:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MmZkZTY4ZThjMzY1MTljMjIzODU3NDdiMjY3MGY0M2I0YThlM2I3NTQ3MDY2OWU2MjdiM2FkYjg2M2E2YWMyMcK4Uoo=: 00:20:07.015 02:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:07.015 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:07.015 02:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:07.015 02:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.015 02:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.015 02:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.015 02:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:07.015 02:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:07.015 02:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:07.015 02:40:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:07.015 02:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:20:07.015 02:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:07.015 02:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:07.015 02:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:07.015 02:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:07.015 02:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:07.015 02:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:07.015 02:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.015 02:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.015 02:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.015 02:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:07.015 02:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:07.015 02:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:07.581 00:20:07.581 02:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:07.581 02:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:07.581 02:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:07.839 02:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.839 02:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:07.839 02:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.839 02:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.839 02:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.839 02:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:07.839 { 00:20:07.839 "cntlid": 57, 00:20:07.839 "qid": 0, 00:20:07.839 "state": "enabled", 00:20:07.839 "thread": "nvmf_tgt_poll_group_000", 00:20:07.839 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:07.839 "listen_address": { 00:20:07.839 "trtype": "TCP", 00:20:07.839 "adrfam": "IPv4", 00:20:07.839 "traddr": "10.0.0.2", 00:20:07.839 
"trsvcid": "4420" 00:20:07.839 }, 00:20:07.839 "peer_address": { 00:20:07.839 "trtype": "TCP", 00:20:07.839 "adrfam": "IPv4", 00:20:07.839 "traddr": "10.0.0.1", 00:20:07.839 "trsvcid": "45786" 00:20:07.839 }, 00:20:07.839 "auth": { 00:20:07.839 "state": "completed", 00:20:07.839 "digest": "sha384", 00:20:07.839 "dhgroup": "ffdhe2048" 00:20:07.839 } 00:20:07.839 } 00:20:07.839 ]' 00:20:07.839 02:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:07.839 02:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:07.839 02:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:07.839 02:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:07.839 02:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:07.839 02:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:07.839 02:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:07.839 02:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:08.097 02:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjMxZTFhMGQzNzZjZjhkZDQ5OTQ0MDU5NzFkMWMxNjY5NTEyNmU2ZTJjMTE1MzQ5sTdJQw==: --dhchap-ctrl-secret DHHC-1:03:ZDQ2MmEwMzNkMTNlMTkwNTUzYThmYzNjMWUzMGZmMzQwYjY2NmE3Njc3OTQ5YWU5MTcwYTdmOGEzNmNmNTEzY+Io4Fg=: 00:20:08.097 02:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YjMxZTFhMGQzNzZjZjhkZDQ5OTQ0MDU5NzFkMWMxNjY5NTEyNmU2ZTJjMTE1MzQ5sTdJQw==: --dhchap-ctrl-secret DHHC-1:03:ZDQ2MmEwMzNkMTNlMTkwNTUzYThmYzNjMWUzMGZmMzQwYjY2NmE3Njc3OTQ5YWU5MTcwYTdmOGEzNmNmNTEzY+Io4Fg=: 00:20:09.031 02:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:09.031 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:09.031 02:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:09.031 02:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.031 02:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.289 02:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.289 02:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:09.289 02:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:09.289 02:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:09.547 02:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:20:09.547 02:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:09.547 02:40:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:09.547 02:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:09.547 02:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:09.547 02:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:09.547 02:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:09.547 02:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.547 02:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.547 02:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.547 02:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:09.547 02:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:09.547 02:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:09.805 00:20:09.805 02:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:09.805 02:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:09.805 02:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:10.062 02:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.062 02:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:10.062 02:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.062 02:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.062 02:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.062 02:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:10.062 { 00:20:10.062 "cntlid": 59, 00:20:10.062 "qid": 0, 00:20:10.062 "state": "enabled", 00:20:10.062 "thread": "nvmf_tgt_poll_group_000", 00:20:10.062 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:10.062 "listen_address": { 00:20:10.062 "trtype": "TCP", 00:20:10.062 "adrfam": "IPv4", 00:20:10.062 "traddr": "10.0.0.2", 00:20:10.062 "trsvcid": "4420" 00:20:10.062 }, 00:20:10.062 "peer_address": { 00:20:10.062 "trtype": "TCP", 00:20:10.062 "adrfam": "IPv4", 00:20:10.062 "traddr": "10.0.0.1", 00:20:10.062 "trsvcid": "46812" 00:20:10.062 }, 00:20:10.062 "auth": { 00:20:10.062 "state": "completed", 00:20:10.062 "digest": "sha384", 00:20:10.062 "dhgroup": "ffdhe2048" 00:20:10.062 } 00:20:10.062 } 00:20:10.062 ]' 00:20:10.062 02:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:10.320 02:40:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:10.320 02:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:10.320 02:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:10.320 02:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:10.320 02:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:10.320 02:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:10.320 02:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:10.578 02:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2RmMjUwYzg4YmM4ZGRiNTk3OWYyZTEzOWM5Y2VmY2bnNpH/: --dhchap-ctrl-secret DHHC-1:02:ZTg0OTgxZmRmOTNhNmJlYTRkNmE4OWNhYWQ4ZDk4NjNkZDE0ODA3MTBkYmExMmI0tM1ZcA==: 00:20:10.578 02:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:N2RmMjUwYzg4YmM4ZGRiNTk3OWYyZTEzOWM5Y2VmY2bnNpH/: --dhchap-ctrl-secret DHHC-1:02:ZTg0OTgxZmRmOTNhNmJlYTRkNmE4OWNhYWQ4ZDk4NjNkZDE0ODA3MTBkYmExMmI0tM1ZcA==: 00:20:11.511 02:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:11.511 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:11.511 02:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:11.511 02:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.511 02:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.511 02:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.511 02:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:11.511 02:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:11.511 02:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:11.768 02:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:20:11.768 02:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:11.768 02:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:11.768 02:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:11.768 02:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:11.768 02:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:11.768 02:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
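The trace above repeats one cycle per key id: reconfigure the host-side DH-HMAC-CHAP options, register the host NQN on the subsystem with that key, attach a controller (which performs the authentication), then detach and deregister. The following is a dry-run sketch of that cycle, with the RPCs echoed instead of executed; the real `target/auth.sh` additionally runs an `nvme connect`/`disconnect` pass, omits the controller key for entries without a `ckey`, and of course needs `scripts/rpc.py`, the `/var/tmp/host.sock` socket, and a live target.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the per-key cycle repeated in this log (target/auth.sh @120-@123).
# Commands are echoed, not executed; a live SPDK target is assumed in the real run.
digest=sha384
dhgroup=ffdhe2048
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

# Stand-in for /var/jenkins/workspace/.../spdk/scripts/rpc.py -s /var/tmp/host.sock
rpc() { echo "rpc.py -s /var/tmp/host.sock $*"; }

for keyid in 0 1 2 3; do
  # Restrict the host to one digest/dhgroup combination for this iteration.
  rpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
  # Register the host with the key under test (ckey omitted in the script when unset).
  rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
      --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
  # Attaching the controller triggers the DH-HMAC-CHAP exchange.
  rpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key "key$keyid"
  # Tear down before the next key.
  rpc bdev_nvme_detach_controller nvme0
  rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
done
```

Between the attach and detach steps, the script also dumps the subsystem's qpairs and checks that the negotiated `auth` fields match what was configured, which is the JSON seen throughout this log.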
00:20:11.769 02:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.769 02:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.769 02:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.769 02:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:11.769 02:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:11.769 02:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:12.334 00:20:12.334 02:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:12.335 02:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:12.335 02:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:12.593 02:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.593 02:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:12.593 02:40:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.593 02:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.593 02:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.593 02:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:12.593 { 00:20:12.593 "cntlid": 61, 00:20:12.593 "qid": 0, 00:20:12.593 "state": "enabled", 00:20:12.593 "thread": "nvmf_tgt_poll_group_000", 00:20:12.593 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:12.593 "listen_address": { 00:20:12.593 "trtype": "TCP", 00:20:12.593 "adrfam": "IPv4", 00:20:12.593 "traddr": "10.0.0.2", 00:20:12.593 "trsvcid": "4420" 00:20:12.593 }, 00:20:12.593 "peer_address": { 00:20:12.593 "trtype": "TCP", 00:20:12.593 "adrfam": "IPv4", 00:20:12.593 "traddr": "10.0.0.1", 00:20:12.593 "trsvcid": "46830" 00:20:12.593 }, 00:20:12.593 "auth": { 00:20:12.593 "state": "completed", 00:20:12.593 "digest": "sha384", 00:20:12.593 "dhgroup": "ffdhe2048" 00:20:12.593 } 00:20:12.593 } 00:20:12.593 ]' 00:20:12.593 02:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:12.593 02:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:12.593 02:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:12.593 02:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:12.593 02:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:12.594 02:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:12.594 02:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:12.594 02:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.160 02:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjgzZmJmNzZkNDQ2N2JmMzk1NGIwZDQ1MjMwMjVhMzVhNTBkMmIxNWFmNTMxYjJmEq20Pg==: --dhchap-ctrl-secret DHHC-1:01:NzQyYWQ5OGYzYWYxNTc3MzUyZDNkNGE4NTFiZjg1ZWQG4h4Q: 00:20:13.160 02:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MjgzZmJmNzZkNDQ2N2JmMzk1NGIwZDQ1MjMwMjVhMzVhNTBkMmIxNWFmNTMxYjJmEq20Pg==: --dhchap-ctrl-secret DHHC-1:01:NzQyYWQ5OGYzYWYxNTc3MzUyZDNkNGE4NTFiZjg1ZWQG4h4Q: 00:20:14.094 02:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:14.094 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:14.094 02:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:14.094 02:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.094 02:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.094 02:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.094 02:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:14.094 02:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:14.094 02:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:14.352 02:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:20:14.352 02:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:14.352 02:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:14.352 02:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:14.352 02:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:14.352 02:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:14.352 02:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:14.352 02:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.352 02:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.352 02:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.352 02:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:14.352 02:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:14.352 02:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:14.610 00:20:14.610 02:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:14.610 02:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:14.610 02:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:14.868 02:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.868 02:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:14.868 02:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.868 02:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.127 02:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.127 02:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:15.127 { 00:20:15.127 "cntlid": 63, 00:20:15.127 "qid": 0, 00:20:15.127 "state": "enabled", 00:20:15.127 "thread": "nvmf_tgt_poll_group_000", 00:20:15.127 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:15.127 "listen_address": { 00:20:15.127 "trtype": "TCP", 00:20:15.127 "adrfam": 
"IPv4", 00:20:15.127 "traddr": "10.0.0.2", 00:20:15.127 "trsvcid": "4420" 00:20:15.127 }, 00:20:15.127 "peer_address": { 00:20:15.127 "trtype": "TCP", 00:20:15.127 "adrfam": "IPv4", 00:20:15.127 "traddr": "10.0.0.1", 00:20:15.127 "trsvcid": "46868" 00:20:15.127 }, 00:20:15.127 "auth": { 00:20:15.127 "state": "completed", 00:20:15.127 "digest": "sha384", 00:20:15.127 "dhgroup": "ffdhe2048" 00:20:15.127 } 00:20:15.127 } 00:20:15.127 ]' 00:20:15.127 02:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:15.127 02:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:15.127 02:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:15.127 02:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:15.127 02:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:15.127 02:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:15.127 02:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:15.127 02:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:15.385 02:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmZkZTY4ZThjMzY1MTljMjIzODU3NDdiMjY3MGY0M2I0YThlM2I3NTQ3MDY2OWU2MjdiM2FkYjg2M2E2YWMyMcK4Uoo=: 00:20:15.385 02:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MmZkZTY4ZThjMzY1MTljMjIzODU3NDdiMjY3MGY0M2I0YThlM2I3NTQ3MDY2OWU2MjdiM2FkYjg2M2E2YWMyMcK4Uoo=: 00:20:16.319 02:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:16.319 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:16.319 02:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:16.319 02:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.319 02:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.319 02:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.319 02:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:16.319 02:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:16.319 02:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:16.319 02:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:16.885 02:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:20:16.885 02:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:16.885 02:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:16.885 
02:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:16.885 02:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:16.885 02:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:16.885 02:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:16.885 02:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.885 02:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.885 02:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.885 02:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:16.885 02:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:16.885 02:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:17.144 00:20:17.144 02:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:17.144 02:40:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:17.144 02:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:17.402 02:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.402 02:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:17.402 02:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.402 02:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.402 02:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.402 02:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:17.402 { 00:20:17.402 "cntlid": 65, 00:20:17.402 "qid": 0, 00:20:17.402 "state": "enabled", 00:20:17.402 "thread": "nvmf_tgt_poll_group_000", 00:20:17.402 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:17.402 "listen_address": { 00:20:17.402 "trtype": "TCP", 00:20:17.402 "adrfam": "IPv4", 00:20:17.402 "traddr": "10.0.0.2", 00:20:17.402 "trsvcid": "4420" 00:20:17.402 }, 00:20:17.402 "peer_address": { 00:20:17.402 "trtype": "TCP", 00:20:17.402 "adrfam": "IPv4", 00:20:17.402 "traddr": "10.0.0.1", 00:20:17.402 "trsvcid": "46886" 00:20:17.402 }, 00:20:17.402 "auth": { 00:20:17.402 "state": "completed", 00:20:17.402 "digest": "sha384", 00:20:17.402 "dhgroup": "ffdhe3072" 00:20:17.402 } 00:20:17.402 } 00:20:17.402 ]' 00:20:17.402 02:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:17.402 02:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:20:17.402 02:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:17.402 02:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:17.402 02:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:17.402 02:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:17.402 02:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:17.402 02:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:17.968 02:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjMxZTFhMGQzNzZjZjhkZDQ5OTQ0MDU5NzFkMWMxNjY5NTEyNmU2ZTJjMTE1MzQ5sTdJQw==: --dhchap-ctrl-secret DHHC-1:03:ZDQ2MmEwMzNkMTNlMTkwNTUzYThmYzNjMWUzMGZmMzQwYjY2NmE3Njc3OTQ5YWU5MTcwYTdmOGEzNmNmNTEzY+Io4Fg=: 00:20:17.968 02:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YjMxZTFhMGQzNzZjZjhkZDQ5OTQ0MDU5NzFkMWMxNjY5NTEyNmU2ZTJjMTE1MzQ5sTdJQw==: --dhchap-ctrl-secret DHHC-1:03:ZDQ2MmEwMzNkMTNlMTkwNTUzYThmYzNjMWUzMGZmMzQwYjY2NmE3Njc3OTQ5YWU5MTcwYTdmOGEzNmNmNTEzY+Io4Fg=: 00:20:18.901 02:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:18.901 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:18.901 02:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:18.901 02:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.901 02:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.901 02:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.901 02:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:18.901 02:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:18.901 02:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:19.159 02:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:20:19.159 02:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:19.159 02:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:19.159 02:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:19.159 02:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:19.159 02:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:19.159 02:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:20:19.160 02:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.160 02:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.160 02:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.160 02:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:19.160 02:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:19.160 02:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:19.417 00:20:19.417 02:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:19.417 02:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:19.417 02:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:19.676 02:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.676 02:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:19.676 02:40:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.676 02:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.676 02:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.676 02:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:19.676 { 00:20:19.676 "cntlid": 67, 00:20:19.676 "qid": 0, 00:20:19.676 "state": "enabled", 00:20:19.676 "thread": "nvmf_tgt_poll_group_000", 00:20:19.676 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:19.676 "listen_address": { 00:20:19.676 "trtype": "TCP", 00:20:19.676 "adrfam": "IPv4", 00:20:19.676 "traddr": "10.0.0.2", 00:20:19.676 "trsvcid": "4420" 00:20:19.676 }, 00:20:19.676 "peer_address": { 00:20:19.676 "trtype": "TCP", 00:20:19.676 "adrfam": "IPv4", 00:20:19.676 "traddr": "10.0.0.1", 00:20:19.676 "trsvcid": "33258" 00:20:19.676 }, 00:20:19.676 "auth": { 00:20:19.676 "state": "completed", 00:20:19.676 "digest": "sha384", 00:20:19.676 "dhgroup": "ffdhe3072" 00:20:19.676 } 00:20:19.676 } 00:20:19.676 ]' 00:20:19.676 02:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:19.676 02:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:19.676 02:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:19.676 02:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:19.676 02:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:19.933 02:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:19.933 02:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:19.933 02:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:20.191 02:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2RmMjUwYzg4YmM4ZGRiNTk3OWYyZTEzOWM5Y2VmY2bnNpH/: --dhchap-ctrl-secret DHHC-1:02:ZTg0OTgxZmRmOTNhNmJlYTRkNmE4OWNhYWQ4ZDk4NjNkZDE0ODA3MTBkYmExMmI0tM1ZcA==: 00:20:20.191 02:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:N2RmMjUwYzg4YmM4ZGRiNTk3OWYyZTEzOWM5Y2VmY2bnNpH/: --dhchap-ctrl-secret DHHC-1:02:ZTg0OTgxZmRmOTNhNmJlYTRkNmE4OWNhYWQ4ZDk4NjNkZDE0ODA3MTBkYmExMmI0tM1ZcA==: 00:20:21.125 02:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:21.125 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:21.125 02:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:21.125 02:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.125 02:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.125 02:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.125 02:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:21.125 02:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:21.125 02:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:21.383 02:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:20:21.383 02:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:21.383 02:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:21.383 02:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:21.383 02:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:21.383 02:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:21.383 02:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:21.383 02:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.383 02:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.383 02:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.383 02:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:21.383 02:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:21.383 02:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:21.641 00:20:21.898 02:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:21.898 02:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:21.898 02:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:22.157 02:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.157 02:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:22.157 02:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.157 02:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.157 02:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.157 02:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:22.157 { 00:20:22.157 "cntlid": 69, 00:20:22.157 "qid": 0, 00:20:22.157 "state": "enabled", 00:20:22.157 "thread": "nvmf_tgt_poll_group_000", 00:20:22.157 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:22.157 
"listen_address": { 00:20:22.157 "trtype": "TCP", 00:20:22.157 "adrfam": "IPv4", 00:20:22.157 "traddr": "10.0.0.2", 00:20:22.157 "trsvcid": "4420" 00:20:22.157 }, 00:20:22.157 "peer_address": { 00:20:22.157 "trtype": "TCP", 00:20:22.157 "adrfam": "IPv4", 00:20:22.157 "traddr": "10.0.0.1", 00:20:22.157 "trsvcid": "33280" 00:20:22.157 }, 00:20:22.157 "auth": { 00:20:22.157 "state": "completed", 00:20:22.157 "digest": "sha384", 00:20:22.157 "dhgroup": "ffdhe3072" 00:20:22.157 } 00:20:22.157 } 00:20:22.157 ]' 00:20:22.157 02:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:22.157 02:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:22.157 02:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:22.157 02:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:22.157 02:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:22.157 02:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:22.157 02:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:22.157 02:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:22.416 02:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjgzZmJmNzZkNDQ2N2JmMzk1NGIwZDQ1MjMwMjVhMzVhNTBkMmIxNWFmNTMxYjJmEq20Pg==: --dhchap-ctrl-secret DHHC-1:01:NzQyYWQ5OGYzYWYxNTc3MzUyZDNkNGE4NTFiZjg1ZWQG4h4Q: 00:20:22.416 02:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MjgzZmJmNzZkNDQ2N2JmMzk1NGIwZDQ1MjMwMjVhMzVhNTBkMmIxNWFmNTMxYjJmEq20Pg==: --dhchap-ctrl-secret DHHC-1:01:NzQyYWQ5OGYzYWYxNTc3MzUyZDNkNGE4NTFiZjg1ZWQG4h4Q: 00:20:23.351 02:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:23.351 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:23.351 02:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:23.351 02:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.351 02:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.351 02:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.351 02:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:23.351 02:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:23.351 02:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:23.917 02:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:20:23.917 02:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:23.917 02:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:20:23.917 02:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:23.917 02:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:23.917 02:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:23.918 02:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:23.918 02:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.918 02:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.918 02:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.918 02:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:23.918 02:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:23.918 02:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:24.176 00:20:24.176 02:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:24.176 02:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:20:24.176 02:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:24.434 02:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.434 02:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:24.434 02:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.434 02:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.434 02:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.434 02:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:24.434 { 00:20:24.434 "cntlid": 71, 00:20:24.434 "qid": 0, 00:20:24.434 "state": "enabled", 00:20:24.434 "thread": "nvmf_tgt_poll_group_000", 00:20:24.434 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:24.434 "listen_address": { 00:20:24.434 "trtype": "TCP", 00:20:24.434 "adrfam": "IPv4", 00:20:24.434 "traddr": "10.0.0.2", 00:20:24.434 "trsvcid": "4420" 00:20:24.434 }, 00:20:24.434 "peer_address": { 00:20:24.434 "trtype": "TCP", 00:20:24.434 "adrfam": "IPv4", 00:20:24.434 "traddr": "10.0.0.1", 00:20:24.434 "trsvcid": "33310" 00:20:24.434 }, 00:20:24.434 "auth": { 00:20:24.434 "state": "completed", 00:20:24.434 "digest": "sha384", 00:20:24.434 "dhgroup": "ffdhe3072" 00:20:24.434 } 00:20:24.434 } 00:20:24.434 ]' 00:20:24.434 02:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:24.434 02:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:24.434 02:40:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:24.434 02:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:24.434 02:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:24.434 02:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:24.434 02:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:24.434 02:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:24.692 02:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmZkZTY4ZThjMzY1MTljMjIzODU3NDdiMjY3MGY0M2I0YThlM2I3NTQ3MDY2OWU2MjdiM2FkYjg2M2E2YWMyMcK4Uoo=: 00:20:24.692 02:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MmZkZTY4ZThjMzY1MTljMjIzODU3NDdiMjY3MGY0M2I0YThlM2I3NTQ3MDY2OWU2MjdiM2FkYjg2M2E2YWMyMcK4Uoo=: 00:20:25.625 02:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:25.625 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:25.625 02:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:25.625 02:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:20:25.625 02:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.884 02:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.884 02:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:25.884 02:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:25.884 02:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:25.884 02:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:26.142 02:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:20:26.142 02:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:26.142 02:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:26.142 02:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:26.142 02:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:26.142 02:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:26.142 02:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:26.142 02:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:20:26.142 02:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.142 02:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.142 02:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:26.142 02:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:26.142 02:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:26.400 00:20:26.400 02:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:26.400 02:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:26.400 02:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:26.658 02:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.658 02:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:26.658 02:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.658 02:40:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.658 02:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.658 02:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:26.658 { 00:20:26.658 "cntlid": 73, 00:20:26.658 "qid": 0, 00:20:26.658 "state": "enabled", 00:20:26.658 "thread": "nvmf_tgt_poll_group_000", 00:20:26.658 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:26.658 "listen_address": { 00:20:26.658 "trtype": "TCP", 00:20:26.658 "adrfam": "IPv4", 00:20:26.658 "traddr": "10.0.0.2", 00:20:26.658 "trsvcid": "4420" 00:20:26.658 }, 00:20:26.658 "peer_address": { 00:20:26.658 "trtype": "TCP", 00:20:26.658 "adrfam": "IPv4", 00:20:26.658 "traddr": "10.0.0.1", 00:20:26.658 "trsvcid": "33338" 00:20:26.658 }, 00:20:26.658 "auth": { 00:20:26.658 "state": "completed", 00:20:26.658 "digest": "sha384", 00:20:26.658 "dhgroup": "ffdhe4096" 00:20:26.658 } 00:20:26.658 } 00:20:26.658 ]' 00:20:26.658 02:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:26.658 02:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:26.658 02:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:26.916 02:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:26.916 02:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:26.916 02:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.916 02:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.916 02:40:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:27.173 02:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjMxZTFhMGQzNzZjZjhkZDQ5OTQ0MDU5NzFkMWMxNjY5NTEyNmU2ZTJjMTE1MzQ5sTdJQw==: --dhchap-ctrl-secret DHHC-1:03:ZDQ2MmEwMzNkMTNlMTkwNTUzYThmYzNjMWUzMGZmMzQwYjY2NmE3Njc3OTQ5YWU5MTcwYTdmOGEzNmNmNTEzY+Io4Fg=: 00:20:27.174 02:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YjMxZTFhMGQzNzZjZjhkZDQ5OTQ0MDU5NzFkMWMxNjY5NTEyNmU2ZTJjMTE1MzQ5sTdJQw==: --dhchap-ctrl-secret DHHC-1:03:ZDQ2MmEwMzNkMTNlMTkwNTUzYThmYzNjMWUzMGZmMzQwYjY2NmE3Njc3OTQ5YWU5MTcwYTdmOGEzNmNmNTEzY+Io4Fg=: 00:20:28.107 02:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:28.107 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:28.107 02:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:28.107 02:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.107 02:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.107 02:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.107 02:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:28.107 02:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:28.107 02:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:28.365 02:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:20:28.365 02:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:28.365 02:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:28.365 02:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:28.365 02:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:28.365 02:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:28.365 02:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:28.365 02:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.365 02:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.365 02:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.365 02:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:28.365 02:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:28.365 02:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:28.930 00:20:28.930 02:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:28.930 02:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:28.930 02:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:29.188 02:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.188 02:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:29.188 02:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.188 02:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.188 02:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.188 02:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:29.188 { 00:20:29.188 "cntlid": 75, 00:20:29.188 "qid": 0, 00:20:29.188 "state": "enabled", 00:20:29.188 "thread": "nvmf_tgt_poll_group_000", 00:20:29.188 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:29.188 
"listen_address": { 00:20:29.188 "trtype": "TCP", 00:20:29.188 "adrfam": "IPv4", 00:20:29.188 "traddr": "10.0.0.2", 00:20:29.188 "trsvcid": "4420" 00:20:29.188 }, 00:20:29.188 "peer_address": { 00:20:29.188 "trtype": "TCP", 00:20:29.188 "adrfam": "IPv4", 00:20:29.188 "traddr": "10.0.0.1", 00:20:29.188 "trsvcid": "46314" 00:20:29.188 }, 00:20:29.188 "auth": { 00:20:29.188 "state": "completed", 00:20:29.188 "digest": "sha384", 00:20:29.188 "dhgroup": "ffdhe4096" 00:20:29.188 } 00:20:29.188 } 00:20:29.188 ]' 00:20:29.188 02:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:29.188 02:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:29.188 02:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:29.188 02:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:29.188 02:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:29.188 02:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:29.188 02:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:29.188 02:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:29.481 02:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2RmMjUwYzg4YmM4ZGRiNTk3OWYyZTEzOWM5Y2VmY2bnNpH/: --dhchap-ctrl-secret DHHC-1:02:ZTg0OTgxZmRmOTNhNmJlYTRkNmE4OWNhYWQ4ZDk4NjNkZDE0ODA3MTBkYmExMmI0tM1ZcA==: 00:20:29.481 02:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:N2RmMjUwYzg4YmM4ZGRiNTk3OWYyZTEzOWM5Y2VmY2bnNpH/: --dhchap-ctrl-secret DHHC-1:02:ZTg0OTgxZmRmOTNhNmJlYTRkNmE4OWNhYWQ4ZDk4NjNkZDE0ODA3MTBkYmExMmI0tM1ZcA==: 00:20:30.471 02:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:30.471 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:30.471 02:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:30.471 02:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.471 02:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.471 02:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.471 02:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:30.471 02:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:30.471 02:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:30.730 02:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:20:30.730 02:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:30.730 02:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:20:30.730 02:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:30.730 02:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:30.730 02:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:30.730 02:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:30.730 02:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.730 02:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.730 02:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.730 02:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:30.730 02:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:30.730 02:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:31.295 00:20:31.295 02:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:20:31.295 02:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:31.295 02:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:31.554 02:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.554 02:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:31.554 02:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.554 02:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.554 02:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.554 02:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:31.554 { 00:20:31.554 "cntlid": 77, 00:20:31.554 "qid": 0, 00:20:31.554 "state": "enabled", 00:20:31.554 "thread": "nvmf_tgt_poll_group_000", 00:20:31.554 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:31.554 "listen_address": { 00:20:31.554 "trtype": "TCP", 00:20:31.554 "adrfam": "IPv4", 00:20:31.554 "traddr": "10.0.0.2", 00:20:31.554 "trsvcid": "4420" 00:20:31.554 }, 00:20:31.554 "peer_address": { 00:20:31.554 "trtype": "TCP", 00:20:31.554 "adrfam": "IPv4", 00:20:31.554 "traddr": "10.0.0.1", 00:20:31.554 "trsvcid": "46334" 00:20:31.554 }, 00:20:31.554 "auth": { 00:20:31.554 "state": "completed", 00:20:31.554 "digest": "sha384", 00:20:31.554 "dhgroup": "ffdhe4096" 00:20:31.554 } 00:20:31.554 } 00:20:31.554 ]' 00:20:31.554 02:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:31.554 02:40:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:31.554 02:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:31.554 02:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:31.554 02:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:31.554 02:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:31.554 02:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:31.554 02:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:32.121 02:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjgzZmJmNzZkNDQ2N2JmMzk1NGIwZDQ1MjMwMjVhMzVhNTBkMmIxNWFmNTMxYjJmEq20Pg==: --dhchap-ctrl-secret DHHC-1:01:NzQyYWQ5OGYzYWYxNTc3MzUyZDNkNGE4NTFiZjg1ZWQG4h4Q: 00:20:32.121 02:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MjgzZmJmNzZkNDQ2N2JmMzk1NGIwZDQ1MjMwMjVhMzVhNTBkMmIxNWFmNTMxYjJmEq20Pg==: --dhchap-ctrl-secret DHHC-1:01:NzQyYWQ5OGYzYWYxNTc3MzUyZDNkNGE4NTFiZjg1ZWQG4h4Q: 00:20:33.054 02:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.054 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.054 02:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:33.054 02:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.054 02:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.054 02:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.054 02:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:33.054 02:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:33.054 02:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:33.313 02:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:20:33.313 02:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:33.313 02:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:33.313 02:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:33.313 02:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:33.313 02:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:33.313 02:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:33.313 02:40:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.313 02:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.313 02:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.313 02:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:33.313 02:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:33.313 02:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:33.879 00:20:33.879 02:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:33.879 02:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:33.879 02:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.137 02:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.137 02:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.137 02:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.137 02:40:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.137 02:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.137 02:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:34.137 { 00:20:34.137 "cntlid": 79, 00:20:34.137 "qid": 0, 00:20:34.137 "state": "enabled", 00:20:34.137 "thread": "nvmf_tgt_poll_group_000", 00:20:34.137 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:34.137 "listen_address": { 00:20:34.137 "trtype": "TCP", 00:20:34.137 "adrfam": "IPv4", 00:20:34.137 "traddr": "10.0.0.2", 00:20:34.137 "trsvcid": "4420" 00:20:34.137 }, 00:20:34.137 "peer_address": { 00:20:34.137 "trtype": "TCP", 00:20:34.137 "adrfam": "IPv4", 00:20:34.137 "traddr": "10.0.0.1", 00:20:34.137 "trsvcid": "46358" 00:20:34.137 }, 00:20:34.137 "auth": { 00:20:34.137 "state": "completed", 00:20:34.137 "digest": "sha384", 00:20:34.137 "dhgroup": "ffdhe4096" 00:20:34.137 } 00:20:34.137 } 00:20:34.137 ]' 00:20:34.137 02:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:34.137 02:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:34.137 02:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:34.137 02:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:34.137 02:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:34.137 02:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:34.137 02:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:34.137 02:40:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:34.396 02:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmZkZTY4ZThjMzY1MTljMjIzODU3NDdiMjY3MGY0M2I0YThlM2I3NTQ3MDY2OWU2MjdiM2FkYjg2M2E2YWMyMcK4Uoo=: 00:20:34.396 02:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MmZkZTY4ZThjMzY1MTljMjIzODU3NDdiMjY3MGY0M2I0YThlM2I3NTQ3MDY2OWU2MjdiM2FkYjg2M2E2YWMyMcK4Uoo=: 00:20:35.330 02:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:35.330 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:35.330 02:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:35.330 02:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.330 02:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.330 02:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.330 02:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:35.330 02:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:35.330 02:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe6144 00:20:35.330 02:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:35.895 02:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:20:35.895 02:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:35.895 02:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:35.895 02:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:35.895 02:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:35.895 02:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:35.895 02:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:35.895 02:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.896 02:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.896 02:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.896 02:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:35.896 02:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:35.896 02:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:36.461 00:20:36.461 02:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:36.461 02:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:36.461 02:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:36.720 02:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.720 02:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:36.720 02:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.720 02:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.720 02:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.720 02:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:36.720 { 00:20:36.720 "cntlid": 81, 00:20:36.720 "qid": 0, 00:20:36.720 "state": "enabled", 00:20:36.720 "thread": "nvmf_tgt_poll_group_000", 00:20:36.720 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:36.720 "listen_address": { 
00:20:36.720 "trtype": "TCP", 00:20:36.720 "adrfam": "IPv4", 00:20:36.720 "traddr": "10.0.0.2", 00:20:36.720 "trsvcid": "4420" 00:20:36.720 }, 00:20:36.720 "peer_address": { 00:20:36.720 "trtype": "TCP", 00:20:36.720 "adrfam": "IPv4", 00:20:36.720 "traddr": "10.0.0.1", 00:20:36.720 "trsvcid": "46388" 00:20:36.720 }, 00:20:36.720 "auth": { 00:20:36.720 "state": "completed", 00:20:36.720 "digest": "sha384", 00:20:36.720 "dhgroup": "ffdhe6144" 00:20:36.720 } 00:20:36.720 } 00:20:36.720 ]' 00:20:36.720 02:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:36.721 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:36.721 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:36.721 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:36.721 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:36.721 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:36.721 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:36.721 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:36.979 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjMxZTFhMGQzNzZjZjhkZDQ5OTQ0MDU5NzFkMWMxNjY5NTEyNmU2ZTJjMTE1MzQ5sTdJQw==: --dhchap-ctrl-secret DHHC-1:03:ZDQ2MmEwMzNkMTNlMTkwNTUzYThmYzNjMWUzMGZmMzQwYjY2NmE3Njc3OTQ5YWU5MTcwYTdmOGEzNmNmNTEzY+Io4Fg=: 00:20:36.979 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YjMxZTFhMGQzNzZjZjhkZDQ5OTQ0MDU5NzFkMWMxNjY5NTEyNmU2ZTJjMTE1MzQ5sTdJQw==: --dhchap-ctrl-secret DHHC-1:03:ZDQ2MmEwMzNkMTNlMTkwNTUzYThmYzNjMWUzMGZmMzQwYjY2NmE3Njc3OTQ5YWU5MTcwYTdmOGEzNmNmNTEzY+Io4Fg=: 00:20:37.912 02:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:38.171 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.171 02:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:38.171 02:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.171 02:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.171 02:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.171 02:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:38.171 02:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:38.171 02:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:38.429 02:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:20:38.429 02:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:20:38.429 02:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:38.429 02:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:38.429 02:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:38.429 02:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.429 02:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.429 02:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.429 02:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.429 02:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.429 02:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.429 02:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.429 02:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.994 00:20:38.994 02:40:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:38.994 02:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:38.995 02:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.252 02:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.252 02:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:39.252 02:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.252 02:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.252 02:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.252 02:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:39.252 { 00:20:39.252 "cntlid": 83, 00:20:39.252 "qid": 0, 00:20:39.252 "state": "enabled", 00:20:39.252 "thread": "nvmf_tgt_poll_group_000", 00:20:39.252 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:39.252 "listen_address": { 00:20:39.252 "trtype": "TCP", 00:20:39.252 "adrfam": "IPv4", 00:20:39.252 "traddr": "10.0.0.2", 00:20:39.252 "trsvcid": "4420" 00:20:39.252 }, 00:20:39.252 "peer_address": { 00:20:39.252 "trtype": "TCP", 00:20:39.252 "adrfam": "IPv4", 00:20:39.252 "traddr": "10.0.0.1", 00:20:39.252 "trsvcid": "38194" 00:20:39.252 }, 00:20:39.252 "auth": { 00:20:39.252 "state": "completed", 00:20:39.252 "digest": "sha384", 00:20:39.252 "dhgroup": "ffdhe6144" 00:20:39.252 } 00:20:39.252 } 00:20:39.252 ]' 00:20:39.252 02:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:20:39.252 02:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:39.252 02:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:39.511 02:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:39.511 02:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:39.511 02:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.511 02:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.511 02:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.769 02:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2RmMjUwYzg4YmM4ZGRiNTk3OWYyZTEzOWM5Y2VmY2bnNpH/: --dhchap-ctrl-secret DHHC-1:02:ZTg0OTgxZmRmOTNhNmJlYTRkNmE4OWNhYWQ4ZDk4NjNkZDE0ODA3MTBkYmExMmI0tM1ZcA==: 00:20:39.769 02:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:N2RmMjUwYzg4YmM4ZGRiNTk3OWYyZTEzOWM5Y2VmY2bnNpH/: --dhchap-ctrl-secret DHHC-1:02:ZTg0OTgxZmRmOTNhNmJlYTRkNmE4OWNhYWQ4ZDk4NjNkZDE0ODA3MTBkYmExMmI0tM1ZcA==: 00:20:40.703 02:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:40.703 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:40.703 02:40:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:40.703 02:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.703 02:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.703 02:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.703 02:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:40.703 02:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:40.703 02:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:40.960 02:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:20:40.960 02:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:40.960 02:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:40.960 02:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:40.960 02:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:40.960 02:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:40.960 02:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.960 02:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.960 02:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.960 02:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.960 02:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.960 02:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.960 02:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:41.526 00:20:41.526 02:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:41.526 02:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:41.526 02:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:41.784 02:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.784 02:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:41.784 02:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.784 02:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.784 02:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.784 02:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:41.784 { 00:20:41.784 "cntlid": 85, 00:20:41.784 "qid": 0, 00:20:41.784 "state": "enabled", 00:20:41.784 "thread": "nvmf_tgt_poll_group_000", 00:20:41.784 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:41.784 "listen_address": { 00:20:41.784 "trtype": "TCP", 00:20:41.784 "adrfam": "IPv4", 00:20:41.784 "traddr": "10.0.0.2", 00:20:41.784 "trsvcid": "4420" 00:20:41.784 }, 00:20:41.784 "peer_address": { 00:20:41.784 "trtype": "TCP", 00:20:41.784 "adrfam": "IPv4", 00:20:41.784 "traddr": "10.0.0.1", 00:20:41.784 "trsvcid": "38230" 00:20:41.784 }, 00:20:41.784 "auth": { 00:20:41.784 "state": "completed", 00:20:41.784 "digest": "sha384", 00:20:41.784 "dhgroup": "ffdhe6144" 00:20:41.784 } 00:20:41.784 } 00:20:41.784 ]' 00:20:41.784 02:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:41.784 02:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:41.784 02:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:41.784 02:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:41.784 02:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:42.042 02:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:20:42.042 02:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:42.042 02:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:42.300 02:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjgzZmJmNzZkNDQ2N2JmMzk1NGIwZDQ1MjMwMjVhMzVhNTBkMmIxNWFmNTMxYjJmEq20Pg==: --dhchap-ctrl-secret DHHC-1:01:NzQyYWQ5OGYzYWYxNTc3MzUyZDNkNGE4NTFiZjg1ZWQG4h4Q: 00:20:42.300 02:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MjgzZmJmNzZkNDQ2N2JmMzk1NGIwZDQ1MjMwMjVhMzVhNTBkMmIxNWFmNTMxYjJmEq20Pg==: --dhchap-ctrl-secret DHHC-1:01:NzQyYWQ5OGYzYWYxNTc3MzUyZDNkNGE4NTFiZjg1ZWQG4h4Q: 00:20:43.233 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:43.233 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:43.233 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:43.233 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.233 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.233 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.233 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:20:43.233 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:43.233 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:43.491 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:20:43.491 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:43.491 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:43.491 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:43.491 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:43.491 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:43.491 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:43.491 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.491 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.491 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.491 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:43.491 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:43.491 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:44.056 00:20:44.056 02:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:44.056 02:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:44.056 02:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:44.314 02:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.314 02:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:44.314 02:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.314 02:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.314 02:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.314 02:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:44.314 { 00:20:44.314 "cntlid": 87, 00:20:44.314 "qid": 0, 00:20:44.314 "state": "enabled", 00:20:44.314 "thread": "nvmf_tgt_poll_group_000", 00:20:44.314 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:44.314 "listen_address": { 00:20:44.314 "trtype": 
"TCP", 00:20:44.314 "adrfam": "IPv4", 00:20:44.314 "traddr": "10.0.0.2", 00:20:44.314 "trsvcid": "4420" 00:20:44.314 }, 00:20:44.314 "peer_address": { 00:20:44.314 "trtype": "TCP", 00:20:44.314 "adrfam": "IPv4", 00:20:44.314 "traddr": "10.0.0.1", 00:20:44.314 "trsvcid": "38264" 00:20:44.314 }, 00:20:44.314 "auth": { 00:20:44.314 "state": "completed", 00:20:44.314 "digest": "sha384", 00:20:44.314 "dhgroup": "ffdhe6144" 00:20:44.314 } 00:20:44.314 } 00:20:44.314 ]' 00:20:44.314 02:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:44.572 02:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:44.572 02:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:44.572 02:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:44.572 02:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:44.572 02:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:44.572 02:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:44.572 02:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:44.830 02:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmZkZTY4ZThjMzY1MTljMjIzODU3NDdiMjY3MGY0M2I0YThlM2I3NTQ3MDY2OWU2MjdiM2FkYjg2M2E2YWMyMcK4Uoo=: 00:20:44.830 02:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MmZkZTY4ZThjMzY1MTljMjIzODU3NDdiMjY3MGY0M2I0YThlM2I3NTQ3MDY2OWU2MjdiM2FkYjg2M2E2YWMyMcK4Uoo=: 00:20:45.764 02:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:45.764 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:45.764 02:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:45.764 02:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.764 02:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.764 02:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.764 02:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:45.764 02:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:45.764 02:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:45.764 02:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:46.022 02:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:20:46.022 02:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:46.022 02:40:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:46.022 02:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:46.022 02:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:46.022 02:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:46.022 02:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:46.022 02:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.022 02:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.022 02:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.022 02:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:46.022 02:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:46.022 02:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:46.956 00:20:46.956 02:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:46.956 02:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:46.956 02:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:47.214 02:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.214 02:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:47.214 02:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.214 02:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.214 02:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.214 02:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:47.214 { 00:20:47.214 "cntlid": 89, 00:20:47.214 "qid": 0, 00:20:47.214 "state": "enabled", 00:20:47.214 "thread": "nvmf_tgt_poll_group_000", 00:20:47.214 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:47.214 "listen_address": { 00:20:47.214 "trtype": "TCP", 00:20:47.214 "adrfam": "IPv4", 00:20:47.214 "traddr": "10.0.0.2", 00:20:47.214 "trsvcid": "4420" 00:20:47.214 }, 00:20:47.214 "peer_address": { 00:20:47.214 "trtype": "TCP", 00:20:47.214 "adrfam": "IPv4", 00:20:47.214 "traddr": "10.0.0.1", 00:20:47.214 "trsvcid": "38290" 00:20:47.214 }, 00:20:47.214 "auth": { 00:20:47.214 "state": "completed", 00:20:47.214 "digest": "sha384", 00:20:47.214 "dhgroup": "ffdhe8192" 00:20:47.214 } 00:20:47.214 } 00:20:47.214 ]' 00:20:47.214 02:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:47.214 02:40:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:47.214 02:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:47.214 02:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:47.214 02:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:47.214 02:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:47.214 02:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:47.214 02:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:47.780 02:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjMxZTFhMGQzNzZjZjhkZDQ5OTQ0MDU5NzFkMWMxNjY5NTEyNmU2ZTJjMTE1MzQ5sTdJQw==: --dhchap-ctrl-secret DHHC-1:03:ZDQ2MmEwMzNkMTNlMTkwNTUzYThmYzNjMWUzMGZmMzQwYjY2NmE3Njc3OTQ5YWU5MTcwYTdmOGEzNmNmNTEzY+Io4Fg=: 00:20:47.780 02:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YjMxZTFhMGQzNzZjZjhkZDQ5OTQ0MDU5NzFkMWMxNjY5NTEyNmU2ZTJjMTE1MzQ5sTdJQw==: --dhchap-ctrl-secret DHHC-1:03:ZDQ2MmEwMzNkMTNlMTkwNTUzYThmYzNjMWUzMGZmMzQwYjY2NmE3Njc3OTQ5YWU5MTcwYTdmOGEzNmNmNTEzY+Io4Fg=: 00:20:48.713 02:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:48.713 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:20:48.713 02:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:48.713 02:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.713 02:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.713 02:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.713 02:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:48.713 02:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:48.713 02:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:48.971 02:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:20:48.971 02:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:48.971 02:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:48.971 02:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:48.971 02:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:48.971 02:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:48.971 02:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.971 02:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.971 02:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.971 02:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.971 02:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.971 02:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.971 02:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:49.904 00:20:49.904 02:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:49.904 02:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:49.904 02:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:50.162 02:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.162 02:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:50.162 02:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.162 02:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.162 02:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.162 02:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:50.162 { 00:20:50.162 "cntlid": 91, 00:20:50.162 "qid": 0, 00:20:50.162 "state": "enabled", 00:20:50.162 "thread": "nvmf_tgt_poll_group_000", 00:20:50.162 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:50.162 "listen_address": { 00:20:50.162 "trtype": "TCP", 00:20:50.162 "adrfam": "IPv4", 00:20:50.162 "traddr": "10.0.0.2", 00:20:50.162 "trsvcid": "4420" 00:20:50.162 }, 00:20:50.162 "peer_address": { 00:20:50.162 "trtype": "TCP", 00:20:50.162 "adrfam": "IPv4", 00:20:50.162 "traddr": "10.0.0.1", 00:20:50.162 "trsvcid": "40156" 00:20:50.162 }, 00:20:50.162 "auth": { 00:20:50.162 "state": "completed", 00:20:50.162 "digest": "sha384", 00:20:50.162 "dhgroup": "ffdhe8192" 00:20:50.162 } 00:20:50.162 } 00:20:50.162 ]' 00:20:50.162 02:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:50.162 02:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:50.162 02:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:50.162 02:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:50.162 02:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:50.162 02:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:20:50.162 02:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:50.162 02:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:50.420 02:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2RmMjUwYzg4YmM4ZGRiNTk3OWYyZTEzOWM5Y2VmY2bnNpH/: --dhchap-ctrl-secret DHHC-1:02:ZTg0OTgxZmRmOTNhNmJlYTRkNmE4OWNhYWQ4ZDk4NjNkZDE0ODA3MTBkYmExMmI0tM1ZcA==: 00:20:50.420 02:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:N2RmMjUwYzg4YmM4ZGRiNTk3OWYyZTEzOWM5Y2VmY2bnNpH/: --dhchap-ctrl-secret DHHC-1:02:ZTg0OTgxZmRmOTNhNmJlYTRkNmE4OWNhYWQ4ZDk4NjNkZDE0ODA3MTBkYmExMmI0tM1ZcA==: 00:20:51.794 02:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:51.794 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:51.794 02:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:51.794 02:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.794 02:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.794 02:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.794 02:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:20:51.794 02:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:51.794 02:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:51.794 02:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:20:51.794 02:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:51.794 02:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:51.794 02:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:51.794 02:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:51.794 02:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:51.794 02:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:51.794 02:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.794 02:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.794 02:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.794 02:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:51.794 02:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:51.794 02:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:52.726 00:20:52.726 02:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:52.726 02:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:52.726 02:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:52.984 02:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.242 02:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:53.242 02:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.242 02:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.242 02:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.242 02:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:53.242 { 00:20:53.242 "cntlid": 93, 00:20:53.242 "qid": 0, 00:20:53.242 "state": "enabled", 00:20:53.242 "thread": "nvmf_tgt_poll_group_000", 00:20:53.242 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:53.242 "listen_address": { 00:20:53.242 "trtype": "TCP", 00:20:53.242 "adrfam": "IPv4", 00:20:53.242 "traddr": "10.0.0.2", 00:20:53.242 "trsvcid": "4420" 00:20:53.242 }, 00:20:53.242 "peer_address": { 00:20:53.242 "trtype": "TCP", 00:20:53.242 "adrfam": "IPv4", 00:20:53.242 "traddr": "10.0.0.1", 00:20:53.242 "trsvcid": "40190" 00:20:53.242 }, 00:20:53.242 "auth": { 00:20:53.242 "state": "completed", 00:20:53.242 "digest": "sha384", 00:20:53.242 "dhgroup": "ffdhe8192" 00:20:53.242 } 00:20:53.242 } 00:20:53.242 ]' 00:20:53.242 02:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:53.242 02:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:53.242 02:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:53.242 02:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:53.242 02:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:53.242 02:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:53.242 02:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:53.243 02:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:53.501 02:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjgzZmJmNzZkNDQ2N2JmMzk1NGIwZDQ1MjMwMjVhMzVhNTBkMmIxNWFmNTMxYjJmEq20Pg==: --dhchap-ctrl-secret DHHC-1:01:NzQyYWQ5OGYzYWYxNTc3MzUyZDNkNGE4NTFiZjg1ZWQG4h4Q: 00:20:53.501 02:41:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MjgzZmJmNzZkNDQ2N2JmMzk1NGIwZDQ1MjMwMjVhMzVhNTBkMmIxNWFmNTMxYjJmEq20Pg==: --dhchap-ctrl-secret DHHC-1:01:NzQyYWQ5OGYzYWYxNTc3MzUyZDNkNGE4NTFiZjg1ZWQG4h4Q: 00:20:54.434 02:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:54.434 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:54.434 02:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:54.434 02:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.434 02:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.434 02:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.434 02:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:54.434 02:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:54.434 02:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:54.692 02:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:20:54.692 02:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:20:54.692 02:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:54.692 02:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:54.692 02:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:54.692 02:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:54.692 02:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:54.692 02:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.692 02:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.692 02:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.692 02:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:54.692 02:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:54.692 02:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:55.626 00:20:55.626 02:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:20:55.626 02:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:55.626 02:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:55.884 02:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.884 02:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:55.884 02:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.884 02:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.884 02:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.884 02:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:55.884 { 00:20:55.884 "cntlid": 95, 00:20:55.885 "qid": 0, 00:20:55.885 "state": "enabled", 00:20:55.885 "thread": "nvmf_tgt_poll_group_000", 00:20:55.885 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:55.885 "listen_address": { 00:20:55.885 "trtype": "TCP", 00:20:55.885 "adrfam": "IPv4", 00:20:55.885 "traddr": "10.0.0.2", 00:20:55.885 "trsvcid": "4420" 00:20:55.885 }, 00:20:55.885 "peer_address": { 00:20:55.885 "trtype": "TCP", 00:20:55.885 "adrfam": "IPv4", 00:20:55.885 "traddr": "10.0.0.1", 00:20:55.885 "trsvcid": "40214" 00:20:55.885 }, 00:20:55.885 "auth": { 00:20:55.885 "state": "completed", 00:20:55.885 "digest": "sha384", 00:20:55.885 "dhgroup": "ffdhe8192" 00:20:55.885 } 00:20:55.885 } 00:20:55.885 ]' 00:20:55.885 02:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:55.885 02:41:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:55.885 02:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:55.885 02:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:55.885 02:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:56.142 02:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:56.143 02:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:56.143 02:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:56.400 02:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmZkZTY4ZThjMzY1MTljMjIzODU3NDdiMjY3MGY0M2I0YThlM2I3NTQ3MDY2OWU2MjdiM2FkYjg2M2E2YWMyMcK4Uoo=: 00:20:56.400 02:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MmZkZTY4ZThjMzY1MTljMjIzODU3NDdiMjY3MGY0M2I0YThlM2I3NTQ3MDY2OWU2MjdiM2FkYjg2M2E2YWMyMcK4Uoo=: 00:20:57.333 02:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:57.333 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:57.333 02:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:57.333 02:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.333 02:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.333 02:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.333 02:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:57.333 02:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:57.333 02:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:57.333 02:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:57.333 02:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:57.591 02:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:20:57.591 02:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:57.591 02:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:57.591 02:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:57.591 02:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:57.591 02:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:57.591 02:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:57.591 02:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.591 02:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.591 02:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.591 02:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:57.591 02:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:57.591 02:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:57.849 00:20:57.849 02:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:57.849 02:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:57.849 02:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:58.107 02:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.107 02:41:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:58.107 02:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.107 02:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.107 02:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.107 02:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:58.107 { 00:20:58.107 "cntlid": 97, 00:20:58.107 "qid": 0, 00:20:58.107 "state": "enabled", 00:20:58.107 "thread": "nvmf_tgt_poll_group_000", 00:20:58.107 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:58.107 "listen_address": { 00:20:58.107 "trtype": "TCP", 00:20:58.107 "adrfam": "IPv4", 00:20:58.107 "traddr": "10.0.0.2", 00:20:58.107 "trsvcid": "4420" 00:20:58.107 }, 00:20:58.107 "peer_address": { 00:20:58.107 "trtype": "TCP", 00:20:58.107 "adrfam": "IPv4", 00:20:58.107 "traddr": "10.0.0.1", 00:20:58.107 "trsvcid": "40232" 00:20:58.107 }, 00:20:58.107 "auth": { 00:20:58.107 "state": "completed", 00:20:58.107 "digest": "sha512", 00:20:58.107 "dhgroup": "null" 00:20:58.107 } 00:20:58.107 } 00:20:58.107 ]' 00:20:58.107 02:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:58.366 02:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:58.366 02:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:58.366 02:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:58.366 02:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:58.366 02:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:58.366 02:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:58.366 02:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:58.624 02:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjMxZTFhMGQzNzZjZjhkZDQ5OTQ0MDU5NzFkMWMxNjY5NTEyNmU2ZTJjMTE1MzQ5sTdJQw==: --dhchap-ctrl-secret DHHC-1:03:ZDQ2MmEwMzNkMTNlMTkwNTUzYThmYzNjMWUzMGZmMzQwYjY2NmE3Njc3OTQ5YWU5MTcwYTdmOGEzNmNmNTEzY+Io4Fg=: 00:20:58.624 02:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YjMxZTFhMGQzNzZjZjhkZDQ5OTQ0MDU5NzFkMWMxNjY5NTEyNmU2ZTJjMTE1MzQ5sTdJQw==: --dhchap-ctrl-secret DHHC-1:03:ZDQ2MmEwMzNkMTNlMTkwNTUzYThmYzNjMWUzMGZmMzQwYjY2NmE3Njc3OTQ5YWU5MTcwYTdmOGEzNmNmNTEzY+Io4Fg=: 00:20:59.558 02:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:59.558 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:59.558 02:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:59.558 02:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.558 02:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.558 02:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.558 02:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:59.558 02:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:59.558 02:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:59.816 02:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:20:59.816 02:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:59.816 02:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:59.816 02:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:59.816 02:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:59.816 02:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:59.816 02:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:59.816 02:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.816 02:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.816 02:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.816 02:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:59.816 02:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:59.816 02:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:00.074 00:21:00.362 02:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:00.362 02:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:00.363 02:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.646 02:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.646 02:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:00.646 02:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.646 02:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.646 02:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.646 02:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:00.646 { 00:21:00.646 "cntlid": 99, 
00:21:00.646 "qid": 0, 00:21:00.646 "state": "enabled", 00:21:00.646 "thread": "nvmf_tgt_poll_group_000", 00:21:00.646 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:00.646 "listen_address": { 00:21:00.646 "trtype": "TCP", 00:21:00.646 "adrfam": "IPv4", 00:21:00.646 "traddr": "10.0.0.2", 00:21:00.646 "trsvcid": "4420" 00:21:00.646 }, 00:21:00.646 "peer_address": { 00:21:00.646 "trtype": "TCP", 00:21:00.646 "adrfam": "IPv4", 00:21:00.646 "traddr": "10.0.0.1", 00:21:00.646 "trsvcid": "42344" 00:21:00.646 }, 00:21:00.646 "auth": { 00:21:00.646 "state": "completed", 00:21:00.646 "digest": "sha512", 00:21:00.646 "dhgroup": "null" 00:21:00.646 } 00:21:00.646 } 00:21:00.646 ]' 00:21:00.646 02:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:00.646 02:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:00.646 02:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:00.646 02:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:00.646 02:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:00.646 02:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:00.646 02:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:00.646 02:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:00.907 02:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2RmMjUwYzg4YmM4ZGRiNTk3OWYyZTEzOWM5Y2VmY2bnNpH/: --dhchap-ctrl-secret 
DHHC-1:02:ZTg0OTgxZmRmOTNhNmJlYTRkNmE4OWNhYWQ4ZDk4NjNkZDE0ODA3MTBkYmExMmI0tM1ZcA==: 00:21:00.907 02:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:N2RmMjUwYzg4YmM4ZGRiNTk3OWYyZTEzOWM5Y2VmY2bnNpH/: --dhchap-ctrl-secret DHHC-1:02:ZTg0OTgxZmRmOTNhNmJlYTRkNmE4OWNhYWQ4ZDk4NjNkZDE0ODA3MTBkYmExMmI0tM1ZcA==: 00:21:01.840 02:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:01.840 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:01.840 02:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:01.840 02:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.840 02:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.840 02:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.840 02:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:01.840 02:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:01.840 02:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:02.406 02:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 
00:21:02.406 02:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:02.406 02:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:02.406 02:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:02.406 02:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:02.406 02:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:02.406 02:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:02.406 02:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.406 02:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.406 02:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.406 02:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:02.406 02:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:02.406 02:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:02.664 00:21:02.664 02:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:02.664 02:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:02.664 02:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:02.921 02:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.921 02:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:02.921 02:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.921 02:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.921 02:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.921 02:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:02.921 { 00:21:02.921 "cntlid": 101, 00:21:02.921 "qid": 0, 00:21:02.921 "state": "enabled", 00:21:02.921 "thread": "nvmf_tgt_poll_group_000", 00:21:02.921 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:02.921 "listen_address": { 00:21:02.921 "trtype": "TCP", 00:21:02.921 "adrfam": "IPv4", 00:21:02.921 "traddr": "10.0.0.2", 00:21:02.921 "trsvcid": "4420" 00:21:02.921 }, 00:21:02.921 "peer_address": { 00:21:02.921 "trtype": "TCP", 00:21:02.921 "adrfam": "IPv4", 00:21:02.921 "traddr": "10.0.0.1", 00:21:02.921 "trsvcid": "42366" 00:21:02.921 }, 00:21:02.921 "auth": { 00:21:02.921 "state": "completed", 00:21:02.921 "digest": "sha512", 00:21:02.921 "dhgroup": "null" 00:21:02.921 } 00:21:02.921 } 
00:21:02.921 ]' 00:21:02.921 02:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:02.921 02:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:02.921 02:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:02.921 02:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:02.921 02:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:02.921 02:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:02.921 02:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:02.921 02:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:03.486 02:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjgzZmJmNzZkNDQ2N2JmMzk1NGIwZDQ1MjMwMjVhMzVhNTBkMmIxNWFmNTMxYjJmEq20Pg==: --dhchap-ctrl-secret DHHC-1:01:NzQyYWQ5OGYzYWYxNTc3MzUyZDNkNGE4NTFiZjg1ZWQG4h4Q: 00:21:03.486 02:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MjgzZmJmNzZkNDQ2N2JmMzk1NGIwZDQ1MjMwMjVhMzVhNTBkMmIxNWFmNTMxYjJmEq20Pg==: --dhchap-ctrl-secret DHHC-1:01:NzQyYWQ5OGYzYWYxNTc3MzUyZDNkNGE4NTFiZjg1ZWQG4h4Q: 00:21:04.419 02:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:04.419 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:04.419 02:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:04.419 02:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.419 02:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.419 02:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.419 02:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:04.419 02:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:04.419 02:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:04.677 02:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:21:04.677 02:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:04.677 02:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:04.677 02:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:04.677 02:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:04.677 02:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:04.677 02:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:04.677 02:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.677 02:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.677 02:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.677 02:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:04.677 02:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:04.677 02:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:04.935 00:21:04.935 02:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:04.935 02:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:04.935 02:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:05.193 02:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.193 02:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:21:05.193 02:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.193 02:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.193 02:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.193 02:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:05.193 { 00:21:05.193 "cntlid": 103, 00:21:05.193 "qid": 0, 00:21:05.193 "state": "enabled", 00:21:05.193 "thread": "nvmf_tgt_poll_group_000", 00:21:05.193 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:05.193 "listen_address": { 00:21:05.193 "trtype": "TCP", 00:21:05.193 "adrfam": "IPv4", 00:21:05.193 "traddr": "10.0.0.2", 00:21:05.193 "trsvcid": "4420" 00:21:05.193 }, 00:21:05.193 "peer_address": { 00:21:05.193 "trtype": "TCP", 00:21:05.193 "adrfam": "IPv4", 00:21:05.193 "traddr": "10.0.0.1", 00:21:05.193 "trsvcid": "42394" 00:21:05.193 }, 00:21:05.193 "auth": { 00:21:05.193 "state": "completed", 00:21:05.193 "digest": "sha512", 00:21:05.193 "dhgroup": "null" 00:21:05.193 } 00:21:05.193 } 00:21:05.193 ]' 00:21:05.193 02:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:05.451 02:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:05.451 02:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:05.451 02:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:05.451 02:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:05.451 02:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:05.451 02:41:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:05.451 02:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:05.709 02:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmZkZTY4ZThjMzY1MTljMjIzODU3NDdiMjY3MGY0M2I0YThlM2I3NTQ3MDY2OWU2MjdiM2FkYjg2M2E2YWMyMcK4Uoo=: 00:21:05.709 02:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MmZkZTY4ZThjMzY1MTljMjIzODU3NDdiMjY3MGY0M2I0YThlM2I3NTQ3MDY2OWU2MjdiM2FkYjg2M2E2YWMyMcK4Uoo=: 00:21:06.641 02:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:06.641 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:06.641 02:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:06.641 02:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.641 02:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.641 02:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.641 02:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:06.641 02:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:06.641 02:41:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:06.641 02:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:06.899 02:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:21:06.899 02:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:06.899 02:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:06.899 02:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:06.899 02:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:06.899 02:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:06.899 02:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:06.899 02:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.899 02:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.899 02:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.899 02:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:06.900 02:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:06.900 02:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:07.157 00:21:07.416 02:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:07.416 02:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:07.416 02:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:07.674 02:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.674 02:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:07.674 02:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.674 02:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.674 02:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.674 02:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:07.674 { 00:21:07.674 "cntlid": 105, 00:21:07.674 "qid": 0, 00:21:07.674 "state": "enabled", 00:21:07.674 "thread": "nvmf_tgt_poll_group_000", 00:21:07.674 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:07.674 "listen_address": { 00:21:07.674 "trtype": "TCP", 00:21:07.674 "adrfam": "IPv4", 00:21:07.674 "traddr": "10.0.0.2", 00:21:07.674 "trsvcid": "4420" 00:21:07.674 }, 00:21:07.674 "peer_address": { 00:21:07.674 "trtype": "TCP", 00:21:07.674 "adrfam": "IPv4", 00:21:07.674 "traddr": "10.0.0.1", 00:21:07.674 "trsvcid": "42426" 00:21:07.674 }, 00:21:07.674 "auth": { 00:21:07.674 "state": "completed", 00:21:07.674 "digest": "sha512", 00:21:07.674 "dhgroup": "ffdhe2048" 00:21:07.674 } 00:21:07.674 } 00:21:07.674 ]' 00:21:07.674 02:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:07.674 02:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:07.674 02:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:07.674 02:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:07.674 02:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:07.674 02:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:07.674 02:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:07.674 02:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:07.932 02:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjMxZTFhMGQzNzZjZjhkZDQ5OTQ0MDU5NzFkMWMxNjY5NTEyNmU2ZTJjMTE1MzQ5sTdJQw==: --dhchap-ctrl-secret 
DHHC-1:03:ZDQ2MmEwMzNkMTNlMTkwNTUzYThmYzNjMWUzMGZmMzQwYjY2NmE3Njc3OTQ5YWU5MTcwYTdmOGEzNmNmNTEzY+Io4Fg=: 00:21:07.932 02:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YjMxZTFhMGQzNzZjZjhkZDQ5OTQ0MDU5NzFkMWMxNjY5NTEyNmU2ZTJjMTE1MzQ5sTdJQw==: --dhchap-ctrl-secret DHHC-1:03:ZDQ2MmEwMzNkMTNlMTkwNTUzYThmYzNjMWUzMGZmMzQwYjY2NmE3Njc3OTQ5YWU5MTcwYTdmOGEzNmNmNTEzY+Io4Fg=: 00:21:08.865 02:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:08.865 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:08.865 02:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:08.865 02:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.865 02:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.865 02:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.865 02:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:08.866 02:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:08.866 02:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:09.432 02:41:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:21:09.432 02:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:09.432 02:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:09.432 02:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:09.433 02:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:09.433 02:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:09.433 02:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:09.433 02:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.433 02:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.433 02:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.433 02:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:09.433 02:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:09.433 02:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:09.691 00:21:09.691 02:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:09.691 02:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:09.691 02:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:09.949 02:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.949 02:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:09.949 02:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.949 02:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.949 02:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.949 02:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:09.949 { 00:21:09.949 "cntlid": 107, 00:21:09.949 "qid": 0, 00:21:09.949 "state": "enabled", 00:21:09.949 "thread": "nvmf_tgt_poll_group_000", 00:21:09.949 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:09.949 "listen_address": { 00:21:09.949 "trtype": "TCP", 00:21:09.949 "adrfam": "IPv4", 00:21:09.949 "traddr": "10.0.0.2", 00:21:09.949 "trsvcid": "4420" 00:21:09.949 }, 00:21:09.949 "peer_address": { 00:21:09.949 "trtype": "TCP", 00:21:09.949 "adrfam": "IPv4", 00:21:09.949 "traddr": "10.0.0.1", 00:21:09.949 "trsvcid": "38166" 00:21:09.949 }, 00:21:09.949 "auth": { 00:21:09.949 "state": 
"completed", 00:21:09.949 "digest": "sha512", 00:21:09.949 "dhgroup": "ffdhe2048" 00:21:09.949 } 00:21:09.949 } 00:21:09.949 ]' 00:21:09.949 02:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:09.949 02:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:09.949 02:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:09.949 02:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:09.949 02:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:09.949 02:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:09.949 02:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:09.949 02:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:10.207 02:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2RmMjUwYzg4YmM4ZGRiNTk3OWYyZTEzOWM5Y2VmY2bnNpH/: --dhchap-ctrl-secret DHHC-1:02:ZTg0OTgxZmRmOTNhNmJlYTRkNmE4OWNhYWQ4ZDk4NjNkZDE0ODA3MTBkYmExMmI0tM1ZcA==: 00:21:10.207 02:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:N2RmMjUwYzg4YmM4ZGRiNTk3OWYyZTEzOWM5Y2VmY2bnNpH/: --dhchap-ctrl-secret DHHC-1:02:ZTg0OTgxZmRmOTNhNmJlYTRkNmE4OWNhYWQ4ZDk4NjNkZDE0ODA3MTBkYmExMmI0tM1ZcA==: 00:21:11.140 02:41:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:11.140 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:11.140 02:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:11.140 02:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.140 02:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.140 02:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.140 02:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:11.140 02:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:11.140 02:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:11.707 02:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:21:11.707 02:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:11.707 02:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:11.707 02:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:11.707 02:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:11.707 02:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:11.707 02:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:11.707 02:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.707 02:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.707 02:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.707 02:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:11.707 02:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:11.707 02:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:11.965 00:21:11.965 02:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:11.965 02:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:11.965 02:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:12.223 
02:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.223 02:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:12.223 02:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.223 02:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.223 02:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.223 02:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:12.223 { 00:21:12.223 "cntlid": 109, 00:21:12.223 "qid": 0, 00:21:12.223 "state": "enabled", 00:21:12.223 "thread": "nvmf_tgt_poll_group_000", 00:21:12.223 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:12.223 "listen_address": { 00:21:12.223 "trtype": "TCP", 00:21:12.223 "adrfam": "IPv4", 00:21:12.223 "traddr": "10.0.0.2", 00:21:12.223 "trsvcid": "4420" 00:21:12.223 }, 00:21:12.223 "peer_address": { 00:21:12.223 "trtype": "TCP", 00:21:12.223 "adrfam": "IPv4", 00:21:12.223 "traddr": "10.0.0.1", 00:21:12.223 "trsvcid": "38190" 00:21:12.223 }, 00:21:12.223 "auth": { 00:21:12.223 "state": "completed", 00:21:12.223 "digest": "sha512", 00:21:12.223 "dhgroup": "ffdhe2048" 00:21:12.223 } 00:21:12.223 } 00:21:12.223 ]' 00:21:12.223 02:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:12.223 02:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:12.223 02:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:12.223 02:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:12.223 02:41:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:12.223 02:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:12.223 02:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:12.223 02:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:12.481 02:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjgzZmJmNzZkNDQ2N2JmMzk1NGIwZDQ1MjMwMjVhMzVhNTBkMmIxNWFmNTMxYjJmEq20Pg==: --dhchap-ctrl-secret DHHC-1:01:NzQyYWQ5OGYzYWYxNTc3MzUyZDNkNGE4NTFiZjg1ZWQG4h4Q: 00:21:12.481 02:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MjgzZmJmNzZkNDQ2N2JmMzk1NGIwZDQ1MjMwMjVhMzVhNTBkMmIxNWFmNTMxYjJmEq20Pg==: --dhchap-ctrl-secret DHHC-1:01:NzQyYWQ5OGYzYWYxNTc3MzUyZDNkNGE4NTFiZjg1ZWQG4h4Q: 00:21:13.855 02:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:13.855 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:13.855 02:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:13.855 02:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.855 02:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.855 
02:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.855 02:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:13.855 02:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:13.855 02:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:13.855 02:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:21:13.855 02:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:13.855 02:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:13.855 02:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:13.855 02:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:13.855 02:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:13.855 02:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:13.855 02:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.855 02:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.855 02:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.855 02:41:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:13.855 02:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:13.855 02:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:14.421 00:21:14.421 02:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:14.421 02:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:14.421 02:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:14.679 02:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.679 02:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:14.679 02:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.679 02:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.679 02:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.679 02:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:14.679 { 00:21:14.679 "cntlid": 111, 
00:21:14.679 "qid": 0, 00:21:14.679 "state": "enabled", 00:21:14.679 "thread": "nvmf_tgt_poll_group_000", 00:21:14.679 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:14.679 "listen_address": { 00:21:14.679 "trtype": "TCP", 00:21:14.679 "adrfam": "IPv4", 00:21:14.679 "traddr": "10.0.0.2", 00:21:14.679 "trsvcid": "4420" 00:21:14.679 }, 00:21:14.679 "peer_address": { 00:21:14.679 "trtype": "TCP", 00:21:14.679 "adrfam": "IPv4", 00:21:14.679 "traddr": "10.0.0.1", 00:21:14.679 "trsvcid": "38204" 00:21:14.679 }, 00:21:14.679 "auth": { 00:21:14.679 "state": "completed", 00:21:14.679 "digest": "sha512", 00:21:14.679 "dhgroup": "ffdhe2048" 00:21:14.679 } 00:21:14.679 } 00:21:14.679 ]' 00:21:14.679 02:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:14.679 02:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:14.679 02:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:14.679 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:14.679 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:14.679 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:14.679 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:14.680 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:14.937 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MmZkZTY4ZThjMzY1MTljMjIzODU3NDdiMjY3MGY0M2I0YThlM2I3NTQ3MDY2OWU2MjdiM2FkYjg2M2E2YWMyMcK4Uoo=: 00:21:14.937 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MmZkZTY4ZThjMzY1MTljMjIzODU3NDdiMjY3MGY0M2I0YThlM2I3NTQ3MDY2OWU2MjdiM2FkYjg2M2E2YWMyMcK4Uoo=: 00:21:16.311 02:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:16.311 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:16.311 02:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:16.311 02:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.311 02:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.311 02:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.311 02:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:16.311 02:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:16.311 02:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:16.311 02:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:16.311 02:41:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:21:16.311 02:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:16.311 02:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:16.311 02:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:16.311 02:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:16.311 02:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:16.311 02:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:16.311 02:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.311 02:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.311 02:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.312 02:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:16.312 02:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:16.312 02:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:16.569 00:21:16.569 02:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:16.570 02:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:16.570 02:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:16.828 02:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.828 02:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:16.828 02:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.828 02:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.086 02:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.086 02:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:17.086 { 00:21:17.086 "cntlid": 113, 00:21:17.086 "qid": 0, 00:21:17.086 "state": "enabled", 00:21:17.086 "thread": "nvmf_tgt_poll_group_000", 00:21:17.086 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:17.086 "listen_address": { 00:21:17.086 "trtype": "TCP", 00:21:17.086 "adrfam": "IPv4", 00:21:17.086 "traddr": "10.0.0.2", 00:21:17.086 "trsvcid": "4420" 00:21:17.086 }, 00:21:17.086 "peer_address": { 00:21:17.086 "trtype": "TCP", 00:21:17.086 "adrfam": "IPv4", 00:21:17.086 "traddr": "10.0.0.1", 00:21:17.086 "trsvcid": "38228" 00:21:17.086 }, 00:21:17.086 "auth": { 00:21:17.086 "state": 
"completed", 00:21:17.086 "digest": "sha512", 00:21:17.086 "dhgroup": "ffdhe3072" 00:21:17.086 } 00:21:17.086 } 00:21:17.086 ]' 00:21:17.086 02:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:17.086 02:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:17.086 02:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:17.086 02:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:17.086 02:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:17.086 02:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:17.086 02:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:17.086 02:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:17.345 02:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjMxZTFhMGQzNzZjZjhkZDQ5OTQ0MDU5NzFkMWMxNjY5NTEyNmU2ZTJjMTE1MzQ5sTdJQw==: --dhchap-ctrl-secret DHHC-1:03:ZDQ2MmEwMzNkMTNlMTkwNTUzYThmYzNjMWUzMGZmMzQwYjY2NmE3Njc3OTQ5YWU5MTcwYTdmOGEzNmNmNTEzY+Io4Fg=: 00:21:17.345 02:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YjMxZTFhMGQzNzZjZjhkZDQ5OTQ0MDU5NzFkMWMxNjY5NTEyNmU2ZTJjMTE1MzQ5sTdJQw==: --dhchap-ctrl-secret 
DHHC-1:03:ZDQ2MmEwMzNkMTNlMTkwNTUzYThmYzNjMWUzMGZmMzQwYjY2NmE3Njc3OTQ5YWU5MTcwYTdmOGEzNmNmNTEzY+Io4Fg=: 00:21:18.278 02:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:18.278 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:18.278 02:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:18.278 02:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.278 02:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.278 02:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.278 02:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:18.278 02:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:18.278 02:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:18.536 02:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:21:18.536 02:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:18.536 02:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:18.536 02:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:18.536 02:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:21:18.536 02:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:18.536 02:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:18.536 02:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.536 02:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.536 02:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.536 02:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:18.536 02:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:18.536 02:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:19.115 00:21:19.115 02:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:19.115 02:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:19.115 02:41:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:19.116 02:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.116 02:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:19.116 02:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.116 02:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.116 02:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.379 02:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:19.379 { 00:21:19.379 "cntlid": 115, 00:21:19.379 "qid": 0, 00:21:19.379 "state": "enabled", 00:21:19.379 "thread": "nvmf_tgt_poll_group_000", 00:21:19.379 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:19.379 "listen_address": { 00:21:19.379 "trtype": "TCP", 00:21:19.379 "adrfam": "IPv4", 00:21:19.379 "traddr": "10.0.0.2", 00:21:19.379 "trsvcid": "4420" 00:21:19.379 }, 00:21:19.379 "peer_address": { 00:21:19.379 "trtype": "TCP", 00:21:19.379 "adrfam": "IPv4", 00:21:19.379 "traddr": "10.0.0.1", 00:21:19.379 "trsvcid": "54614" 00:21:19.379 }, 00:21:19.379 "auth": { 00:21:19.379 "state": "completed", 00:21:19.379 "digest": "sha512", 00:21:19.379 "dhgroup": "ffdhe3072" 00:21:19.379 } 00:21:19.379 } 00:21:19.379 ]' 00:21:19.379 02:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:19.379 02:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:19.379 02:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:19.379 02:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:19.379 02:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:19.379 02:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:19.379 02:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:19.379 02:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:19.637 02:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2RmMjUwYzg4YmM4ZGRiNTk3OWYyZTEzOWM5Y2VmY2bnNpH/: --dhchap-ctrl-secret DHHC-1:02:ZTg0OTgxZmRmOTNhNmJlYTRkNmE4OWNhYWQ4ZDk4NjNkZDE0ODA3MTBkYmExMmI0tM1ZcA==: 00:21:19.637 02:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:N2RmMjUwYzg4YmM4ZGRiNTk3OWYyZTEzOWM5Y2VmY2bnNpH/: --dhchap-ctrl-secret DHHC-1:02:ZTg0OTgxZmRmOTNhNmJlYTRkNmE4OWNhYWQ4ZDk4NjNkZDE0ODA3MTBkYmExMmI0tM1ZcA==: 00:21:20.571 02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:20.571 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:20.571 02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:20.571 02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.571 02:41:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.571 02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.571 02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:20.571 02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:20.571 02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:20.829 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:21:20.829 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:20.829 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:20.829 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:20.829 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:20.829 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:20.829 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:20.829 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.829 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.829 02:41:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.829 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:20.829 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:20.829 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:21.395 00:21:21.395 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:21.395 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:21.395 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:21.653 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.653 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:21.653 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.653 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.653 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.653 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:21.653 { 00:21:21.653 "cntlid": 117, 00:21:21.653 "qid": 0, 00:21:21.653 "state": "enabled", 00:21:21.653 "thread": "nvmf_tgt_poll_group_000", 00:21:21.653 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:21.653 "listen_address": { 00:21:21.653 "trtype": "TCP", 00:21:21.653 "adrfam": "IPv4", 00:21:21.653 "traddr": "10.0.0.2", 00:21:21.653 "trsvcid": "4420" 00:21:21.653 }, 00:21:21.653 "peer_address": { 00:21:21.653 "trtype": "TCP", 00:21:21.653 "adrfam": "IPv4", 00:21:21.653 "traddr": "10.0.0.1", 00:21:21.653 "trsvcid": "54634" 00:21:21.653 }, 00:21:21.653 "auth": { 00:21:21.653 "state": "completed", 00:21:21.653 "digest": "sha512", 00:21:21.653 "dhgroup": "ffdhe3072" 00:21:21.653 } 00:21:21.653 } 00:21:21.653 ]' 00:21:21.653 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:21.653 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:21.653 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:21.654 02:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:21.654 02:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:21.654 02:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:21.654 02:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:21.654 02:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
00:21:21.912 02:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjgzZmJmNzZkNDQ2N2JmMzk1NGIwZDQ1MjMwMjVhMzVhNTBkMmIxNWFmNTMxYjJmEq20Pg==: --dhchap-ctrl-secret DHHC-1:01:NzQyYWQ5OGYzYWYxNTc3MzUyZDNkNGE4NTFiZjg1ZWQG4h4Q: 00:21:21.912 02:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MjgzZmJmNzZkNDQ2N2JmMzk1NGIwZDQ1MjMwMjVhMzVhNTBkMmIxNWFmNTMxYjJmEq20Pg==: --dhchap-ctrl-secret DHHC-1:01:NzQyYWQ5OGYzYWYxNTc3MzUyZDNkNGE4NTFiZjg1ZWQG4h4Q: 00:21:22.846 02:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:22.846 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:22.846 02:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:22.846 02:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.846 02:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.104 02:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.104 02:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:23.104 02:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:23.104 02:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:23.362 02:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:21:23.362 02:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:23.362 02:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:23.362 02:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:23.362 02:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:23.362 02:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:23.362 02:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:23.362 02:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.362 02:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.362 02:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.362 02:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:23.362 02:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:23.362 02:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:23.621 00:21:23.621 02:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:23.621 02:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:23.621 02:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:23.879 02:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.879 02:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:23.879 02:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.879 02:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.879 02:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.879 02:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:23.879 { 00:21:23.879 "cntlid": 119, 00:21:23.879 "qid": 0, 00:21:23.879 "state": "enabled", 00:21:23.879 "thread": "nvmf_tgt_poll_group_000", 00:21:23.879 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:23.879 "listen_address": { 00:21:23.879 "trtype": "TCP", 00:21:23.879 "adrfam": "IPv4", 00:21:23.879 "traddr": "10.0.0.2", 00:21:23.879 "trsvcid": "4420" 00:21:23.879 }, 00:21:23.879 "peer_address": { 00:21:23.879 "trtype": "TCP", 00:21:23.879 "adrfam": "IPv4", 00:21:23.879 "traddr": "10.0.0.1", 00:21:23.879 "trsvcid": "54648" 00:21:23.879 }, 00:21:23.879 "auth": { 00:21:23.879 
"state": "completed", 00:21:23.879 "digest": "sha512", 00:21:23.879 "dhgroup": "ffdhe3072" 00:21:23.879 } 00:21:23.879 } 00:21:23.879 ]' 00:21:23.879 02:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:23.879 02:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:23.880 02:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:23.880 02:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:23.880 02:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:24.137 02:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:24.137 02:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:24.137 02:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:24.395 02:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmZkZTY4ZThjMzY1MTljMjIzODU3NDdiMjY3MGY0M2I0YThlM2I3NTQ3MDY2OWU2MjdiM2FkYjg2M2E2YWMyMcK4Uoo=: 00:21:24.395 02:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MmZkZTY4ZThjMzY1MTljMjIzODU3NDdiMjY3MGY0M2I0YThlM2I3NTQ3MDY2OWU2MjdiM2FkYjg2M2E2YWMyMcK4Uoo=: 00:21:25.330 02:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:25.330 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:25.330 02:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:25.330 02:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.330 02:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.330 02:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.330 02:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:25.330 02:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:25.330 02:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:25.330 02:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:25.588 02:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:21:25.588 02:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:25.588 02:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:25.588 02:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:25.588 02:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:25.588 02:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:25.588 02:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:25.588 02:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.588 02:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.588 02:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.588 02:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:25.588 02:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:25.588 02:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:26.155 00:21:26.155 02:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:26.155 02:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:26.155 02:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:26.155 
02:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.155 02:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:26.155 02:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.155 02:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.414 02:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.414 02:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:26.414 { 00:21:26.414 "cntlid": 121, 00:21:26.414 "qid": 0, 00:21:26.414 "state": "enabled", 00:21:26.414 "thread": "nvmf_tgt_poll_group_000", 00:21:26.414 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:26.414 "listen_address": { 00:21:26.414 "trtype": "TCP", 00:21:26.414 "adrfam": "IPv4", 00:21:26.414 "traddr": "10.0.0.2", 00:21:26.414 "trsvcid": "4420" 00:21:26.414 }, 00:21:26.414 "peer_address": { 00:21:26.414 "trtype": "TCP", 00:21:26.414 "adrfam": "IPv4", 00:21:26.414 "traddr": "10.0.0.1", 00:21:26.414 "trsvcid": "54660" 00:21:26.414 }, 00:21:26.414 "auth": { 00:21:26.414 "state": "completed", 00:21:26.414 "digest": "sha512", 00:21:26.414 "dhgroup": "ffdhe4096" 00:21:26.414 } 00:21:26.414 } 00:21:26.414 ]' 00:21:26.414 02:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:26.414 02:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:26.414 02:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:26.414 02:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:26.414 02:41:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:26.414 02:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:26.414 02:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:26.414 02:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:26.673 02:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjMxZTFhMGQzNzZjZjhkZDQ5OTQ0MDU5NzFkMWMxNjY5NTEyNmU2ZTJjMTE1MzQ5sTdJQw==: --dhchap-ctrl-secret DHHC-1:03:ZDQ2MmEwMzNkMTNlMTkwNTUzYThmYzNjMWUzMGZmMzQwYjY2NmE3Njc3OTQ5YWU5MTcwYTdmOGEzNmNmNTEzY+Io4Fg=: 00:21:26.673 02:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YjMxZTFhMGQzNzZjZjhkZDQ5OTQ0MDU5NzFkMWMxNjY5NTEyNmU2ZTJjMTE1MzQ5sTdJQw==: --dhchap-ctrl-secret DHHC-1:03:ZDQ2MmEwMzNkMTNlMTkwNTUzYThmYzNjMWUzMGZmMzQwYjY2NmE3Njc3OTQ5YWU5MTcwYTdmOGEzNmNmNTEzY+Io4Fg=: 00:21:27.607 02:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:27.607 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:27.608 02:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:27.608 02:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.608 02:41:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.608 02:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.608 02:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:27.608 02:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:27.608 02:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:27.865 02:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:21:27.865 02:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:27.865 02:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:27.865 02:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:27.865 02:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:27.865 02:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:27.865 02:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:27.865 02:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.865 02:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.865 02:41:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.865 02:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:27.865 02:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:27.865 02:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:28.438 00:21:28.438 02:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:28.438 02:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:28.438 02:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:28.696 02:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:28.696 02:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:28.696 02:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.696 02:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.696 02:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.696 02:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:28.696 { 00:21:28.696 "cntlid": 123, 00:21:28.696 "qid": 0, 00:21:28.696 "state": "enabled", 00:21:28.696 "thread": "nvmf_tgt_poll_group_000", 00:21:28.696 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:28.696 "listen_address": { 00:21:28.696 "trtype": "TCP", 00:21:28.696 "adrfam": "IPv4", 00:21:28.696 "traddr": "10.0.0.2", 00:21:28.696 "trsvcid": "4420" 00:21:28.696 }, 00:21:28.696 "peer_address": { 00:21:28.696 "trtype": "TCP", 00:21:28.696 "adrfam": "IPv4", 00:21:28.696 "traddr": "10.0.0.1", 00:21:28.696 "trsvcid": "54702" 00:21:28.696 }, 00:21:28.696 "auth": { 00:21:28.696 "state": "completed", 00:21:28.696 "digest": "sha512", 00:21:28.696 "dhgroup": "ffdhe4096" 00:21:28.696 } 00:21:28.696 } 00:21:28.696 ]' 00:21:28.696 02:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:28.696 02:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:28.696 02:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:28.696 02:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:28.696 02:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:28.696 02:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:28.696 02:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:28.696 02:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
00:21:29.261 02:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2RmMjUwYzg4YmM4ZGRiNTk3OWYyZTEzOWM5Y2VmY2bnNpH/: --dhchap-ctrl-secret DHHC-1:02:ZTg0OTgxZmRmOTNhNmJlYTRkNmE4OWNhYWQ4ZDk4NjNkZDE0ODA3MTBkYmExMmI0tM1ZcA==: 00:21:29.261 02:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:N2RmMjUwYzg4YmM4ZGRiNTk3OWYyZTEzOWM5Y2VmY2bnNpH/: --dhchap-ctrl-secret DHHC-1:02:ZTg0OTgxZmRmOTNhNmJlYTRkNmE4OWNhYWQ4ZDk4NjNkZDE0ODA3MTBkYmExMmI0tM1ZcA==: 00:21:30.201 02:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:30.201 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:30.201 02:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:30.201 02:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.201 02:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.201 02:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.201 02:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:30.201 02:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:30.201 02:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:30.520 02:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:21:30.520 02:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:30.520 02:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:30.520 02:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:30.520 02:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:30.520 02:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:30.520 02:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:30.520 02:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.520 02:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.520 02:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.520 02:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:30.520 02:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:30.520 02:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:30.792 00:21:30.792 02:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:30.792 02:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:30.792 02:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:31.049 02:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.049 02:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:31.049 02:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.049 02:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.049 02:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.049 02:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:31.049 { 00:21:31.049 "cntlid": 125, 00:21:31.049 "qid": 0, 00:21:31.049 "state": "enabled", 00:21:31.049 "thread": "nvmf_tgt_poll_group_000", 00:21:31.049 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:31.049 "listen_address": { 00:21:31.049 "trtype": "TCP", 00:21:31.049 "adrfam": "IPv4", 00:21:31.049 "traddr": "10.0.0.2", 00:21:31.049 "trsvcid": "4420" 00:21:31.049 }, 00:21:31.049 "peer_address": { 00:21:31.049 "trtype": "TCP", 00:21:31.049 "adrfam": "IPv4", 
00:21:31.049 "traddr": "10.0.0.1", 00:21:31.049 "trsvcid": "51358" 00:21:31.049 }, 00:21:31.049 "auth": { 00:21:31.049 "state": "completed", 00:21:31.049 "digest": "sha512", 00:21:31.049 "dhgroup": "ffdhe4096" 00:21:31.049 } 00:21:31.049 } 00:21:31.049 ]' 00:21:31.049 02:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:31.049 02:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:31.050 02:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:31.307 02:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:31.307 02:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:31.307 02:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:31.307 02:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:31.307 02:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:31.565 02:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjgzZmJmNzZkNDQ2N2JmMzk1NGIwZDQ1MjMwMjVhMzVhNTBkMmIxNWFmNTMxYjJmEq20Pg==: --dhchap-ctrl-secret DHHC-1:01:NzQyYWQ5OGYzYWYxNTc3MzUyZDNkNGE4NTFiZjg1ZWQG4h4Q: 00:21:31.565 02:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret 
DHHC-1:02:MjgzZmJmNzZkNDQ2N2JmMzk1NGIwZDQ1MjMwMjVhMzVhNTBkMmIxNWFmNTMxYjJmEq20Pg==: --dhchap-ctrl-secret DHHC-1:01:NzQyYWQ5OGYzYWYxNTc3MzUyZDNkNGE4NTFiZjg1ZWQG4h4Q: 00:21:32.499 02:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:32.499 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:32.499 02:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:32.499 02:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.499 02:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.499 02:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.499 02:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:32.499 02:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:32.499 02:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:32.759 02:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:21:32.759 02:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:32.759 02:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:32.759 02:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:32.759 02:41:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:32.759 02:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:32.760 02:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:32.760 02:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.760 02:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.760 02:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.760 02:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:32.760 02:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:32.760 02:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:33.325 00:21:33.325 02:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:33.325 02:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:33.325 02:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:33.583 02:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.583 02:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:33.583 02:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.583 02:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.583 02:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.583 02:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:33.583 { 00:21:33.583 "cntlid": 127, 00:21:33.583 "qid": 0, 00:21:33.583 "state": "enabled", 00:21:33.583 "thread": "nvmf_tgt_poll_group_000", 00:21:33.583 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:33.583 "listen_address": { 00:21:33.583 "trtype": "TCP", 00:21:33.583 "adrfam": "IPv4", 00:21:33.583 "traddr": "10.0.0.2", 00:21:33.583 "trsvcid": "4420" 00:21:33.583 }, 00:21:33.583 "peer_address": { 00:21:33.583 "trtype": "TCP", 00:21:33.583 "adrfam": "IPv4", 00:21:33.583 "traddr": "10.0.0.1", 00:21:33.583 "trsvcid": "51378" 00:21:33.583 }, 00:21:33.583 "auth": { 00:21:33.583 "state": "completed", 00:21:33.583 "digest": "sha512", 00:21:33.583 "dhgroup": "ffdhe4096" 00:21:33.583 } 00:21:33.583 } 00:21:33.583 ]' 00:21:33.583 02:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:33.583 02:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:33.583 02:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:33.583 02:41:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:33.583 02:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:33.583 02:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:33.583 02:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:33.583 02:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:33.842 02:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmZkZTY4ZThjMzY1MTljMjIzODU3NDdiMjY3MGY0M2I0YThlM2I3NTQ3MDY2OWU2MjdiM2FkYjg2M2E2YWMyMcK4Uoo=: 00:21:33.842 02:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MmZkZTY4ZThjMzY1MTljMjIzODU3NDdiMjY3MGY0M2I0YThlM2I3NTQ3MDY2OWU2MjdiM2FkYjg2M2E2YWMyMcK4Uoo=: 00:21:34.774 02:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:35.032 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:35.032 02:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:35.032 02:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.032 02:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:21:35.032 02:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.032 02:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:35.033 02:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:35.033 02:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:35.033 02:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:35.290 02:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:21:35.290 02:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:35.290 02:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:35.290 02:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:35.290 02:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:35.290 02:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:35.290 02:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:35.290 02:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.290 02:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:21:35.290 02:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.290 02:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:35.290 02:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:35.290 02:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:35.856 00:21:35.856 02:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:35.856 02:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:35.856 02:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:36.114 02:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.114 02:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:36.114 02:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.114 02:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.114 02:41:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.114 02:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:36.114 { 00:21:36.114 "cntlid": 129, 00:21:36.114 "qid": 0, 00:21:36.114 "state": "enabled", 00:21:36.114 "thread": "nvmf_tgt_poll_group_000", 00:21:36.114 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:36.114 "listen_address": { 00:21:36.114 "trtype": "TCP", 00:21:36.114 "adrfam": "IPv4", 00:21:36.114 "traddr": "10.0.0.2", 00:21:36.114 "trsvcid": "4420" 00:21:36.114 }, 00:21:36.114 "peer_address": { 00:21:36.114 "trtype": "TCP", 00:21:36.114 "adrfam": "IPv4", 00:21:36.114 "traddr": "10.0.0.1", 00:21:36.114 "trsvcid": "51408" 00:21:36.114 }, 00:21:36.114 "auth": { 00:21:36.114 "state": "completed", 00:21:36.114 "digest": "sha512", 00:21:36.114 "dhgroup": "ffdhe6144" 00:21:36.114 } 00:21:36.114 } 00:21:36.114 ]' 00:21:36.114 02:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:36.114 02:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:36.114 02:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:36.114 02:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:36.114 02:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:36.114 02:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:36.114 02:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:36.114 02:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:36.679 02:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjMxZTFhMGQzNzZjZjhkZDQ5OTQ0MDU5NzFkMWMxNjY5NTEyNmU2ZTJjMTE1MzQ5sTdJQw==: --dhchap-ctrl-secret DHHC-1:03:ZDQ2MmEwMzNkMTNlMTkwNTUzYThmYzNjMWUzMGZmMzQwYjY2NmE3Njc3OTQ5YWU5MTcwYTdmOGEzNmNmNTEzY+Io4Fg=: 00:21:36.680 02:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YjMxZTFhMGQzNzZjZjhkZDQ5OTQ0MDU5NzFkMWMxNjY5NTEyNmU2ZTJjMTE1MzQ5sTdJQw==: --dhchap-ctrl-secret DHHC-1:03:ZDQ2MmEwMzNkMTNlMTkwNTUzYThmYzNjMWUzMGZmMzQwYjY2NmE3Njc3OTQ5YWU5MTcwYTdmOGEzNmNmNTEzY+Io4Fg=: 00:21:37.613 02:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:37.613 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:37.613 02:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:37.613 02:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.613 02:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.613 02:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.613 02:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:37.613 02:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:37.613 02:41:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:37.871 02:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:21:37.871 02:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:37.871 02:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:37.871 02:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:37.871 02:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:37.871 02:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:37.871 02:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:37.871 02:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.871 02:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.871 02:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.871 02:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:37.871 02:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:37.871 02:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:38.438 00:21:38.438 02:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:38.438 02:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:38.438 02:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:38.696 02:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.696 02:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:38.696 02:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.696 02:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.696 02:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.696 02:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:38.696 { 00:21:38.696 "cntlid": 131, 00:21:38.696 "qid": 0, 00:21:38.696 "state": "enabled", 00:21:38.696 "thread": "nvmf_tgt_poll_group_000", 00:21:38.696 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:38.696 "listen_address": { 00:21:38.696 "trtype": "TCP", 00:21:38.696 "adrfam": "IPv4", 00:21:38.696 "traddr": "10.0.0.2", 00:21:38.696 
"trsvcid": "4420" 00:21:38.696 }, 00:21:38.696 "peer_address": { 00:21:38.696 "trtype": "TCP", 00:21:38.696 "adrfam": "IPv4", 00:21:38.696 "traddr": "10.0.0.1", 00:21:38.696 "trsvcid": "51440" 00:21:38.696 }, 00:21:38.696 "auth": { 00:21:38.696 "state": "completed", 00:21:38.696 "digest": "sha512", 00:21:38.696 "dhgroup": "ffdhe6144" 00:21:38.696 } 00:21:38.696 } 00:21:38.696 ]' 00:21:38.696 02:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:38.696 02:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:38.696 02:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:38.696 02:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:38.696 02:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:38.954 02:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:38.954 02:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:38.954 02:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:39.212 02:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2RmMjUwYzg4YmM4ZGRiNTk3OWYyZTEzOWM5Y2VmY2bnNpH/: --dhchap-ctrl-secret DHHC-1:02:ZTg0OTgxZmRmOTNhNmJlYTRkNmE4OWNhYWQ4ZDk4NjNkZDE0ODA3MTBkYmExMmI0tM1ZcA==: 00:21:39.212 02:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:N2RmMjUwYzg4YmM4ZGRiNTk3OWYyZTEzOWM5Y2VmY2bnNpH/: --dhchap-ctrl-secret DHHC-1:02:ZTg0OTgxZmRmOTNhNmJlYTRkNmE4OWNhYWQ4ZDk4NjNkZDE0ODA3MTBkYmExMmI0tM1ZcA==: 00:21:40.143 02:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:40.143 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:40.144 02:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:40.144 02:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.144 02:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.144 02:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.144 02:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:40.144 02:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:40.144 02:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:40.401 02:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:21:40.401 02:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:40.401 02:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:40.401 02:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:40.401 02:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:40.401 02:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:40.401 02:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:40.401 02:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.401 02:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.401 02:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.401 02:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:40.401 02:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:40.401 02:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:40.967 00:21:40.967 02:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:40.967 02:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:21:40.967 02:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:41.225 02:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:41.225 02:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:41.225 02:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.225 02:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.225 02:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.225 02:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:41.225 { 00:21:41.225 "cntlid": 133, 00:21:41.225 "qid": 0, 00:21:41.225 "state": "enabled", 00:21:41.225 "thread": "nvmf_tgt_poll_group_000", 00:21:41.225 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:41.225 "listen_address": { 00:21:41.225 "trtype": "TCP", 00:21:41.225 "adrfam": "IPv4", 00:21:41.225 "traddr": "10.0.0.2", 00:21:41.225 "trsvcid": "4420" 00:21:41.225 }, 00:21:41.225 "peer_address": { 00:21:41.225 "trtype": "TCP", 00:21:41.225 "adrfam": "IPv4", 00:21:41.225 "traddr": "10.0.0.1", 00:21:41.225 "trsvcid": "54400" 00:21:41.225 }, 00:21:41.225 "auth": { 00:21:41.225 "state": "completed", 00:21:41.225 "digest": "sha512", 00:21:41.225 "dhgroup": "ffdhe6144" 00:21:41.225 } 00:21:41.225 } 00:21:41.225 ]' 00:21:41.225 02:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:41.225 02:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:41.225 02:41:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:41.225 02:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:41.225 02:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:41.225 02:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:41.225 02:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:41.226 02:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:41.795 02:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjgzZmJmNzZkNDQ2N2JmMzk1NGIwZDQ1MjMwMjVhMzVhNTBkMmIxNWFmNTMxYjJmEq20Pg==: --dhchap-ctrl-secret DHHC-1:01:NzQyYWQ5OGYzYWYxNTc3MzUyZDNkNGE4NTFiZjg1ZWQG4h4Q: 00:21:41.795 02:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MjgzZmJmNzZkNDQ2N2JmMzk1NGIwZDQ1MjMwMjVhMzVhNTBkMmIxNWFmNTMxYjJmEq20Pg==: --dhchap-ctrl-secret DHHC-1:01:NzQyYWQ5OGYzYWYxNTc3MzUyZDNkNGE4NTFiZjg1ZWQG4h4Q: 00:21:42.731 02:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:42.731 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:42.731 02:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:42.731 02:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.731 02:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.731 02:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.731 02:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:42.731 02:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:42.731 02:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:42.989 02:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:21:42.989 02:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:42.989 02:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:42.989 02:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:42.989 02:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:42.989 02:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:42.989 02:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:42.989 02:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.989 02:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.989 02:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.989 02:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:42.989 02:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:42.989 02:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:43.555 00:21:43.555 02:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:43.555 02:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:43.555 02:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:43.813 02:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.813 02:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:43.813 02:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.813 02:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:43.813 02:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.813 02:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:43.813 { 00:21:43.813 "cntlid": 135, 00:21:43.813 "qid": 0, 00:21:43.813 "state": "enabled", 00:21:43.813 "thread": "nvmf_tgt_poll_group_000", 00:21:43.813 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:43.813 "listen_address": { 00:21:43.813 "trtype": "TCP", 00:21:43.813 "adrfam": "IPv4", 00:21:43.813 "traddr": "10.0.0.2", 00:21:43.813 "trsvcid": "4420" 00:21:43.813 }, 00:21:43.813 "peer_address": { 00:21:43.813 "trtype": "TCP", 00:21:43.813 "adrfam": "IPv4", 00:21:43.813 "traddr": "10.0.0.1", 00:21:43.813 "trsvcid": "54426" 00:21:43.813 }, 00:21:43.813 "auth": { 00:21:43.813 "state": "completed", 00:21:43.813 "digest": "sha512", 00:21:43.813 "dhgroup": "ffdhe6144" 00:21:43.813 } 00:21:43.813 } 00:21:43.813 ]' 00:21:43.813 02:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:43.813 02:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:43.813 02:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:43.813 02:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:43.813 02:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:43.813 02:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:43.813 02:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:43.813 02:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:44.070 02:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmZkZTY4ZThjMzY1MTljMjIzODU3NDdiMjY3MGY0M2I0YThlM2I3NTQ3MDY2OWU2MjdiM2FkYjg2M2E2YWMyMcK4Uoo=: 00:21:44.070 02:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MmZkZTY4ZThjMzY1MTljMjIzODU3NDdiMjY3MGY0M2I0YThlM2I3NTQ3MDY2OWU2MjdiM2FkYjg2M2E2YWMyMcK4Uoo=: 00:21:45.003 02:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:45.003 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:45.003 02:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:45.003 02:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.003 02:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.003 02:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.003 02:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:45.003 02:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:45.003 02:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:45.003 02:41:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:45.568 02:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:21:45.568 02:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:45.568 02:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:45.568 02:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:45.568 02:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:45.568 02:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:45.568 02:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:45.568 02:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.568 02:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.568 02:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.568 02:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:45.568 02:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:45.568 02:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:46.502 00:21:46.502 02:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:46.502 02:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:46.502 02:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:46.502 02:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:46.502 02:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:46.502 02:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.502 02:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.502 02:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.502 02:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:46.502 { 00:21:46.502 "cntlid": 137, 00:21:46.502 "qid": 0, 00:21:46.502 "state": "enabled", 00:21:46.502 "thread": "nvmf_tgt_poll_group_000", 00:21:46.502 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:46.502 "listen_address": { 00:21:46.502 "trtype": "TCP", 00:21:46.502 "adrfam": "IPv4", 00:21:46.502 "traddr": "10.0.0.2", 00:21:46.502 
"trsvcid": "4420" 00:21:46.502 }, 00:21:46.502 "peer_address": { 00:21:46.502 "trtype": "TCP", 00:21:46.502 "adrfam": "IPv4", 00:21:46.502 "traddr": "10.0.0.1", 00:21:46.502 "trsvcid": "54458" 00:21:46.502 }, 00:21:46.502 "auth": { 00:21:46.502 "state": "completed", 00:21:46.502 "digest": "sha512", 00:21:46.502 "dhgroup": "ffdhe8192" 00:21:46.502 } 00:21:46.502 } 00:21:46.502 ]' 00:21:46.502 02:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:46.760 02:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:46.760 02:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:46.760 02:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:46.760 02:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:46.760 02:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:46.760 02:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:46.760 02:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:47.018 02:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjMxZTFhMGQzNzZjZjhkZDQ5OTQ0MDU5NzFkMWMxNjY5NTEyNmU2ZTJjMTE1MzQ5sTdJQw==: --dhchap-ctrl-secret DHHC-1:03:ZDQ2MmEwMzNkMTNlMTkwNTUzYThmYzNjMWUzMGZmMzQwYjY2NmE3Njc3OTQ5YWU5MTcwYTdmOGEzNmNmNTEzY+Io4Fg=: 00:21:47.018 02:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YjMxZTFhMGQzNzZjZjhkZDQ5OTQ0MDU5NzFkMWMxNjY5NTEyNmU2ZTJjMTE1MzQ5sTdJQw==: --dhchap-ctrl-secret DHHC-1:03:ZDQ2MmEwMzNkMTNlMTkwNTUzYThmYzNjMWUzMGZmMzQwYjY2NmE3Njc3OTQ5YWU5MTcwYTdmOGEzNmNmNTEzY+Io4Fg=: 00:21:47.952 02:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:47.952 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:47.952 02:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:47.952 02:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.952 02:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.952 02:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.952 02:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:47.952 02:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:47.952 02:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:48.518 02:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:21:48.518 02:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:48.518 02:41:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:48.518 02:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:48.518 02:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:48.518 02:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:48.518 02:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:48.518 02:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.518 02:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.518 02:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.518 02:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:48.518 02:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:48.518 02:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:49.084 00:21:49.341 02:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:49.341 02:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:49.341 02:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:49.599 02:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:49.599 02:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:49.599 02:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.599 02:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.599 02:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.599 02:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:49.599 { 00:21:49.599 "cntlid": 139, 00:21:49.599 "qid": 0, 00:21:49.599 "state": "enabled", 00:21:49.599 "thread": "nvmf_tgt_poll_group_000", 00:21:49.599 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:49.599 "listen_address": { 00:21:49.599 "trtype": "TCP", 00:21:49.599 "adrfam": "IPv4", 00:21:49.599 "traddr": "10.0.0.2", 00:21:49.599 "trsvcid": "4420" 00:21:49.599 }, 00:21:49.599 "peer_address": { 00:21:49.599 "trtype": "TCP", 00:21:49.599 "adrfam": "IPv4", 00:21:49.599 "traddr": "10.0.0.1", 00:21:49.599 "trsvcid": "43742" 00:21:49.599 }, 00:21:49.599 "auth": { 00:21:49.599 "state": "completed", 00:21:49.599 "digest": "sha512", 00:21:49.599 "dhgroup": "ffdhe8192" 00:21:49.599 } 00:21:49.599 } 00:21:49.599 ]' 00:21:49.599 02:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:49.599 02:41:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:49.599 02:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:49.599 02:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:49.599 02:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:49.599 02:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:49.599 02:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:49.599 02:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:49.856 02:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2RmMjUwYzg4YmM4ZGRiNTk3OWYyZTEzOWM5Y2VmY2bnNpH/: --dhchap-ctrl-secret DHHC-1:02:ZTg0OTgxZmRmOTNhNmJlYTRkNmE4OWNhYWQ4ZDk4NjNkZDE0ODA3MTBkYmExMmI0tM1ZcA==: 00:21:49.857 02:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:N2RmMjUwYzg4YmM4ZGRiNTk3OWYyZTEzOWM5Y2VmY2bnNpH/: --dhchap-ctrl-secret DHHC-1:02:ZTg0OTgxZmRmOTNhNmJlYTRkNmE4OWNhYWQ4ZDk4NjNkZDE0ODA3MTBkYmExMmI0tM1ZcA==: 00:21:51.229 02:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:51.229 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:51.229 02:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:51.229 02:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.229 02:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.229 02:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.229 02:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:51.229 02:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:51.229 02:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:51.229 02:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:21:51.229 02:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:51.229 02:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:51.229 02:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:51.229 02:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:51.229 02:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:51.229 02:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:21:51.229 02:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.229 02:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.229 02:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.229 02:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:51.229 02:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:51.229 02:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:52.162 00:21:52.162 02:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:52.162 02:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:52.162 02:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:52.420 02:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.420 02:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:52.420 02:42:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.420 02:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.420 02:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.420 02:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:52.420 { 00:21:52.420 "cntlid": 141, 00:21:52.420 "qid": 0, 00:21:52.420 "state": "enabled", 00:21:52.420 "thread": "nvmf_tgt_poll_group_000", 00:21:52.420 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:52.420 "listen_address": { 00:21:52.420 "trtype": "TCP", 00:21:52.420 "adrfam": "IPv4", 00:21:52.420 "traddr": "10.0.0.2", 00:21:52.420 "trsvcid": "4420" 00:21:52.420 }, 00:21:52.420 "peer_address": { 00:21:52.420 "trtype": "TCP", 00:21:52.420 "adrfam": "IPv4", 00:21:52.420 "traddr": "10.0.0.1", 00:21:52.420 "trsvcid": "43770" 00:21:52.420 }, 00:21:52.420 "auth": { 00:21:52.420 "state": "completed", 00:21:52.420 "digest": "sha512", 00:21:52.420 "dhgroup": "ffdhe8192" 00:21:52.420 } 00:21:52.420 } 00:21:52.420 ]' 00:21:52.421 02:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:52.421 02:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:52.421 02:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:52.421 02:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:52.421 02:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:52.421 02:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:52.421 02:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:52.421 02:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:52.986 02:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjgzZmJmNzZkNDQ2N2JmMzk1NGIwZDQ1MjMwMjVhMzVhNTBkMmIxNWFmNTMxYjJmEq20Pg==: --dhchap-ctrl-secret DHHC-1:01:NzQyYWQ5OGYzYWYxNTc3MzUyZDNkNGE4NTFiZjg1ZWQG4h4Q: 00:21:52.986 02:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MjgzZmJmNzZkNDQ2N2JmMzk1NGIwZDQ1MjMwMjVhMzVhNTBkMmIxNWFmNTMxYjJmEq20Pg==: --dhchap-ctrl-secret DHHC-1:01:NzQyYWQ5OGYzYWYxNTc3MzUyZDNkNGE4NTFiZjg1ZWQG4h4Q: 00:21:53.919 02:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:53.919 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:53.919 02:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:53.919 02:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.919 02:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.919 02:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.919 02:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:53.919 02:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:53.919 02:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:54.177 02:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:21:54.177 02:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:54.177 02:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:54.177 02:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:54.177 02:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:54.177 02:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:54.177 02:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:54.177 02:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.177 02:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.177 02:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.177 02:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:54.177 02:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:54.177 02:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:55.109 00:21:55.109 02:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:55.109 02:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:55.109 02:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:55.366 02:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:55.366 02:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:55.366 02:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.366 02:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.366 02:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.366 02:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:55.366 { 00:21:55.366 "cntlid": 143, 00:21:55.366 "qid": 0, 00:21:55.366 "state": "enabled", 00:21:55.366 "thread": "nvmf_tgt_poll_group_000", 00:21:55.366 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:55.366 "listen_address": { 00:21:55.366 "trtype": "TCP", 00:21:55.366 "adrfam": 
"IPv4", 00:21:55.366 "traddr": "10.0.0.2", 00:21:55.366 "trsvcid": "4420" 00:21:55.366 }, 00:21:55.366 "peer_address": { 00:21:55.366 "trtype": "TCP", 00:21:55.366 "adrfam": "IPv4", 00:21:55.366 "traddr": "10.0.0.1", 00:21:55.366 "trsvcid": "43788" 00:21:55.366 }, 00:21:55.366 "auth": { 00:21:55.366 "state": "completed", 00:21:55.366 "digest": "sha512", 00:21:55.366 "dhgroup": "ffdhe8192" 00:21:55.366 } 00:21:55.366 } 00:21:55.366 ]' 00:21:55.366 02:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:55.366 02:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:55.366 02:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:55.366 02:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:55.366 02:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:55.366 02:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:55.366 02:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:55.366 02:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:55.624 02:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmZkZTY4ZThjMzY1MTljMjIzODU3NDdiMjY3MGY0M2I0YThlM2I3NTQ3MDY2OWU2MjdiM2FkYjg2M2E2YWMyMcK4Uoo=: 00:21:55.624 02:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MmZkZTY4ZThjMzY1MTljMjIzODU3NDdiMjY3MGY0M2I0YThlM2I3NTQ3MDY2OWU2MjdiM2FkYjg2M2E2YWMyMcK4Uoo=: 00:21:56.558 02:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:56.558 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:56.558 02:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:56.558 02:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.558 02:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.558 02:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.558 02:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:21:56.558 02:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:21:56.558 02:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:21:56.558 02:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:56.558 02:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:56.558 02:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:57.123 02:42:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:21:57.123 02:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:57.123 02:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:57.123 02:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:57.123 02:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:57.123 02:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:57.123 02:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:57.123 02:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.123 02:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.123 02:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.123 02:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:57.123 02:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:57.123 02:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:58.064 00:21:58.064 02:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:58.064 02:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:58.064 02:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:58.329 02:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:58.329 02:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:58.329 02:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.329 02:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.329 02:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.329 02:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:58.329 { 00:21:58.329 "cntlid": 145, 00:21:58.329 "qid": 0, 00:21:58.329 "state": "enabled", 00:21:58.329 "thread": "nvmf_tgt_poll_group_000", 00:21:58.329 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:58.329 "listen_address": { 00:21:58.329 "trtype": "TCP", 00:21:58.329 "adrfam": "IPv4", 00:21:58.329 "traddr": "10.0.0.2", 00:21:58.329 "trsvcid": "4420" 00:21:58.329 }, 00:21:58.329 "peer_address": { 00:21:58.329 "trtype": "TCP", 00:21:58.329 "adrfam": "IPv4", 00:21:58.329 "traddr": "10.0.0.1", 00:21:58.329 "trsvcid": "43816" 00:21:58.329 }, 00:21:58.329 "auth": { 00:21:58.329 "state": 
"completed", 00:21:58.329 "digest": "sha512", 00:21:58.329 "dhgroup": "ffdhe8192" 00:21:58.329 } 00:21:58.329 } 00:21:58.329 ]' 00:21:58.329 02:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:58.329 02:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:58.329 02:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:58.329 02:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:58.329 02:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:58.329 02:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:58.329 02:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:58.329 02:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:58.586 02:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjMxZTFhMGQzNzZjZjhkZDQ5OTQ0MDU5NzFkMWMxNjY5NTEyNmU2ZTJjMTE1MzQ5sTdJQw==: --dhchap-ctrl-secret DHHC-1:03:ZDQ2MmEwMzNkMTNlMTkwNTUzYThmYzNjMWUzMGZmMzQwYjY2NmE3Njc3OTQ5YWU5MTcwYTdmOGEzNmNmNTEzY+Io4Fg=: 00:21:58.586 02:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YjMxZTFhMGQzNzZjZjhkZDQ5OTQ0MDU5NzFkMWMxNjY5NTEyNmU2ZTJjMTE1MzQ5sTdJQw==: --dhchap-ctrl-secret 
DHHC-1:03:ZDQ2MmEwMzNkMTNlMTkwNTUzYThmYzNjMWUzMGZmMzQwYjY2NmE3Njc3OTQ5YWU5MTcwYTdmOGEzNmNmNTEzY+Io4Fg=: 00:21:59.520 02:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:59.520 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:59.520 02:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:59.520 02:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.520 02:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.778 02:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.778 02:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:21:59.778 02:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.778 02:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.778 02:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.778 02:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:21:59.778 02:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:59.778 02:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:21:59.778 02:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local 
arg=bdev_connect 00:21:59.778 02:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:59.778 02:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:59.778 02:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:59.778 02:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:21:59.778 02:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:21:59.778 02:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:22:00.711 request: 00:22:00.711 { 00:22:00.711 "name": "nvme0", 00:22:00.711 "trtype": "tcp", 00:22:00.711 "traddr": "10.0.0.2", 00:22:00.711 "adrfam": "ipv4", 00:22:00.711 "trsvcid": "4420", 00:22:00.711 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:00.711 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:00.711 "prchk_reftag": false, 00:22:00.711 "prchk_guard": false, 00:22:00.711 "hdgst": false, 00:22:00.711 "ddgst": false, 00:22:00.711 "dhchap_key": "key2", 00:22:00.711 "allow_unrecognized_csi": false, 00:22:00.711 "method": "bdev_nvme_attach_controller", 00:22:00.711 "req_id": 1 00:22:00.711 } 00:22:00.711 Got JSON-RPC error response 00:22:00.711 response: 00:22:00.711 { 00:22:00.711 "code": -5, 00:22:00.711 "message": 
"Input/output error" 00:22:00.711 } 00:22:00.711 02:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:00.711 02:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:00.711 02:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:00.711 02:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:00.711 02:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:00.711 02:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.711 02:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.711 02:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.711 02:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:00.711 02:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.711 02:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.711 02:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.711 02:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:00.711 02:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:00.711 02:42:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:00.711 02:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:00.711 02:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:00.711 02:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:00.711 02:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:00.711 02:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:00.711 02:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:00.711 02:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:01.326 request: 00:22:01.326 { 00:22:01.326 "name": "nvme0", 00:22:01.326 "trtype": "tcp", 00:22:01.326 "traddr": "10.0.0.2", 00:22:01.326 "adrfam": "ipv4", 00:22:01.326 "trsvcid": "4420", 00:22:01.326 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:01.326 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:01.326 "prchk_reftag": false, 00:22:01.326 "prchk_guard": false, 00:22:01.326 "hdgst": 
false, 00:22:01.326 "ddgst": false, 00:22:01.326 "dhchap_key": "key1", 00:22:01.326 "dhchap_ctrlr_key": "ckey2", 00:22:01.326 "allow_unrecognized_csi": false, 00:22:01.326 "method": "bdev_nvme_attach_controller", 00:22:01.326 "req_id": 1 00:22:01.326 } 00:22:01.326 Got JSON-RPC error response 00:22:01.326 response: 00:22:01.326 { 00:22:01.326 "code": -5, 00:22:01.326 "message": "Input/output error" 00:22:01.326 } 00:22:01.326 02:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:01.326 02:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:01.326 02:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:01.326 02:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:01.326 02:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:01.326 02:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.326 02:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.326 02:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.326 02:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:01.326 02:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.326 02:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.326 02:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.326 02:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:01.326 02:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:01.326 02:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:01.326 02:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:01.326 02:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:01.326 02:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:01.326 02:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:01.326 02:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:01.326 02:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:01.326 02:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:02.274 request: 00:22:02.274 { 00:22:02.274 "name": "nvme0", 00:22:02.274 "trtype": 
"tcp", 00:22:02.274 "traddr": "10.0.0.2", 00:22:02.274 "adrfam": "ipv4", 00:22:02.274 "trsvcid": "4420", 00:22:02.274 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:02.274 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:02.274 "prchk_reftag": false, 00:22:02.274 "prchk_guard": false, 00:22:02.274 "hdgst": false, 00:22:02.274 "ddgst": false, 00:22:02.274 "dhchap_key": "key1", 00:22:02.274 "dhchap_ctrlr_key": "ckey1", 00:22:02.274 "allow_unrecognized_csi": false, 00:22:02.274 "method": "bdev_nvme_attach_controller", 00:22:02.274 "req_id": 1 00:22:02.274 } 00:22:02.274 Got JSON-RPC error response 00:22:02.274 response: 00:22:02.274 { 00:22:02.274 "code": -5, 00:22:02.274 "message": "Input/output error" 00:22:02.274 } 00:22:02.274 02:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:02.274 02:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:02.274 02:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:02.274 02:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:02.274 02:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:02.275 02:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.275 02:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.275 02:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.275 02:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 2962747 00:22:02.275 02:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@954 -- # '[' -z 2962747 ']' 00:22:02.275 02:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2962747 00:22:02.275 02:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:22:02.275 02:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:02.275 02:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2962747 00:22:02.275 02:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:02.275 02:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:02.275 02:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2962747' 00:22:02.275 killing process with pid 2962747 00:22:02.275 02:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2962747 00:22:02.275 02:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2962747 00:22:03.648 02:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:22:03.648 02:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:03.648 02:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:03.648 02:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.648 02:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2986836 00:22:03.648 02:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:22:03.648 02:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2986836 00:22:03.648 02:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2986836 ']' 00:22:03.648 02:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:03.648 02:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:03.648 02:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:03.648 02:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:03.648 02:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.583 02:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:04.583 02:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:22:04.583 02:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:04.583 02:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:04.583 02:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.583 02:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:04.583 02:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:04.583 02:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@163 -- # waitforlisten 2986836 00:22:04.583 02:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2986836 ']' 00:22:04.583 02:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:04.583 02:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:04.583 02:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:04.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:04.583 02:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:04.583 02:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.841 02:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:04.841 02:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:22:04.841 02:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:22:04.841 02:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.841 02:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.407 null0 00:22:05.407 02:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.407 02:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:05.407 02:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.DVw 00:22:05.407 02:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.407 02:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.407 02:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.408 02:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.WK9 ]] 00:22:05.408 02:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.WK9 00:22:05.408 02:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.408 02:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.408 02:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.408 02:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:05.408 02:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.fXn 00:22:05.408 02:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.408 02:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.408 02:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.408 02:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.1E1 ]] 00:22:05.408 02:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.1E1 00:22:05.408 02:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.408 02:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:05.408 02:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.408 02:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:05.408 02:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.rT0 00:22:05.408 02:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.408 02:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.408 02:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.408 02:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.UT2 ]] 00:22:05.408 02:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.UT2 00:22:05.408 02:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.408 02:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.408 02:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.408 02:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:05.408 02:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.A01 00:22:05.408 02:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.408 02:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.408 02:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:22:05.408 02:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:22:05.408 02:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:22:05.408 02:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:05.408 02:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:05.408 02:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:05.408 02:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:05.408 02:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:05.408 02:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:05.408 02:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.408 02:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.408 02:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.408 02:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:05.408 02:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:05.408 02:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:06.783 nvme0n1 00:22:06.783 02:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:06.783 02:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:06.783 02:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:07.040 02:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:07.040 02:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:07.040 02:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.040 02:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.040 02:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.040 02:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:07.040 { 00:22:07.040 "cntlid": 1, 00:22:07.040 "qid": 0, 00:22:07.040 "state": "enabled", 00:22:07.040 "thread": "nvmf_tgt_poll_group_000", 00:22:07.040 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:07.040 "listen_address": { 00:22:07.040 "trtype": "TCP", 00:22:07.040 "adrfam": "IPv4", 00:22:07.040 "traddr": "10.0.0.2", 00:22:07.040 "trsvcid": "4420" 00:22:07.040 }, 00:22:07.040 "peer_address": { 00:22:07.040 "trtype": "TCP", 00:22:07.040 "adrfam": "IPv4", 00:22:07.040 "traddr": 
"10.0.0.1", 00:22:07.040 "trsvcid": "55862" 00:22:07.040 }, 00:22:07.040 "auth": { 00:22:07.040 "state": "completed", 00:22:07.040 "digest": "sha512", 00:22:07.040 "dhgroup": "ffdhe8192" 00:22:07.040 } 00:22:07.040 } 00:22:07.040 ]' 00:22:07.040 02:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:07.040 02:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:07.040 02:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:07.298 02:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:07.298 02:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:07.298 02:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:07.298 02:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:07.298 02:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:07.556 02:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmZkZTY4ZThjMzY1MTljMjIzODU3NDdiMjY3MGY0M2I0YThlM2I3NTQ3MDY2OWU2MjdiM2FkYjg2M2E2YWMyMcK4Uoo=: 00:22:07.556 02:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MmZkZTY4ZThjMzY1MTljMjIzODU3NDdiMjY3MGY0M2I0YThlM2I3NTQ3MDY2OWU2MjdiM2FkYjg2M2E2YWMyMcK4Uoo=: 00:22:08.490 02:42:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:08.490 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:08.490 02:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:08.490 02:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.490 02:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.490 02:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.490 02:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:08.490 02:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.490 02:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.490 02:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.490 02:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:22:08.490 02:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:22:08.748 02:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:08.748 02:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:08.748 02:42:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:08.748 02:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:08.748 02:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:08.748 02:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:08.748 02:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:08.748 02:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:08.748 02:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:08.748 02:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:09.006 request: 00:22:09.006 { 00:22:09.006 "name": "nvme0", 00:22:09.006 "trtype": "tcp", 00:22:09.006 "traddr": "10.0.0.2", 00:22:09.006 "adrfam": "ipv4", 00:22:09.006 "trsvcid": "4420", 00:22:09.006 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:09.006 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:09.006 "prchk_reftag": false, 00:22:09.006 "prchk_guard": false, 00:22:09.006 "hdgst": false, 00:22:09.006 "ddgst": false, 00:22:09.006 "dhchap_key": "key3", 00:22:09.006 
"allow_unrecognized_csi": false, 00:22:09.006 "method": "bdev_nvme_attach_controller", 00:22:09.006 "req_id": 1 00:22:09.006 } 00:22:09.006 Got JSON-RPC error response 00:22:09.006 response: 00:22:09.006 { 00:22:09.006 "code": -5, 00:22:09.006 "message": "Input/output error" 00:22:09.006 } 00:22:09.006 02:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:09.006 02:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:09.006 02:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:09.006 02:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:09.006 02:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:22:09.006 02:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:22:09.006 02:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:09.006 02:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:09.572 02:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:09.572 02:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:09.572 02:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:09.572 02:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:09.572 02:42:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:09.572 02:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:09.572 02:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:09.572 02:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:09.572 02:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:09.572 02:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:09.829 request: 00:22:09.829 { 00:22:09.829 "name": "nvme0", 00:22:09.829 "trtype": "tcp", 00:22:09.829 "traddr": "10.0.0.2", 00:22:09.829 "adrfam": "ipv4", 00:22:09.829 "trsvcid": "4420", 00:22:09.829 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:09.829 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:09.829 "prchk_reftag": false, 00:22:09.829 "prchk_guard": false, 00:22:09.829 "hdgst": false, 00:22:09.829 "ddgst": false, 00:22:09.829 "dhchap_key": "key3", 00:22:09.829 "allow_unrecognized_csi": false, 00:22:09.829 "method": "bdev_nvme_attach_controller", 00:22:09.829 "req_id": 1 00:22:09.829 } 00:22:09.829 Got JSON-RPC error response 00:22:09.829 response: 00:22:09.829 { 00:22:09.829 "code": -5, 00:22:09.829 "message": "Input/output error" 00:22:09.829 } 00:22:09.829 
02:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:09.829 02:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:09.829 02:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:09.829 02:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:09.830 02:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:22:09.830 02:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:22:09.830 02:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:22:09.830 02:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:09.830 02:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:09.830 02:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:10.089 02:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:10.089 02:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.089 02:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.089 02:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.089 02:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:10.089 02:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.089 02:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.089 02:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.089 02:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:10.089 02:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:10.089 02:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:10.089 02:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:10.089 02:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:10.089 02:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:10.089 02:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:10.089 02:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:10.089 02:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:10.089 02:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:10.655 request: 00:22:10.655 { 00:22:10.655 "name": "nvme0", 00:22:10.655 "trtype": "tcp", 00:22:10.655 "traddr": "10.0.0.2", 00:22:10.655 "adrfam": "ipv4", 00:22:10.655 "trsvcid": "4420", 00:22:10.655 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:10.655 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:10.655 "prchk_reftag": false, 00:22:10.655 "prchk_guard": false, 00:22:10.655 "hdgst": false, 00:22:10.655 "ddgst": false, 00:22:10.655 "dhchap_key": "key0", 00:22:10.655 "dhchap_ctrlr_key": "key1", 00:22:10.655 "allow_unrecognized_csi": false, 00:22:10.655 "method": "bdev_nvme_attach_controller", 00:22:10.655 "req_id": 1 00:22:10.655 } 00:22:10.655 Got JSON-RPC error response 00:22:10.655 response: 00:22:10.655 { 00:22:10.655 "code": -5, 00:22:10.655 "message": "Input/output error" 00:22:10.655 } 00:22:10.655 02:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:10.655 02:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:10.655 02:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:10.655 02:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:10.655 02:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:22:10.655 02:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:10.655 02:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:10.921 nvme0n1 00:22:10.921 02:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:22:10.921 02:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:22:10.921 02:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:11.183 02:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:11.183 02:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:11.183 02:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:11.441 02:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:11.441 02:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.441 02:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:22:11.441 02:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.441 02:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:11.441 02:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:11.441 02:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:12.815 nvme0n1 00:22:13.073 02:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:22:13.073 02:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:22:13.073 02:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:13.331 02:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:13.331 02:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:13.331 02:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.331 02:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.331 
02:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.331 02:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:22:13.331 02:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:13.331 02:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:22:13.590 02:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:13.590 02:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:MjgzZmJmNzZkNDQ2N2JmMzk1NGIwZDQ1MjMwMjVhMzVhNTBkMmIxNWFmNTMxYjJmEq20Pg==: --dhchap-ctrl-secret DHHC-1:03:MmZkZTY4ZThjMzY1MTljMjIzODU3NDdiMjY3MGY0M2I0YThlM2I3NTQ3MDY2OWU2MjdiM2FkYjg2M2E2YWMyMcK4Uoo=: 00:22:13.590 02:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MjgzZmJmNzZkNDQ2N2JmMzk1NGIwZDQ1MjMwMjVhMzVhNTBkMmIxNWFmNTMxYjJmEq20Pg==: --dhchap-ctrl-secret DHHC-1:03:MmZkZTY4ZThjMzY1MTljMjIzODU3NDdiMjY3MGY0M2I0YThlM2I3NTQ3MDY2OWU2MjdiM2FkYjg2M2E2YWMyMcK4Uoo=: 00:22:14.524 02:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:22:14.524 02:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:22:14.524 02:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:22:14.524 02:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == 
\n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:22:14.524 02:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:22:14.524 02:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:22:14.524 02:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:22:14.524 02:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:14.524 02:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:14.782 02:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:22:14.782 02:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:14.782 02:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:22:14.782 02:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:14.782 02:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:14.782 02:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:14.782 02:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:14.782 02:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:14.782 02:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:14.782 02:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:15.714 request: 00:22:15.714 { 00:22:15.714 "name": "nvme0", 00:22:15.714 "trtype": "tcp", 00:22:15.714 "traddr": "10.0.0.2", 00:22:15.714 "adrfam": "ipv4", 00:22:15.714 "trsvcid": "4420", 00:22:15.714 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:15.714 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:15.714 "prchk_reftag": false, 00:22:15.714 "prchk_guard": false, 00:22:15.714 "hdgst": false, 00:22:15.714 "ddgst": false, 00:22:15.714 "dhchap_key": "key1", 00:22:15.714 "allow_unrecognized_csi": false, 00:22:15.714 "method": "bdev_nvme_attach_controller", 00:22:15.714 "req_id": 1 00:22:15.714 } 00:22:15.714 Got JSON-RPC error response 00:22:15.714 response: 00:22:15.714 { 00:22:15.714 "code": -5, 00:22:15.714 "message": "Input/output error" 00:22:15.714 } 00:22:15.714 02:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:15.714 02:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:15.714 02:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:15.714 02:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:15.714 02:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:15.714 02:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:15.714 02:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:17.086 nvme0n1 00:22:17.086 02:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:22:17.086 02:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:22:17.086 02:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:17.392 02:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:17.392 02:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:17.392 02:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:17.650 02:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:17.650 02:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.650 02:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:17.650 02:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.650 02:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:22:17.650 02:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:17.650 02:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:17.908 nvme0n1 00:22:18.166 02:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:22:18.166 02:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:22:18.166 02:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:18.423 02:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:18.423 02:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:18.423 02:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:18.681 02:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:18.681 02:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.681 02:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.681 02:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.681 02:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:N2RmMjUwYzg4YmM4ZGRiNTk3OWYyZTEzOWM5Y2VmY2bnNpH/: '' 2s 00:22:18.681 02:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:18.681 02:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:18.681 02:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:N2RmMjUwYzg4YmM4ZGRiNTk3OWYyZTEzOWM5Y2VmY2bnNpH/: 00:22:18.681 02:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:22:18.681 02:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:18.681 02:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:18.681 02:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:N2RmMjUwYzg4YmM4ZGRiNTk3OWYyZTEzOWM5Y2VmY2bnNpH/: ]] 00:22:18.681 02:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:N2RmMjUwYzg4YmM4ZGRiNTk3OWYyZTEzOWM5Y2VmY2bnNpH/: 00:22:18.681 02:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:22:18.681 02:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:18.681 02:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:20.580 
02:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:22:20.580 02:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:22:20.580 02:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:20.580 02:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:20.580 02:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:20.580 02:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:22:20.580 02:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:22:20.580 02:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key key2 00:22:20.580 02:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.580 02:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.580 02:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.580 02:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:MjgzZmJmNzZkNDQ2N2JmMzk1NGIwZDQ1MjMwMjVhMzVhNTBkMmIxNWFmNTMxYjJmEq20Pg==: 2s 00:22:20.580 02:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:20.580 02:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:20.580 02:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:22:20.580 02:42:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:MjgzZmJmNzZkNDQ2N2JmMzk1NGIwZDQ1MjMwMjVhMzVhNTBkMmIxNWFmNTMxYjJmEq20Pg==: 00:22:20.580 02:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:20.580 02:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:20.580 02:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:22:20.580 02:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:MjgzZmJmNzZkNDQ2N2JmMzk1NGIwZDQ1MjMwMjVhMzVhNTBkMmIxNWFmNTMxYjJmEq20Pg==: ]] 00:22:20.580 02:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:MjgzZmJmNzZkNDQ2N2JmMzk1NGIwZDQ1MjMwMjVhMzVhNTBkMmIxNWFmNTMxYjJmEq20Pg==: 00:22:20.580 02:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:20.580 02:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:23.110 02:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:22:23.110 02:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:22:23.110 02:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:23.110 02:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:23.110 02:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:23.110 02:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:22:23.110 02:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:22:23.110 02:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 
-- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:23.110 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:23.110 02:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:23.110 02:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.110 02:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.110 02:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.110 02:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:23.110 02:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:23.110 02:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:24.485 nvme0n1 00:22:24.485 02:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--dhchap-key key2 --dhchap-ctrlr-key key3 00:22:24.485 02:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.485 02:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.485 02:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.485 02:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:24.485 02:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:25.050 02:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:22:25.050 02:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:22:25.050 02:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:25.308 02:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:25.308 02:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:25.308 02:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.308 02:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.308 02:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.308 02:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:22:25.308 02:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:22:25.566 02:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:22:25.566 02:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:25.566 02:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:22:25.824 02:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:25.824 02:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:25.824 02:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.824 02:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.824 02:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.824 02:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:25.824 02:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:25.824 02:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:25.824 02:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@640 -- # local arg=hostrpc 00:22:25.824 02:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:25.824 02:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:22:25.824 02:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:25.824 02:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:25.824 02:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:26.758 request: 00:22:26.758 { 00:22:26.758 "name": "nvme0", 00:22:26.758 "dhchap_key": "key1", 00:22:26.758 "dhchap_ctrlr_key": "key3", 00:22:26.758 "method": "bdev_nvme_set_keys", 00:22:26.758 "req_id": 1 00:22:26.758 } 00:22:26.758 Got JSON-RPC error response 00:22:26.758 response: 00:22:26.758 { 00:22:26.758 "code": -13, 00:22:26.758 "message": "Permission denied" 00:22:26.758 } 00:22:26.758 02:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:26.758 02:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:26.758 02:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:26.758 02:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:26.758 02:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:22:26.758 02:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:26.758 02:42:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:27.324 02:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:22:27.324 02:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:22:28.256 02:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:22:28.256 02:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:28.256 02:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:28.513 02:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:22:28.513 02:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:28.513 02:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.513 02:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.513 02:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.513 02:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:28.513 02:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:28.513 02:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:29.887 nvme0n1 00:22:29.887 02:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:29.887 02:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.887 02:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.887 02:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.887 02:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:29.887 02:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:29.887 02:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:29.887 02:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:22:29.887 02:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:29.887 02:42:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:22:29.887 02:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:29.887 02:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:29.887 02:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:30.821 request: 00:22:30.821 { 00:22:30.821 "name": "nvme0", 00:22:30.821 "dhchap_key": "key2", 00:22:30.821 "dhchap_ctrlr_key": "key0", 00:22:30.821 "method": "bdev_nvme_set_keys", 00:22:30.821 "req_id": 1 00:22:30.821 } 00:22:30.821 Got JSON-RPC error response 00:22:30.821 response: 00:22:30.821 { 00:22:30.821 "code": -13, 00:22:30.821 "message": "Permission denied" 00:22:30.821 } 00:22:30.821 02:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:30.821 02:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:30.821 02:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:30.821 02:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:30.821 02:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:30.821 02:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:30.821 02:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:31.078 02:42:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:22:31.078 02:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:22:32.450 02:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:32.450 02:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:32.450 02:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:32.450 02:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:22:32.450 02:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:22:32.450 02:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:22:32.450 02:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2962902 00:22:32.450 02:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2962902 ']' 00:22:32.450 02:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2962902 00:22:32.450 02:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:22:32.450 02:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:32.450 02:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2962902 00:22:32.450 02:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:32.450 02:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:32.450 02:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@972 -- # echo 'killing process with pid 2962902' 00:22:32.450 killing process with pid 2962902 00:22:32.450 02:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2962902 00:22:32.450 02:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2962902 00:22:35.037 02:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:22:35.037 02:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:35.037 02:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:22:35.037 02:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:35.037 02:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:22:35.037 02:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:35.037 02:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:35.037 rmmod nvme_tcp 00:22:35.037 rmmod nvme_fabrics 00:22:35.037 rmmod nvme_keyring 00:22:35.037 02:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:35.037 02:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:22:35.037 02:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:22:35.037 02:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 2986836 ']' 00:22:35.037 02:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 2986836 00:22:35.037 02:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2986836 ']' 00:22:35.037 02:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2986836 
00:22:35.037 02:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:22:35.037 02:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:35.037 02:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2986836 00:22:35.037 02:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:35.037 02:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:35.037 02:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2986836' 00:22:35.037 killing process with pid 2986836 00:22:35.037 02:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2986836 00:22:35.037 02:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2986836 00:22:35.971 02:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:35.971 02:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:35.971 02:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:35.971 02:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:22:35.971 02:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:22:35.971 02:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:35.971 02:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:22:35.971 02:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:35.971 02:42:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:35.971 02:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:35.971 02:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:35.971 02:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:37.872 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:37.872 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.DVw /tmp/spdk.key-sha256.fXn /tmp/spdk.key-sha384.rT0 /tmp/spdk.key-sha512.A01 /tmp/spdk.key-sha512.WK9 /tmp/spdk.key-sha384.1E1 /tmp/spdk.key-sha256.UT2 '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:22:37.872 00:22:37.872 real 3m46.838s 00:22:37.872 user 8m46.644s 00:22:37.872 sys 0m27.833s 00:22:37.872 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:37.872 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.872 ************************************ 00:22:37.872 END TEST nvmf_auth_target 00:22:37.872 ************************************ 00:22:37.872 02:42:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:22:37.872 02:42:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:37.872 02:42:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:22:37.872 02:42:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- 
# xtrace_disable 00:22:37.872 02:42:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:38.131 ************************************ 00:22:38.131 START TEST nvmf_bdevio_no_huge 00:22:38.131 ************************************ 00:22:38.132 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:38.132 * Looking for test storage... 00:22:38.132 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:38.132 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:38.132 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:22:38.132 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:38.132 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:38.132 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:38.132 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:38.132 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:38.132 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:22:38.132 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:22:38.132 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:22:38.132 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:22:38.132 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- 
# local 'op=<' 00:22:38.132 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:22:38.132 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:22:38.132 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:38.132 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:22:38.132 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:22:38.132 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:38.132 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:38.132 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:22:38.132 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:22:38.132 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:38.132 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:22:38.132 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:22:38.132 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:22:38.132 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:22:38.132 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:38.132 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:22:38.132 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:22:38.132 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:38.132 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:38.132 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:22:38.132 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:38.132 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:38.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:38.132 --rc genhtml_branch_coverage=1 00:22:38.132 --rc genhtml_function_coverage=1 00:22:38.132 --rc genhtml_legend=1 00:22:38.132 --rc geninfo_all_blocks=1 00:22:38.132 --rc geninfo_unexecuted_blocks=1 00:22:38.132 00:22:38.132 ' 00:22:38.132 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:38.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:38.132 --rc genhtml_branch_coverage=1 00:22:38.132 --rc genhtml_function_coverage=1 00:22:38.132 --rc genhtml_legend=1 00:22:38.132 --rc geninfo_all_blocks=1 00:22:38.132 --rc geninfo_unexecuted_blocks=1 00:22:38.132 00:22:38.132 ' 00:22:38.132 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:38.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:38.132 --rc genhtml_branch_coverage=1 00:22:38.132 --rc genhtml_function_coverage=1 00:22:38.132 --rc genhtml_legend=1 00:22:38.132 --rc geninfo_all_blocks=1 00:22:38.132 --rc geninfo_unexecuted_blocks=1 00:22:38.132 00:22:38.132 ' 00:22:38.132 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:38.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:38.132 --rc genhtml_branch_coverage=1 
00:22:38.132 --rc genhtml_function_coverage=1 00:22:38.132 --rc genhtml_legend=1 00:22:38.132 --rc geninfo_all_blocks=1 00:22:38.132 --rc geninfo_unexecuted_blocks=1 00:22:38.132 00:22:38.132 ' 00:22:38.132 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:38.132 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:22:38.132 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:38.132 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:38.132 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:38.132 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:38.132 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:38.132 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:38.132 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:38.132 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:38.132 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:38.132 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:38.132 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:38.132 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:38.132 02:42:46 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:38.132 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:38.132 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:38.132 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:38.132 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:38.132 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:22:38.132 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:38.132 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:38.132 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:38.132 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.132 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.132 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.132 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:22:38.132 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.132 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:22:38.132 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:38.132 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:38.132 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:38.132 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:38.132 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:38.133 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:38.133 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:38.133 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:38.133 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:38.133 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:38.133 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:22:38.133 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:38.133 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:22:38.133 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:38.133 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:38.133 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:38.133 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:38.133 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:38.133 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:38.133 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:38.133 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:38.133 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:38.133 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:38.133 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:22:38.133 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:40.034 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:40.034 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:22:40.034 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:22:40.034 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:40.034 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:40.035 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:40.035 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:40.035 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:22:40.035 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:40.035 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:22:40.035 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:22:40.035 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:22:40.035 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:22:40.035 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:22:40.035 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:22:40.035 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:40.035 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:40.035 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:40.035 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:40.035 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:40.035 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:40.035 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:40.035 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:40.035 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:40.035 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:40.035 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:40.035 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:40.035 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:40.035 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:40.035 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:40.035 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:40.035 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:40.035 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:40.035 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:40.035 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 
0x159b)' 00:22:40.035 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:40.035 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:40.035 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:40.035 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:40.035 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:40.035 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:40.035 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:40.035 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:40.035 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:40.035 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:40.035 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:40.035 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:40.035 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:40.035 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:40.035 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:40.035 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:40.035 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:40.035 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- 
# for pci in "${pci_devs[@]}" 00:22:40.293 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:40.293 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:40.293 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:40.293 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:40.293 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:40.294 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:40.294 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:40.294 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:40.294 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:40.294 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:40.294 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:40.294 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:40.294 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:40.294 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:40.294 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:40.294 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:40.294 
02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:40.294 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:40.294 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:40.294 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:40.294 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:22:40.294 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:40.294 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:40.294 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:40.294 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:40.294 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:40.294 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:40.294 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:40.294 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:40.294 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:40.294 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:40.294 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:40.294 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:22:40.294 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:40.294 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:40.294 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:40.294 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:40.294 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:40.294 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:40.294 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:40.294 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:40.294 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:40.294 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:40.294 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:40.294 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:40.294 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:40.294 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:22:40.294 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:40.294 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms 00:22:40.294 00:22:40.294 --- 10.0.0.2 ping statistics --- 00:22:40.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:40.294 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:22:40.294 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:40.294 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:40.294 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:22:40.294 00:22:40.294 --- 10.0.0.1 ping statistics --- 00:22:40.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:40.294 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:22:40.294 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:40.294 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:22:40.294 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:40.294 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:40.294 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:40.294 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:40.294 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:40.294 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:40.294 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:40.294 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:22:40.294 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:40.294 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:40.294 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:40.294 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=2992716 00:22:40.294 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:40.294 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 2992716 00:22:40.294 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 2992716 ']' 00:22:40.294 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:40.294 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:40.294 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:40.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:40.294 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:40.294 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:40.294 [2024-11-17 02:42:48.728506] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:22:40.294 [2024-11-17 02:42:48.728641] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:40.553 [2024-11-17 02:42:48.891020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:40.812 [2024-11-17 02:42:49.037238] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:40.812 [2024-11-17 02:42:49.037330] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:40.812 [2024-11-17 02:42:49.037356] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:40.812 [2024-11-17 02:42:49.037381] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:40.812 [2024-11-17 02:42:49.037401] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:40.812 [2024-11-17 02:42:49.039558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:40.812 [2024-11-17 02:42:49.039617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:22:40.812 [2024-11-17 02:42:49.039666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:40.812 [2024-11-17 02:42:49.039673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:22:41.378 02:42:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:41.378 02:42:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:22:41.378 02:42:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:41.378 02:42:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:41.378 02:42:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:41.378 02:42:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:41.378 02:42:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:41.378 02:42:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.378 02:42:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:41.378 [2024-11-17 02:42:49.798452] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:41.379 02:42:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.379 02:42:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:41.379 02:42:49 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.379 02:42:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:41.637 Malloc0 00:22:41.637 02:42:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.637 02:42:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:41.637 02:42:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.637 02:42:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:41.637 02:42:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.637 02:42:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:41.637 02:42:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.637 02:42:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:41.637 02:42:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.637 02:42:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:41.637 02:42:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.637 02:42:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:41.637 [2024-11-17 02:42:49.890091] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:41.637 02:42:49 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.637 02:42:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:41.637 02:42:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:41.637 02:42:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:22:41.637 02:42:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:22:41.637 02:42:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:41.637 02:42:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:41.637 { 00:22:41.637 "params": { 00:22:41.637 "name": "Nvme$subsystem", 00:22:41.637 "trtype": "$TEST_TRANSPORT", 00:22:41.637 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:41.637 "adrfam": "ipv4", 00:22:41.637 "trsvcid": "$NVMF_PORT", 00:22:41.637 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:41.637 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:41.637 "hdgst": ${hdgst:-false}, 00:22:41.637 "ddgst": ${ddgst:-false} 00:22:41.637 }, 00:22:41.637 "method": "bdev_nvme_attach_controller" 00:22:41.637 } 00:22:41.637 EOF 00:22:41.637 )") 00:22:41.637 02:42:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:22:41.637 02:42:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:22:41.637 02:42:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:22:41.637 02:42:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:41.637 "params": { 00:22:41.637 "name": "Nvme1", 00:22:41.637 "trtype": "tcp", 00:22:41.637 "traddr": "10.0.0.2", 00:22:41.637 "adrfam": "ipv4", 00:22:41.637 "trsvcid": "4420", 00:22:41.637 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:41.637 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:41.637 "hdgst": false, 00:22:41.637 "ddgst": false 00:22:41.637 }, 00:22:41.637 "method": "bdev_nvme_attach_controller" 00:22:41.637 }' 00:22:41.637 [2024-11-17 02:42:49.976952] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:22:41.637 [2024-11-17 02:42:49.977138] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2992873 ] 00:22:41.895 [2024-11-17 02:42:50.136678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:41.895 [2024-11-17 02:42:50.278571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:41.895 [2024-11-17 02:42:50.278613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:41.895 [2024-11-17 02:42:50.278622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:42.464 I/O targets: 00:22:42.464 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:42.464 00:22:42.464 00:22:42.464 CUnit - A unit testing framework for C - Version 2.1-3 00:22:42.464 http://cunit.sourceforge.net/ 00:22:42.464 00:22:42.464 00:22:42.464 Suite: bdevio tests on: Nvme1n1 00:22:42.722 Test: blockdev write read block ...passed 00:22:42.722 Test: blockdev write zeroes read block ...passed 00:22:42.722 Test: blockdev write zeroes read no split ...passed 00:22:42.722 Test: blockdev write zeroes 
read split ...passed 00:22:42.722 Test: blockdev write zeroes read split partial ...passed 00:22:42.722 Test: blockdev reset ...[2024-11-17 02:42:51.056997] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:42.722 [2024-11-17 02:42:51.057212] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f1100 (9): Bad file descriptor 00:22:42.722 [2024-11-17 02:42:51.074408] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:22:42.722 passed 00:22:42.722 Test: blockdev write read 8 blocks ...passed 00:22:42.722 Test: blockdev write read size > 128k ...passed 00:22:42.722 Test: blockdev write read invalid size ...passed 00:22:42.722 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:42.722 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:42.722 Test: blockdev write read max offset ...passed 00:22:42.981 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:42.981 Test: blockdev writev readv 8 blocks ...passed 00:22:42.981 Test: blockdev writev readv 30 x 1block ...passed 00:22:42.981 Test: blockdev writev readv block ...passed 00:22:42.981 Test: blockdev writev readv size > 128k ...passed 00:22:42.981 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:42.981 Test: blockdev comparev and writev ...[2024-11-17 02:42:51.294594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:42.981 [2024-11-17 02:42:51.294680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.981 [2024-11-17 02:42:51.294720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:42.981 
[2024-11-17 02:42:51.294747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:42.981 [2024-11-17 02:42:51.295235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:42.981 [2024-11-17 02:42:51.295270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:42.981 [2024-11-17 02:42:51.295304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:42.981 [2024-11-17 02:42:51.295329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:42.981 [2024-11-17 02:42:51.295794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:42.981 [2024-11-17 02:42:51.295828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:42.981 [2024-11-17 02:42:51.295861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:42.981 [2024-11-17 02:42:51.295886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:42.981 [2024-11-17 02:42:51.296355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:42.981 [2024-11-17 02:42:51.296387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:42.981 [2024-11-17 02:42:51.296421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:42.981 [2024-11-17 02:42:51.296445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:42.981 passed 00:22:42.981 Test: blockdev nvme passthru rw ...passed 00:22:42.981 Test: blockdev nvme passthru vendor specific ...[2024-11-17 02:42:51.379566] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:42.982 [2024-11-17 02:42:51.379629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:42.982 [2024-11-17 02:42:51.379880] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:42.982 [2024-11-17 02:42:51.379913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:42.982 [2024-11-17 02:42:51.380121] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:42.982 [2024-11-17 02:42:51.380154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:42.982 [2024-11-17 02:42:51.380351] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:42.982 [2024-11-17 02:42:51.380394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:42.982 passed 00:22:42.982 Test: blockdev nvme admin passthru ...passed 00:22:42.982 Test: blockdev copy ...passed 00:22:42.982 00:22:42.982 Run Summary: Type Total Ran Passed Failed Inactive 00:22:42.982 suites 1 1 n/a 0 0 00:22:42.982 tests 23 23 23 0 0 00:22:42.982 asserts 152 152 152 0 n/a 00:22:42.982 00:22:42.982 Elapsed time = 1.082 
seconds 00:22:43.917 02:42:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:43.917 02:42:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.917 02:42:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:43.917 02:42:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.917 02:42:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:43.917 02:42:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:22:43.917 02:42:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:43.917 02:42:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:22:43.917 02:42:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:43.917 02:42:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:22:43.917 02:42:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:43.917 02:42:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:43.917 rmmod nvme_tcp 00:22:43.917 rmmod nvme_fabrics 00:22:43.917 rmmod nvme_keyring 00:22:43.917 02:42:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:43.917 02:42:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:22:43.917 02:42:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:22:43.917 02:42:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 2992716 ']' 00:22:43.917 02:42:52 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 2992716 00:22:43.917 02:42:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 2992716 ']' 00:22:43.917 02:42:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 2992716 00:22:43.917 02:42:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:22:43.917 02:42:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:43.917 02:42:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2992716 00:22:43.917 02:42:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:22:43.917 02:42:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:22:43.917 02:42:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2992716' 00:22:43.917 killing process with pid 2992716 00:22:43.917 02:42:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 2992716 00:22:43.917 02:42:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 2992716 00:22:44.853 02:42:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:44.853 02:42:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:44.854 02:42:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:44.854 02:42:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:22:44.854 02:42:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:22:44.854 02:42:53 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:44.854 02:42:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:22:44.854 02:42:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:44.854 02:42:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:44.854 02:42:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:44.854 02:42:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:44.854 02:42:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:46.755 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:46.755 00:22:46.755 real 0m8.742s 00:22:46.755 user 0m20.547s 00:22:46.755 sys 0m2.831s 00:22:46.755 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:46.755 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:46.755 ************************************ 00:22:46.755 END TEST nvmf_bdevio_no_huge 00:22:46.755 ************************************ 00:22:46.755 02:42:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:46.755 02:42:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:46.755 02:42:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:46.755 02:42:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:46.755 
************************************ 00:22:46.755 START TEST nvmf_tls 00:22:46.755 ************************************ 00:22:46.755 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:46.755 * Looking for test storage... 00:22:46.755 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:46.755 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:46.755 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:22:46.755 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:47.013 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:47.013 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:47.013 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:47.013 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:47.013 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:22:47.013 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:22:47.013 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:22:47.013 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:22:47.013 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:22:47.013 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:22:47.013 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:22:47.013 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:22:47.013 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:22:47.013 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:22:47.013 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:47.013 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:47.013 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:22:47.013 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:22:47.013 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:47.013 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:22:47.013 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:22:47.013 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:22:47.013 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:22:47.014 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:47.014 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:22:47.014 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:22:47.014 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:47.014 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:47.014 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:22:47.014 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:47.014 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:47.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:47.014 --rc genhtml_branch_coverage=1 00:22:47.014 --rc genhtml_function_coverage=1 00:22:47.014 --rc genhtml_legend=1 00:22:47.014 --rc geninfo_all_blocks=1 00:22:47.014 --rc geninfo_unexecuted_blocks=1 00:22:47.014 00:22:47.014 ' 00:22:47.014 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:47.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:47.014 --rc genhtml_branch_coverage=1 00:22:47.014 --rc genhtml_function_coverage=1 00:22:47.014 --rc genhtml_legend=1 00:22:47.014 --rc geninfo_all_blocks=1 00:22:47.014 --rc geninfo_unexecuted_blocks=1 00:22:47.014 00:22:47.014 ' 00:22:47.014 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:47.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:47.014 --rc genhtml_branch_coverage=1 00:22:47.014 --rc genhtml_function_coverage=1 00:22:47.014 --rc genhtml_legend=1 00:22:47.014 --rc geninfo_all_blocks=1 00:22:47.014 --rc geninfo_unexecuted_blocks=1 00:22:47.014 00:22:47.014 ' 00:22:47.014 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:47.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:47.014 --rc genhtml_branch_coverage=1 00:22:47.014 --rc genhtml_function_coverage=1 00:22:47.014 --rc genhtml_legend=1 00:22:47.014 --rc geninfo_all_blocks=1 00:22:47.014 --rc geninfo_unexecuted_blocks=1 00:22:47.014 00:22:47.014 ' 00:22:47.014 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:47.014 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:22:47.014 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:47.014 
02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:47.014 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:47.014 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:47.014 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:47.014 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:47.014 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:47.014 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:47.014 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:47.014 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:47.014 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:47.014 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:47.014 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:47.014 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:47.014 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:47.014 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:47.014 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:47.014 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 
00:22:47.014 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:47.014 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:47.014 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:47.014 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.014 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.014 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.014 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:22:47.014 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.014 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:22:47.014 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:47.014 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:47.014 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:47.014 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:47.014 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:47.014 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:47.014 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:47.014 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:47.014 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:47.014 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:47.014 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:47.014 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:22:47.014 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:47.014 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:47.014 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:47.014 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:47.014 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:47.014 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:47.014 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:47.014 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:47.014 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:47.014 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:47.014 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:22:47.014 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:48.918 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:48.918 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:22:48.918 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:48.918 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:48.918 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:48.918 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:48.918 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:48.918 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:22:48.918 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:48.918 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:22:48.918 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:22:48.918 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:22:48.918 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:22:48.918 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:22:48.918 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:22:48.918 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:48.918 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:48.918 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:48.918 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:48.918 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:49.178 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:49.178 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:49.178 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:49.178 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:49.178 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:49.178 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:49.178 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:49.178 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:49.178 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:49.178 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:49.178 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:49.178 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:49.178 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:49.178 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:49.178 02:42:57 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:49.178 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:49.178 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:49.178 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:49.178 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:49.178 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:49.178 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:49.178 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:49.178 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:49.178 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:49.178 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:49.178 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:49.178 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:49.178 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:49.178 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:49.178 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:49.178 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:49.178 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:49.178 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:49.178 02:42:57 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:49.178 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:49.178 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:49.178 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:49.178 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:49.178 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:49.178 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:49.178 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:49.178 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:49.178 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:49.178 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:49.178 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:49.178 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:49.178 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:49.178 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:49.178 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:49.178 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:49.178 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:49.178 02:42:57 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:49.178 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:49.178 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:22:49.178 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:49.178 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:49.178 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:49.178 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:49.178 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:49.178 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:49.178 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:49.178 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:49.178 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:49.178 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:49.178 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:49.178 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:49.178 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:49.178 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:49.178 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:49.178 
02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:49.178 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:49.178 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:49.178 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:49.178 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:49.178 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:49.178 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:49.178 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:49.178 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:49.178 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:49.178 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:49.178 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:49.178 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.312 ms 00:22:49.178 00:22:49.178 --- 10.0.0.2 ping statistics --- 00:22:49.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:49.178 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:22:49.178 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:49.178 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:49.178 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:22:49.178 00:22:49.178 --- 10.0.0.1 ping statistics --- 00:22:49.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:49.178 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:22:49.178 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:49.178 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:22:49.178 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:49.178 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:49.178 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:49.178 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:49.178 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:49.178 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:49.178 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:49.179 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:49.179 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:49.179 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:49.179 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:49.179 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2995110 00:22:49.179 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x2 --wait-for-rpc 00:22:49.179 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2995110 00:22:49.179 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2995110 ']' 00:22:49.179 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:49.179 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:49.179 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:49.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:49.179 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:49.179 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:49.437 [2024-11-17 02:42:57.651319] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:22:49.437 [2024-11-17 02:42:57.651482] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:49.438 [2024-11-17 02:42:57.796698] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:49.696 [2024-11-17 02:42:57.927623] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:49.696 [2024-11-17 02:42:57.927711] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:49.696 [2024-11-17 02:42:57.927737] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:49.696 [2024-11-17 02:42:57.927762] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:49.696 [2024-11-17 02:42:57.927781] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:49.696 [2024-11-17 02:42:57.929436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:50.262 02:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:50.262 02:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:50.262 02:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:50.262 02:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:50.262 02:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:50.262 02:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:50.262 02:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:22:50.262 02:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:22:50.520 true 00:22:50.778 02:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:50.778 02:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:22:51.035 02:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:22:51.035 02:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:22:51.035 
02:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:51.293 02:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:51.293 02:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:22:51.551 02:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:22:51.551 02:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:22:51.551 02:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:22:51.809 02:43:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:51.809 02:43:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:22:52.067 02:43:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:22:52.067 02:43:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:22:52.067 02:43:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:52.067 02:43:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:22:52.325 02:43:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:22:52.325 02:43:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:22:52.325 02:43:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
00:22:52.892 02:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:52.892 02:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:22:52.892 02:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:22:52.892 02:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:22:52.892 02:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:53.150 02:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:53.150 02:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:22:53.408 02:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:22:53.408 02:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:22:53.408 02:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:22:53.408 02:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:22:53.408 02:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:22:53.408 02:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:53.408 02:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:22:53.408 02:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:22:53.408 02:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:53.666 02:43:01 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:53.666 02:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:22:53.666 02:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:22:53.666 02:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:22:53.666 02:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:53.666 02:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:22:53.666 02:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:22:53.666 02:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:53.666 02:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:53.666 02:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:22:53.666 02:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.iVdewDSrbk 00:22:53.666 02:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:22:53.666 02:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.LFIn2WKkYx 00:22:53.666 02:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:53.666 02:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:53.666 02:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.iVdewDSrbk 00:22:53.666 02:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.LFIn2WKkYx 00:22:53.666 02:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:53.924 02:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:22:54.490 02:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.iVdewDSrbk 00:22:54.490 02:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.iVdewDSrbk 00:22:54.490 02:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:54.748 [2024-11-17 02:43:03.161449] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:54.748 02:43:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:55.006 02:43:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:55.264 [2024-11-17 02:43:03.698899] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:55.264 [2024-11-17 02:43:03.699321] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:55.264 02:43:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:55.831 malloc0 00:22:55.831 02:43:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:56.089 02:43:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.iVdewDSrbk 00:22:56.347 02:43:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:56.606 02:43:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.iVdewDSrbk 00:23:08.805 Initializing NVMe Controllers 00:23:08.805 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:08.805 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:08.805 Initialization complete. Launching workers. 
00:23:08.805 ======================================================== 00:23:08.805 Latency(us) 00:23:08.805 Device Information : IOPS MiB/s Average min max 00:23:08.805 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5607.14 21.90 11419.12 2279.03 12770.41 00:23:08.805 ======================================================== 00:23:08.805 Total : 5607.14 21.90 11419.12 2279.03 12770.41 00:23:08.805 00:23:08.805 02:43:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.iVdewDSrbk 00:23:08.805 02:43:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:08.805 02:43:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:08.805 02:43:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:08.805 02:43:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.iVdewDSrbk 00:23:08.805 02:43:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:08.805 02:43:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2997247 00:23:08.805 02:43:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:08.805 02:43:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2997247 /var/tmp/bdevperf.sock 00:23:08.805 02:43:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:08.805 02:43:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2997247 ']' 00:23:08.805 02:43:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:23:08.805 02:43:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:08.805 02:43:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:08.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:08.805 02:43:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:08.805 02:43:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:08.805 [2024-11-17 02:43:15.258740] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:23:08.805 [2024-11-17 02:43:15.258878] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2997247 ] 00:23:08.805 [2024-11-17 02:43:15.391722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:08.805 [2024-11-17 02:43:15.514886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:08.805 02:43:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:08.805 02:43:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:08.805 02:43:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.iVdewDSrbk 00:23:08.805 02:43:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:08.805 [2024-11-17 02:43:16.723550] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:08.805 TLSTESTn1 00:23:08.805 02:43:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:08.805 Running I/O for 10 seconds... 00:23:10.675 2384.00 IOPS, 9.31 MiB/s [2024-11-17T01:43:20.068Z] 2449.00 IOPS, 9.57 MiB/s [2024-11-17T01:43:20.998Z] 2470.00 IOPS, 9.65 MiB/s [2024-11-17T01:43:22.371Z] 2477.75 IOPS, 9.68 MiB/s [2024-11-17T01:43:23.306Z] 2487.00 IOPS, 9.71 MiB/s [2024-11-17T01:43:24.299Z] 2494.67 IOPS, 9.74 MiB/s [2024-11-17T01:43:25.234Z] 2492.71 IOPS, 9.74 MiB/s [2024-11-17T01:43:26.168Z] 2494.50 IOPS, 9.74 MiB/s [2024-11-17T01:43:27.104Z] 2494.78 IOPS, 9.75 MiB/s [2024-11-17T01:43:27.104Z] 2497.40 IOPS, 9.76 MiB/s 00:23:18.644 Latency(us) 00:23:18.644 [2024-11-17T01:43:27.104Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:18.644 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:18.644 Verification LBA range: start 0x0 length 0x2000 00:23:18.644 TLSTESTn1 : 10.03 2503.83 9.78 0.00 0.00 51030.22 9903.22 50875.35 00:23:18.644 [2024-11-17T01:43:27.104Z] =================================================================================================================== 00:23:18.644 [2024-11-17T01:43:27.104Z] Total : 2503.83 9.78 0.00 0.00 51030.22 9903.22 50875.35 00:23:18.644 { 00:23:18.644 "results": [ 00:23:18.644 { 00:23:18.644 "job": "TLSTESTn1", 00:23:18.644 "core_mask": "0x4", 00:23:18.644 "workload": "verify", 00:23:18.644 "status": "finished", 00:23:18.644 "verify_range": { 00:23:18.644 "start": 0, 00:23:18.644 "length": 8192 00:23:18.644 }, 00:23:18.644 "queue_depth": 128, 00:23:18.644 "io_size": 4096, 00:23:18.644 "runtime": 
10.025442, 00:23:18.644 "iops": 2503.829756333935, 00:23:18.644 "mibps": 9.780584985679434, 00:23:18.644 "io_failed": 0, 00:23:18.644 "io_timeout": 0, 00:23:18.644 "avg_latency_us": 51030.219248635934, 00:23:18.644 "min_latency_us": 9903.217777777778, 00:23:18.644 "max_latency_us": 50875.35407407407 00:23:18.644 } 00:23:18.644 ], 00:23:18.644 "core_count": 1 00:23:18.644 } 00:23:18.644 02:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:18.644 02:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2997247 00:23:18.644 02:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2997247 ']' 00:23:18.644 02:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2997247 00:23:18.644 02:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:18.644 02:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:18.644 02:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2997247 00:23:18.644 02:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:18.644 02:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:18.644 02:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2997247' 00:23:18.644 killing process with pid 2997247 00:23:18.644 02:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2997247 00:23:18.644 Received shutdown signal, test time was about 10.000000 seconds 00:23:18.644 00:23:18.644 Latency(us) 00:23:18.644 [2024-11-17T01:43:27.104Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:18.644 [2024-11-17T01:43:27.104Z] 
=================================================================================================================== 00:23:18.644 [2024-11-17T01:43:27.104Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:18.644 02:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2997247 00:23:19.579 02:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.LFIn2WKkYx 00:23:19.579 02:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:19.579 02:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.LFIn2WKkYx 00:23:19.579 02:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:19.579 02:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:19.579 02:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:19.579 02:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:19.579 02:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.LFIn2WKkYx 00:23:19.579 02:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:19.579 02:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:19.579 02:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:19.579 02:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.LFIn2WKkYx 00:23:19.579 02:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:19.579 02:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2998696 00:23:19.579 02:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:19.579 02:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:19.579 02:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2998696 /var/tmp/bdevperf.sock 00:23:19.579 02:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2998696 ']' 00:23:19.579 02:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:19.579 02:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:19.579 02:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:19.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:19.579 02:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:19.579 02:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:19.579 [2024-11-17 02:43:27.968386] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:23:19.579 [2024-11-17 02:43:27.968517] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2998696 ] 00:23:19.838 [2024-11-17 02:43:28.100250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:19.838 [2024-11-17 02:43:28.219142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:20.772 02:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:20.772 02:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:20.772 02:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.LFIn2WKkYx 00:23:21.029 02:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:21.029 [2024-11-17 02:43:29.483908] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:21.287 [2024-11-17 02:43:29.494633] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:21.287 [2024-11-17 02:43:29.495387] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (107): Transport endpoint is not connected 00:23:21.288 [2024-11-17 02:43:29.496347] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:23:21.288 
[2024-11-17 02:43:29.497340] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:23:21.288 [2024-11-17 02:43:29.497397] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:21.288 [2024-11-17 02:43:29.497421] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:21.288 [2024-11-17 02:43:29.497469] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:23:21.288 request: 00:23:21.288 { 00:23:21.288 "name": "TLSTEST", 00:23:21.288 "trtype": "tcp", 00:23:21.288 "traddr": "10.0.0.2", 00:23:21.288 "adrfam": "ipv4", 00:23:21.288 "trsvcid": "4420", 00:23:21.288 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:21.288 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:21.288 "prchk_reftag": false, 00:23:21.288 "prchk_guard": false, 00:23:21.288 "hdgst": false, 00:23:21.288 "ddgst": false, 00:23:21.288 "psk": "key0", 00:23:21.288 "allow_unrecognized_csi": false, 00:23:21.288 "method": "bdev_nvme_attach_controller", 00:23:21.288 "req_id": 1 00:23:21.288 } 00:23:21.288 Got JSON-RPC error response 00:23:21.288 response: 00:23:21.288 { 00:23:21.288 "code": -5, 00:23:21.288 "message": "Input/output error" 00:23:21.288 } 00:23:21.288 02:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2998696 00:23:21.288 02:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2998696 ']' 00:23:21.288 02:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2998696 00:23:21.288 02:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:21.288 02:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:21.288 02:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2998696 00:23:21.288 02:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:21.288 02:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:21.288 02:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2998696' 00:23:21.288 killing process with pid 2998696 00:23:21.288 02:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2998696 00:23:21.288 Received shutdown signal, test time was about 10.000000 seconds 00:23:21.288 00:23:21.288 Latency(us) 00:23:21.288 [2024-11-17T01:43:29.748Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:21.288 [2024-11-17T01:43:29.748Z] =================================================================================================================== 00:23:21.288 [2024-11-17T01:43:29.748Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:21.288 02:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2998696 00:23:22.223 02:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:22.223 02:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:22.223 02:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:22.223 02:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:22.223 02:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:22.223 02:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.iVdewDSrbk 00:23:22.223 02:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 
00:23:22.223 02:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.iVdewDSrbk 00:23:22.223 02:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:22.223 02:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:22.223 02:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:22.223 02:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:22.223 02:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.iVdewDSrbk 00:23:22.223 02:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:22.223 02:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:22.223 02:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:23:22.223 02:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.iVdewDSrbk 00:23:22.223 02:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:22.223 02:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2998980 00:23:22.223 02:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:22.223 02:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:22.223 02:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2998980 
/var/tmp/bdevperf.sock 00:23:22.223 02:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2998980 ']' 00:23:22.223 02:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:22.223 02:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:22.223 02:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:22.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:22.223 02:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:22.223 02:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:22.223 [2024-11-17 02:43:30.458441] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:23:22.223 [2024-11-17 02:43:30.458594] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2998980 ] 00:23:22.223 [2024-11-17 02:43:30.591596] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:22.481 [2024-11-17 02:43:30.714733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:23.049 02:43:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:23.049 02:43:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:23.049 02:43:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.iVdewDSrbk 00:23:23.307 02:43:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:23:23.565 [2024-11-17 02:43:31.973867] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:23.565 [2024-11-17 02:43:31.987457] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:23.565 [2024-11-17 02:43:31.987511] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:23.565 [2024-11-17 02:43:31.987582] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:23:23.565 [2024-11-17 02:43:31.988025] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (107): Transport endpoint is not connected 00:23:23.565 [2024-11-17 02:43:31.989013] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:23:23.565 [2024-11-17 02:43:31.990002] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:23:23.565 [2024-11-17 02:43:31.990031] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:23.565 [2024-11-17 02:43:31.990074] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:23.565 [2024-11-17 02:43:31.990143] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:23:23.565 request: 00:23:23.565 { 00:23:23.565 "name": "TLSTEST", 00:23:23.565 "trtype": "tcp", 00:23:23.565 "traddr": "10.0.0.2", 00:23:23.565 "adrfam": "ipv4", 00:23:23.565 "trsvcid": "4420", 00:23:23.565 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:23.566 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:23.566 "prchk_reftag": false, 00:23:23.566 "prchk_guard": false, 00:23:23.566 "hdgst": false, 00:23:23.566 "ddgst": false, 00:23:23.566 "psk": "key0", 00:23:23.566 "allow_unrecognized_csi": false, 00:23:23.566 "method": "bdev_nvme_attach_controller", 00:23:23.566 "req_id": 1 00:23:23.566 } 00:23:23.566 Got JSON-RPC error response 00:23:23.566 response: 00:23:23.566 { 00:23:23.566 "code": -5, 00:23:23.566 "message": "Input/output error" 00:23:23.566 } 00:23:23.566 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2998980 00:23:23.566 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2998980 ']' 00:23:23.566 
02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2998980 00:23:23.566 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:23.566 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:23.566 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2998980 00:23:23.824 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:23.824 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:23.824 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2998980' 00:23:23.824 killing process with pid 2998980 00:23:23.824 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2998980 00:23:23.824 Received shutdown signal, test time was about 10.000000 seconds 00:23:23.824 00:23:23.824 Latency(us) 00:23:23.824 [2024-11-17T01:43:32.284Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:23.824 [2024-11-17T01:43:32.284Z] =================================================================================================================== 00:23:23.824 [2024-11-17T01:43:32.284Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:23.824 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2998980 00:23:24.759 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:24.759 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:24.759 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:24.759 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:24.759 
02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:24.760 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.iVdewDSrbk 00:23:24.760 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:24.760 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.iVdewDSrbk 00:23:24.760 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:24.760 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:24.760 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:24.760 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:24.760 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.iVdewDSrbk 00:23:24.760 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:24.760 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:23:24.760 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:24.760 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.iVdewDSrbk 00:23:24.760 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:24.760 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2999253 00:23:24.760 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:24.760 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:24.760 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2999253 /var/tmp/bdevperf.sock 00:23:24.760 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2999253 ']' 00:23:24.760 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:24.760 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:24.760 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:24.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:24.760 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:24.760 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:24.760 [2024-11-17 02:43:32.963385] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:23:24.760 [2024-11-17 02:43:32.963530] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2999253 ] 00:23:24.760 [2024-11-17 02:43:33.094779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:24.760 [2024-11-17 02:43:33.214895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:25.694 02:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:25.694 02:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:25.694 02:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.iVdewDSrbk 00:23:25.952 02:43:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:26.210 [2024-11-17 02:43:34.504900] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:26.210 [2024-11-17 02:43:34.514606] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:26.210 [2024-11-17 02:43:34.514644] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:26.210 [2024-11-17 02:43:34.514731] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:23:26.210 [2024-11-17 02:43:34.514772] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (107): Transport endpoint is not connected 00:23:26.210 [2024-11-17 02:43:34.515744] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:23:26.210 [2024-11-17 02:43:34.516745] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:23:26.210 [2024-11-17 02:43:34.516772] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:26.210 [2024-11-17 02:43:34.516813] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:23:26.210 [2024-11-17 02:43:34.516841] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:23:26.210 request: 00:23:26.210 { 00:23:26.210 "name": "TLSTEST", 00:23:26.210 "trtype": "tcp", 00:23:26.210 "traddr": "10.0.0.2", 00:23:26.210 "adrfam": "ipv4", 00:23:26.210 "trsvcid": "4420", 00:23:26.210 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:26.210 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:26.210 "prchk_reftag": false, 00:23:26.210 "prchk_guard": false, 00:23:26.210 "hdgst": false, 00:23:26.210 "ddgst": false, 00:23:26.210 "psk": "key0", 00:23:26.211 "allow_unrecognized_csi": false, 00:23:26.211 "method": "bdev_nvme_attach_controller", 00:23:26.211 "req_id": 1 00:23:26.211 } 00:23:26.211 Got JSON-RPC error response 00:23:26.211 response: 00:23:26.211 { 00:23:26.211 "code": -5, 00:23:26.211 "message": "Input/output error" 00:23:26.211 } 00:23:26.211 02:43:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2999253 00:23:26.211 02:43:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2999253 ']' 00:23:26.211 
02:43:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2999253 00:23:26.211 02:43:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:26.211 02:43:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:26.211 02:43:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2999253 00:23:26.211 02:43:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:26.211 02:43:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:26.211 02:43:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2999253' 00:23:26.211 killing process with pid 2999253 00:23:26.211 02:43:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2999253 00:23:26.211 Received shutdown signal, test time was about 10.000000 seconds 00:23:26.211 00:23:26.211 Latency(us) 00:23:26.211 [2024-11-17T01:43:34.671Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:26.211 [2024-11-17T01:43:34.671Z] =================================================================================================================== 00:23:26.211 [2024-11-17T01:43:34.671Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:26.211 02:43:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2999253 00:23:27.145 02:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:27.145 02:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:27.145 02:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:27.145 02:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:27.145 
02:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:27.145 02:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:27.145 02:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:27.145 02:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:27.145 02:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:27.145 02:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:27.146 02:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:27.146 02:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:27.146 02:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:27.146 02:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:27.146 02:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:27.146 02:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:27.146 02:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:23:27.146 02:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:27.146 02:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2999532 00:23:27.146 02:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:27.146 02:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:27.146 02:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2999532 /var/tmp/bdevperf.sock 00:23:27.146 02:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2999532 ']' 00:23:27.146 02:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:27.146 02:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:27.146 02:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:27.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:27.146 02:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:27.146 02:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:27.146 [2024-11-17 02:43:35.454668] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:23:27.146 [2024-11-17 02:43:35.454792] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2999532 ] 00:23:27.146 [2024-11-17 02:43:35.590335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:27.404 [2024-11-17 02:43:35.709166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:28.338 02:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:28.338 02:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:28.338 02:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:23:28.338 [2024-11-17 02:43:36.736262] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:23:28.338 [2024-11-17 02:43:36.736327] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:28.338 request: 00:23:28.338 { 00:23:28.338 "name": "key0", 00:23:28.338 "path": "", 00:23:28.338 "method": "keyring_file_add_key", 00:23:28.338 "req_id": 1 00:23:28.338 } 00:23:28.338 Got JSON-RPC error response 00:23:28.338 response: 00:23:28.338 { 00:23:28.339 "code": -1, 00:23:28.339 "message": "Operation not permitted" 00:23:28.339 } 00:23:28.339 02:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:28.596 [2024-11-17 02:43:37.037221] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
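[editor's note] The keyring_file_add_key attempt above passes an empty path, and keyring_file_check_path rejects it up front ("Non-absolute paths are not allowed") before any key material is read. A minimal Python sketch of an equivalent validation (`check_key_path` is an illustrative name, not SPDK's API):

```python
import os

def check_key_path(path: str) -> None:
    # Reject empty or relative paths before reading any key material,
    # mirroring the "Non-absolute paths are not allowed" check in the log.
    if not path or not os.path.isabs(path):
        raise ValueError("Non-absolute paths are not allowed: %r" % path)

check_key_path("/tmp/tmp.B8nRj0KC6p")  # absolute path: accepted
```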
00:23:28.597 [2024-11-17 02:43:37.037290] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:28.597 request: 00:23:28.597 { 00:23:28.597 "name": "TLSTEST", 00:23:28.597 "trtype": "tcp", 00:23:28.597 "traddr": "10.0.0.2", 00:23:28.597 "adrfam": "ipv4", 00:23:28.597 "trsvcid": "4420", 00:23:28.597 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:28.597 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:28.597 "prchk_reftag": false, 00:23:28.597 "prchk_guard": false, 00:23:28.597 "hdgst": false, 00:23:28.597 "ddgst": false, 00:23:28.597 "psk": "key0", 00:23:28.597 "allow_unrecognized_csi": false, 00:23:28.597 "method": "bdev_nvme_attach_controller", 00:23:28.597 "req_id": 1 00:23:28.597 } 00:23:28.597 Got JSON-RPC error response 00:23:28.597 response: 00:23:28.597 { 00:23:28.597 "code": -126, 00:23:28.597 "message": "Required key not available" 00:23:28.597 } 00:23:28.855 02:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2999532 00:23:28.855 02:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2999532 ']' 00:23:28.855 02:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2999532 00:23:28.855 02:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:28.855 02:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:28.855 02:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2999532 00:23:28.855 02:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:28.855 02:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:28.855 02:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2999532' 00:23:28.855 killing process with pid 2999532 
00:23:28.855 02:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2999532 00:23:28.855 Received shutdown signal, test time was about 10.000000 seconds 00:23:28.855 00:23:28.855 Latency(us) 00:23:28.855 [2024-11-17T01:43:37.315Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:28.855 [2024-11-17T01:43:37.315Z] =================================================================================================================== 00:23:28.855 [2024-11-17T01:43:37.315Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:28.855 02:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2999532 00:23:29.789 02:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:29.789 02:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:29.789 02:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:29.789 02:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:29.789 02:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:29.790 02:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 2995110 00:23:29.790 02:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2995110 ']' 00:23:29.790 02:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2995110 00:23:29.790 02:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:29.790 02:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:29.790 02:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2995110 00:23:29.790 02:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- 
# process_name=reactor_1 00:23:29.790 02:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:29.790 02:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2995110' 00:23:29.790 killing process with pid 2995110 00:23:29.790 02:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2995110 00:23:29.790 02:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2995110 00:23:31.166 02:43:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:23:31.166 02:43:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:23:31.166 02:43:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:23:31.166 02:43:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:31.166 02:43:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:23:31.166 02:43:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:23:31.166 02:43:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:23:31.166 02:43:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:31.166 02:43:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:23:31.166 02:43:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.B8nRj0KC6p 00:23:31.166 02:43:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:31.166 02:43:39 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.B8nRj0KC6p 00:23:31.166 02:43:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:23:31.166 02:43:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:31.166 02:43:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:31.166 02:43:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:31.166 02:43:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3000069 00:23:31.166 02:43:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:31.166 02:43:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3000069 00:23:31.166 02:43:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3000069 ']' 00:23:31.166 02:43:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:31.166 02:43:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:31.166 02:43:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:31.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:31.166 02:43:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:31.166 02:43:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:31.166 [2024-11-17 02:43:39.354186] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:23:31.166 [2024-11-17 02:43:39.354322] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:31.166 [2024-11-17 02:43:39.506205] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:31.424 [2024-11-17 02:43:39.643235] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:31.424 [2024-11-17 02:43:39.643315] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:31.424 [2024-11-17 02:43:39.643341] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:31.424 [2024-11-17 02:43:39.643365] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:31.424 [2024-11-17 02:43:39.643384] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
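[editor's note] The format_interchange_psk step above wraps the configured hex key into the NVMe/TCP TLS PSK interchange format: the `NVMeTLSkey-1` prefix, a two-digit hash identifier (02 here, matching the generated key_long), then base64 of the key characters with a CRC-32 appended, terminated by a colon. A sketch under those assumptions (the little-endian CRC byte order is my reading of the interchange format, not taken from this log; treat it as illustrative):

```python
import base64
import struct
import zlib

def format_interchange_psk(key: str, digest: int) -> str:
    # Wrap the configured key in the TLS PSK interchange format:
    #   NVMeTLSkey-1:<hh>:<base64(key bytes + CRC-32)>:
    # Assumption: the CRC-32 is appended in little-endian byte order.
    payload = key.encode() + struct.pack("<I", zlib.crc32(key.encode()))
    return "NVMeTLSkey-1:{:02x}:{}:".format(
        digest, base64.b64encode(payload).decode())

psk = format_interchange_psk(
    "00112233445566778899aabbccddeeff0011223344556677", 2)
print(psk)
```

The trailing CRC lets a consumer detect a corrupted key string, which is why the base64 blob decodes to the 48 key characters plus 4 extra bytes.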
00:23:31.424 [2024-11-17 02:43:39.644997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:31.991 02:43:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:31.991 02:43:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:31.991 02:43:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:31.991 02:43:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:31.991 02:43:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:31.991 02:43:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:31.991 02:43:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.B8nRj0KC6p 00:23:31.991 02:43:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.B8nRj0KC6p 00:23:31.991 02:43:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:32.249 [2024-11-17 02:43:40.571880] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:32.249 02:43:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:32.507 02:43:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:32.764 [2024-11-17 02:43:41.089270] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:32.765 [2024-11-17 02:43:41.089639] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:23:32.765 02:43:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:33.023 malloc0 00:23:33.023 02:43:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:33.281 02:43:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.B8nRj0KC6p 00:23:33.538 02:43:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:33.796 02:43:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.B8nRj0KC6p 00:23:33.796 02:43:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:33.796 02:43:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:33.796 02:43:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:33.796 02:43:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.B8nRj0KC6p 00:23:33.796 02:43:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:33.796 02:43:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3000370 00:23:33.796 02:43:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:33.796 02:43:42 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:33.796 02:43:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3000370 /var/tmp/bdevperf.sock 00:23:33.796 02:43:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3000370 ']' 00:23:33.796 02:43:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:33.796 02:43:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:33.796 02:43:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:33.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:33.796 02:43:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:33.796 02:43:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:34.055 [2024-11-17 02:43:42.282260] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
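[editor's note] The `waitforlisten 3000370 /var/tmp/bdevperf.sock` above blocks until the freshly forked bdevperf answers on its JSON-RPC socket. A rough Python equivalent of that retry loop (helper name is illustrative; the real helper is a bash function in autotest_common.sh with max_retries=100):

```python
import socket
import time

def wait_for_unix_socket(path: str, max_retries: int = 100,
                         delay: float = 0.05) -> bool:
    # Poll a UNIX domain socket until something accepts connections
    # on it, or give up after max_retries attempts.
    for _ in range(max_retries):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            sock.connect(path)
            return True
        except OSError:
            time.sleep(delay)
        finally:
            sock.close()
    return False
```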
00:23:34.055 [2024-11-17 02:43:42.282411] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3000370 ] 00:23:34.055 [2024-11-17 02:43:42.416871] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:34.314 [2024-11-17 02:43:42.535994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:34.880 02:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:34.880 02:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:34.880 02:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.B8nRj0KC6p 00:23:35.138 02:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:35.396 [2024-11-17 02:43:43.746780] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:35.396 TLSTESTn1 00:23:35.396 02:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:35.654 Running I/O for 10 seconds... 
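[editor's note] bdevperf was started with `-q 128 -o 4096 -w verify -t 10`, so every completed I/O is 4096 bytes and the MiB/s column is just IOPS scaled by the I/O size: MiB/s = IOPS x 4096 / 2^20. A quick check against the ~2749 IOPS / 10.74 MiB/s totals reported below:

```python
def iops_to_mibps(iops: float, io_size_bytes: int = 4096) -> float:
    # MiB/s for a fixed I/O size: bytes per second divided by 2^20.
    return iops * io_size_bytes / (1 << 20)

print(round(iops_to_mibps(2749.49), 2))  # -> 10.74, as in the summary line
```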
00:23:37.521 2694.00 IOPS, 10.52 MiB/s [2024-11-17T01:43:47.356Z] 2726.50 IOPS, 10.65 MiB/s [2024-11-17T01:43:48.342Z] 2732.67 IOPS, 10.67 MiB/s [2024-11-17T01:43:49.275Z] 2734.75 IOPS, 10.68 MiB/s [2024-11-17T01:43:50.206Z] 2736.00 IOPS, 10.69 MiB/s [2024-11-17T01:43:51.140Z] 2743.83 IOPS, 10.72 MiB/s [2024-11-17T01:43:52.073Z] 2743.29 IOPS, 10.72 MiB/s [2024-11-17T01:43:53.007Z] 2742.75 IOPS, 10.71 MiB/s [2024-11-17T01:43:54.381Z] 2742.89 IOPS, 10.71 MiB/s [2024-11-17T01:43:54.381Z] 2744.00 IOPS, 10.72 MiB/s 00:23:45.921 Latency(us) 00:23:45.921 [2024-11-17T01:43:54.381Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:45.921 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:45.921 Verification LBA range: start 0x0 length 0x2000 00:23:45.921 TLSTESTn1 : 10.03 2749.49 10.74 0.00 0.00 46470.49 7864.32 37282.70 00:23:45.921 [2024-11-17T01:43:54.381Z] =================================================================================================================== 00:23:45.921 [2024-11-17T01:43:54.381Z] Total : 2749.49 10.74 0.00 0.00 46470.49 7864.32 37282.70 00:23:45.921 { 00:23:45.921 "results": [ 00:23:45.921 { 00:23:45.921 "job": "TLSTESTn1", 00:23:45.921 "core_mask": "0x4", 00:23:45.921 "workload": "verify", 00:23:45.921 "status": "finished", 00:23:45.921 "verify_range": { 00:23:45.921 "start": 0, 00:23:45.921 "length": 8192 00:23:45.921 }, 00:23:45.921 "queue_depth": 128, 00:23:45.921 "io_size": 4096, 00:23:45.921 "runtime": 10.026596, 00:23:45.921 "iops": 2749.487463143025, 00:23:45.921 "mibps": 10.74018540290244, 00:23:45.921 "io_failed": 0, 00:23:45.921 "io_timeout": 0, 00:23:45.921 "avg_latency_us": 46470.485157240815, 00:23:45.921 "min_latency_us": 7864.32, 00:23:45.921 "max_latency_us": 37282.70222222222 00:23:45.921 } 00:23:45.921 ], 00:23:45.921 "core_count": 1 00:23:45.921 } 00:23:45.921 02:43:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT 
SIGTERM EXIT 00:23:45.921 02:43:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3000370 00:23:45.921 02:43:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3000370 ']' 00:23:45.921 02:43:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3000370 00:23:45.921 02:43:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:45.921 02:43:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:45.921 02:43:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3000370 00:23:45.921 02:43:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:45.921 02:43:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:45.921 02:43:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3000370' 00:23:45.921 killing process with pid 3000370 00:23:45.921 02:43:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3000370 00:23:45.921 Received shutdown signal, test time was about 10.000000 seconds 00:23:45.921 00:23:45.921 Latency(us) 00:23:45.921 [2024-11-17T01:43:54.381Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:45.921 [2024-11-17T01:43:54.381Z] =================================================================================================================== 00:23:45.921 [2024-11-17T01:43:54.381Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:45.921 02:43:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3000370 00:23:46.488 02:43:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.B8nRj0KC6p 00:23:46.488 02:43:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT 
run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.B8nRj0KC6p 00:23:46.488 02:43:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:46.489 02:43:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.B8nRj0KC6p 00:23:46.489 02:43:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:46.489 02:43:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:46.489 02:43:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:46.489 02:43:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:46.489 02:43:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.B8nRj0KC6p 00:23:46.489 02:43:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:46.489 02:43:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:46.489 02:43:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:46.489 02:43:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.B8nRj0KC6p 00:23:46.489 02:43:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:46.489 02:43:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3001848 00:23:46.489 02:43:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:46.489 02:43:54 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:46.489 02:43:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3001848 /var/tmp/bdevperf.sock 00:23:46.489 02:43:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3001848 ']' 00:23:46.489 02:43:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:46.489 02:43:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:46.489 02:43:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:46.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:46.489 02:43:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:46.489 02:43:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:46.747 [2024-11-17 02:43:54.961691] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:23:46.747 [2024-11-17 02:43:54.961820] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3001848 ] 00:23:46.747 [2024-11-17 02:43:55.097127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:47.006 [2024-11-17 02:43:55.216180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:47.571 02:43:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:47.571 02:43:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:47.571 02:43:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.B8nRj0KC6p 00:23:47.830 [2024-11-17 02:43:56.173124] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.B8nRj0KC6p': 0100666 00:23:47.830 [2024-11-17 02:43:56.173177] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:47.830 request: 00:23:47.830 { 00:23:47.830 "name": "key0", 00:23:47.830 "path": "/tmp/tmp.B8nRj0KC6p", 00:23:47.830 "method": "keyring_file_add_key", 00:23:47.830 "req_id": 1 00:23:47.830 } 00:23:47.830 Got JSON-RPC error response 00:23:47.830 response: 00:23:47.830 { 00:23:47.830 "code": -1, 00:23:47.830 "message": "Operation not permitted" 00:23:47.830 } 00:23:47.830 02:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:48.087 [2024-11-17 02:43:56.458044] bdev_nvme_rpc.c: 
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:48.087 [2024-11-17 02:43:56.458180] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:48.087 request: 00:23:48.087 { 00:23:48.087 "name": "TLSTEST", 00:23:48.087 "trtype": "tcp", 00:23:48.087 "traddr": "10.0.0.2", 00:23:48.087 "adrfam": "ipv4", 00:23:48.087 "trsvcid": "4420", 00:23:48.087 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:48.087 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:48.087 "prchk_reftag": false, 00:23:48.087 "prchk_guard": false, 00:23:48.087 "hdgst": false, 00:23:48.087 "ddgst": false, 00:23:48.087 "psk": "key0", 00:23:48.087 "allow_unrecognized_csi": false, 00:23:48.087 "method": "bdev_nvme_attach_controller", 00:23:48.087 "req_id": 1 00:23:48.087 } 00:23:48.087 Got JSON-RPC error response 00:23:48.087 response: 00:23:48.087 { 00:23:48.087 "code": -126, 00:23:48.087 "message": "Required key not available" 00:23:48.087 } 00:23:48.087 02:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3001848 00:23:48.087 02:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3001848 ']' 00:23:48.087 02:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3001848 00:23:48.087 02:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:48.087 02:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:48.087 02:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3001848 00:23:48.087 02:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:48.087 02:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:48.087 02:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 3001848' 00:23:48.087 killing process with pid 3001848 00:23:48.088 02:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3001848 00:23:48.088 Received shutdown signal, test time was about 10.000000 seconds 00:23:48.088 00:23:48.088 Latency(us) 00:23:48.088 [2024-11-17T01:43:56.548Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:48.088 [2024-11-17T01:43:56.548Z] =================================================================================================================== 00:23:48.088 [2024-11-17T01:43:56.548Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:48.088 02:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3001848 00:23:49.022 02:43:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:49.022 02:43:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:49.022 02:43:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:49.022 02:43:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:49.022 02:43:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:49.022 02:43:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 3000069 00:23:49.022 02:43:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3000069 ']' 00:23:49.022 02:43:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3000069 00:23:49.022 02:43:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:49.022 02:43:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:49.022 02:43:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3000069 00:23:49.022 
02:43:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:49.022 02:43:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:49.022 02:43:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3000069' 00:23:49.022 killing process with pid 3000069 00:23:49.022 02:43:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3000069 00:23:49.022 02:43:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3000069 00:23:50.401 02:43:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:23:50.401 02:43:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:50.401 02:43:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:50.401 02:43:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:50.401 02:43:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3002354 00:23:50.401 02:43:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:50.401 02:43:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3002354 00:23:50.401 02:43:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3002354 ']' 00:23:50.401 02:43:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:50.401 02:43:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:50.401 02:43:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:23:50.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:50.401 02:43:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:50.401 02:43:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:50.401 [2024-11-17 02:43:58.640964] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:23:50.401 [2024-11-17 02:43:58.641112] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:50.401 [2024-11-17 02:43:58.793618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:50.658 [2024-11-17 02:43:58.930640] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:50.658 [2024-11-17 02:43:58.930748] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:50.658 [2024-11-17 02:43:58.930774] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:50.658 [2024-11-17 02:43:58.930800] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:50.658 [2024-11-17 02:43:58.930820] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:50.658 [2024-11-17 02:43:58.932479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:51.237 02:43:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:51.237 02:43:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:51.237 02:43:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:51.237 02:43:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:51.237 02:43:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:51.237 02:43:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:51.237 02:43:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.B8nRj0KC6p 00:23:51.237 02:43:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:51.237 02:43:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.B8nRj0KC6p 00:23:51.237 02:43:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:23:51.237 02:43:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:51.237 02:43:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:23:51.237 02:43:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:51.237 02:43:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.B8nRj0KC6p 00:23:51.237 02:43:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.B8nRj0KC6p 00:23:51.237 02:43:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:51.496 [2024-11-17 02:43:59.876873] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:51.496 02:43:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:51.754 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:52.012 [2024-11-17 02:44:00.418372] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:52.012 [2024-11-17 02:44:00.418761] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:52.012 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:52.270 malloc0 00:23:52.528 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:52.787 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.B8nRj0KC6p 00:23:52.787 [2024-11-17 02:44:01.238247] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.B8nRj0KC6p': 0100666 00:23:52.787 [2024-11-17 02:44:01.238306] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:52.787 request: 00:23:52.787 { 00:23:52.787 "name": "key0", 00:23:52.787 "path": "/tmp/tmp.B8nRj0KC6p", 00:23:52.787 "method": "keyring_file_add_key", 00:23:52.787 "req_id": 1 
00:23:52.787 } 00:23:52.787 Got JSON-RPC error response 00:23:52.787 response: 00:23:52.787 { 00:23:52.787 "code": -1, 00:23:52.787 "message": "Operation not permitted" 00:23:52.787 } 00:23:53.046 02:44:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:53.046 [2024-11-17 02:44:01.502978] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:23:53.046 [2024-11-17 02:44:01.503075] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:23:53.304 request: 00:23:53.304 { 00:23:53.304 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:53.304 "host": "nqn.2016-06.io.spdk:host1", 00:23:53.304 "psk": "key0", 00:23:53.304 "method": "nvmf_subsystem_add_host", 00:23:53.304 "req_id": 1 00:23:53.304 } 00:23:53.304 Got JSON-RPC error response 00:23:53.304 response: 00:23:53.304 { 00:23:53.304 "code": -32603, 00:23:53.304 "message": "Internal error" 00:23:53.304 } 00:23:53.304 02:44:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:53.304 02:44:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:53.304 02:44:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:53.304 02:44:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:53.304 02:44:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 3002354 00:23:53.304 02:44:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3002354 ']' 00:23:53.304 02:44:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3002354 00:23:53.304 02:44:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:53.304 02:44:01 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:53.304 02:44:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3002354 00:23:53.304 02:44:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:53.304 02:44:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:53.304 02:44:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3002354' 00:23:53.304 killing process with pid 3002354 00:23:53.304 02:44:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3002354 00:23:53.304 02:44:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3002354 00:23:54.678 02:44:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.B8nRj0KC6p 00:23:54.678 02:44:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:23:54.678 02:44:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:54.678 02:44:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:54.678 02:44:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:54.678 02:44:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3002794 00:23:54.678 02:44:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:54.678 02:44:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3002794 00:23:54.678 02:44:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3002794 ']' 00:23:54.678 02:44:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:54.678 02:44:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:54.678 02:44:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:54.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:54.678 02:44:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:54.678 02:44:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:54.678 [2024-11-17 02:44:02.887653] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:23:54.678 [2024-11-17 02:44:02.887817] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:54.678 [2024-11-17 02:44:03.054664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:54.937 [2024-11-17 02:44:03.191105] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:54.937 [2024-11-17 02:44:03.191191] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:54.937 [2024-11-17 02:44:03.191217] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:54.937 [2024-11-17 02:44:03.191241] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:54.937 [2024-11-17 02:44:03.191260] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:54.937 [2024-11-17 02:44:03.192863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:55.504 02:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:55.504 02:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:55.504 02:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:55.504 02:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:55.504 02:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:55.504 02:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:55.504 02:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.B8nRj0KC6p 00:23:55.504 02:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.B8nRj0KC6p 00:23:55.504 02:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:55.761 [2024-11-17 02:44:04.132294] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:55.761 02:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:56.020 02:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:56.278 [2024-11-17 02:44:04.653823] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:56.278 [2024-11-17 02:44:04.654194] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:23:56.278 02:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:56.536 malloc0 00:23:56.536 02:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:56.795 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.B8nRj0KC6p 00:23:57.052 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:57.619 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=3003213 00:23:57.619 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:57.619 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:57.619 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 3003213 /var/tmp/bdevperf.sock 00:23:57.619 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3003213 ']' 00:23:57.619 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:57.619 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:57.619 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bdevperf.sock...' 00:23:57.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:57.619 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:57.619 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:57.619 [2024-11-17 02:44:05.853629] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:23:57.619 [2024-11-17 02:44:05.853758] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3003213 ] 00:23:57.619 [2024-11-17 02:44:05.984772] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:57.877 [2024-11-17 02:44:06.104643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:58.443 02:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:58.443 02:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:58.443 02:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.B8nRj0KC6p 00:23:58.700 02:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:58.958 [2024-11-17 02:44:07.307510] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:58.958 TLSTESTn1 00:23:58.958 02:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:23:59.524 02:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:23:59.524 "subsystems": [ 00:23:59.524 { 00:23:59.524 "subsystem": "keyring", 00:23:59.524 "config": [ 00:23:59.524 { 00:23:59.524 "method": "keyring_file_add_key", 00:23:59.524 "params": { 00:23:59.524 "name": "key0", 00:23:59.524 "path": "/tmp/tmp.B8nRj0KC6p" 00:23:59.524 } 00:23:59.524 } 00:23:59.524 ] 00:23:59.524 }, 00:23:59.524 { 00:23:59.524 "subsystem": "iobuf", 00:23:59.524 "config": [ 00:23:59.524 { 00:23:59.524 "method": "iobuf_set_options", 00:23:59.524 "params": { 00:23:59.524 "small_pool_count": 8192, 00:23:59.524 "large_pool_count": 1024, 00:23:59.524 "small_bufsize": 8192, 00:23:59.524 "large_bufsize": 135168, 00:23:59.524 "enable_numa": false 00:23:59.524 } 00:23:59.524 } 00:23:59.524 ] 00:23:59.524 }, 00:23:59.524 { 00:23:59.524 "subsystem": "sock", 00:23:59.524 "config": [ 00:23:59.524 { 00:23:59.524 "method": "sock_set_default_impl", 00:23:59.524 "params": { 00:23:59.524 "impl_name": "posix" 00:23:59.524 } 00:23:59.524 }, 00:23:59.524 { 00:23:59.524 "method": "sock_impl_set_options", 00:23:59.524 "params": { 00:23:59.524 "impl_name": "ssl", 00:23:59.524 "recv_buf_size": 4096, 00:23:59.524 "send_buf_size": 4096, 00:23:59.524 "enable_recv_pipe": true, 00:23:59.524 "enable_quickack": false, 00:23:59.524 "enable_placement_id": 0, 00:23:59.524 "enable_zerocopy_send_server": true, 00:23:59.524 "enable_zerocopy_send_client": false, 00:23:59.524 "zerocopy_threshold": 0, 00:23:59.524 "tls_version": 0, 00:23:59.524 "enable_ktls": false 00:23:59.524 } 00:23:59.524 }, 00:23:59.524 { 00:23:59.524 "method": "sock_impl_set_options", 00:23:59.524 "params": { 00:23:59.524 "impl_name": "posix", 00:23:59.524 "recv_buf_size": 2097152, 00:23:59.524 "send_buf_size": 2097152, 00:23:59.524 "enable_recv_pipe": true, 00:23:59.524 "enable_quickack": false, 00:23:59.524 "enable_placement_id": 0, 
00:23:59.524 "enable_zerocopy_send_server": true, 00:23:59.524 "enable_zerocopy_send_client": false, 00:23:59.524 "zerocopy_threshold": 0, 00:23:59.524 "tls_version": 0, 00:23:59.524 "enable_ktls": false 00:23:59.524 } 00:23:59.524 } 00:23:59.524 ] 00:23:59.524 }, 00:23:59.524 { 00:23:59.524 "subsystem": "vmd", 00:23:59.524 "config": [] 00:23:59.524 }, 00:23:59.524 { 00:23:59.524 "subsystem": "accel", 00:23:59.524 "config": [ 00:23:59.524 { 00:23:59.524 "method": "accel_set_options", 00:23:59.524 "params": { 00:23:59.524 "small_cache_size": 128, 00:23:59.524 "large_cache_size": 16, 00:23:59.524 "task_count": 2048, 00:23:59.524 "sequence_count": 2048, 00:23:59.524 "buf_count": 2048 00:23:59.524 } 00:23:59.524 } 00:23:59.524 ] 00:23:59.525 }, 00:23:59.525 { 00:23:59.525 "subsystem": "bdev", 00:23:59.525 "config": [ 00:23:59.525 { 00:23:59.525 "method": "bdev_set_options", 00:23:59.525 "params": { 00:23:59.525 "bdev_io_pool_size": 65535, 00:23:59.525 "bdev_io_cache_size": 256, 00:23:59.525 "bdev_auto_examine": true, 00:23:59.525 "iobuf_small_cache_size": 128, 00:23:59.525 "iobuf_large_cache_size": 16 00:23:59.525 } 00:23:59.525 }, 00:23:59.525 { 00:23:59.525 "method": "bdev_raid_set_options", 00:23:59.525 "params": { 00:23:59.525 "process_window_size_kb": 1024, 00:23:59.525 "process_max_bandwidth_mb_sec": 0 00:23:59.525 } 00:23:59.525 }, 00:23:59.525 { 00:23:59.525 "method": "bdev_iscsi_set_options", 00:23:59.525 "params": { 00:23:59.525 "timeout_sec": 30 00:23:59.525 } 00:23:59.525 }, 00:23:59.525 { 00:23:59.525 "method": "bdev_nvme_set_options", 00:23:59.525 "params": { 00:23:59.525 "action_on_timeout": "none", 00:23:59.525 "timeout_us": 0, 00:23:59.525 "timeout_admin_us": 0, 00:23:59.525 "keep_alive_timeout_ms": 10000, 00:23:59.525 "arbitration_burst": 0, 00:23:59.525 "low_priority_weight": 0, 00:23:59.525 "medium_priority_weight": 0, 00:23:59.525 "high_priority_weight": 0, 00:23:59.525 "nvme_adminq_poll_period_us": 10000, 00:23:59.525 "nvme_ioq_poll_period_us": 0, 
00:23:59.525 "io_queue_requests": 0, 00:23:59.525 "delay_cmd_submit": true, 00:23:59.525 "transport_retry_count": 4, 00:23:59.525 "bdev_retry_count": 3, 00:23:59.525 "transport_ack_timeout": 0, 00:23:59.525 "ctrlr_loss_timeout_sec": 0, 00:23:59.525 "reconnect_delay_sec": 0, 00:23:59.525 "fast_io_fail_timeout_sec": 0, 00:23:59.525 "disable_auto_failback": false, 00:23:59.525 "generate_uuids": false, 00:23:59.525 "transport_tos": 0, 00:23:59.525 "nvme_error_stat": false, 00:23:59.525 "rdma_srq_size": 0, 00:23:59.525 "io_path_stat": false, 00:23:59.525 "allow_accel_sequence": false, 00:23:59.525 "rdma_max_cq_size": 0, 00:23:59.525 "rdma_cm_event_timeout_ms": 0, 00:23:59.525 "dhchap_digests": [ 00:23:59.525 "sha256", 00:23:59.525 "sha384", 00:23:59.525 "sha512" 00:23:59.525 ], 00:23:59.525 "dhchap_dhgroups": [ 00:23:59.525 "null", 00:23:59.525 "ffdhe2048", 00:23:59.525 "ffdhe3072", 00:23:59.525 "ffdhe4096", 00:23:59.525 "ffdhe6144", 00:23:59.525 "ffdhe8192" 00:23:59.525 ] 00:23:59.525 } 00:23:59.525 }, 00:23:59.525 { 00:23:59.525 "method": "bdev_nvme_set_hotplug", 00:23:59.525 "params": { 00:23:59.525 "period_us": 100000, 00:23:59.525 "enable": false 00:23:59.525 } 00:23:59.525 }, 00:23:59.525 { 00:23:59.525 "method": "bdev_malloc_create", 00:23:59.525 "params": { 00:23:59.525 "name": "malloc0", 00:23:59.525 "num_blocks": 8192, 00:23:59.525 "block_size": 4096, 00:23:59.525 "physical_block_size": 4096, 00:23:59.525 "uuid": "f85aadbd-01c6-4394-ab9f-e7123823a55c", 00:23:59.525 "optimal_io_boundary": 0, 00:23:59.525 "md_size": 0, 00:23:59.525 "dif_type": 0, 00:23:59.525 "dif_is_head_of_md": false, 00:23:59.525 "dif_pi_format": 0 00:23:59.525 } 00:23:59.525 }, 00:23:59.525 { 00:23:59.525 "method": "bdev_wait_for_examine" 00:23:59.525 } 00:23:59.525 ] 00:23:59.525 }, 00:23:59.525 { 00:23:59.525 "subsystem": "nbd", 00:23:59.525 "config": [] 00:23:59.525 }, 00:23:59.525 { 00:23:59.525 "subsystem": "scheduler", 00:23:59.525 "config": [ 00:23:59.525 { 00:23:59.525 "method": 
"framework_set_scheduler", 00:23:59.525 "params": { 00:23:59.525 "name": "static" 00:23:59.525 } 00:23:59.525 } 00:23:59.525 ] 00:23:59.525 }, 00:23:59.525 { 00:23:59.525 "subsystem": "nvmf", 00:23:59.525 "config": [ 00:23:59.525 { 00:23:59.525 "method": "nvmf_set_config", 00:23:59.525 "params": { 00:23:59.525 "discovery_filter": "match_any", 00:23:59.525 "admin_cmd_passthru": { 00:23:59.525 "identify_ctrlr": false 00:23:59.525 }, 00:23:59.525 "dhchap_digests": [ 00:23:59.525 "sha256", 00:23:59.525 "sha384", 00:23:59.525 "sha512" 00:23:59.525 ], 00:23:59.525 "dhchap_dhgroups": [ 00:23:59.525 "null", 00:23:59.525 "ffdhe2048", 00:23:59.525 "ffdhe3072", 00:23:59.525 "ffdhe4096", 00:23:59.525 "ffdhe6144", 00:23:59.525 "ffdhe8192" 00:23:59.525 ] 00:23:59.525 } 00:23:59.525 }, 00:23:59.525 { 00:23:59.525 "method": "nvmf_set_max_subsystems", 00:23:59.525 "params": { 00:23:59.525 "max_subsystems": 1024 00:23:59.525 } 00:23:59.525 }, 00:23:59.525 { 00:23:59.525 "method": "nvmf_set_crdt", 00:23:59.525 "params": { 00:23:59.525 "crdt1": 0, 00:23:59.525 "crdt2": 0, 00:23:59.525 "crdt3": 0 00:23:59.525 } 00:23:59.525 }, 00:23:59.525 { 00:23:59.525 "method": "nvmf_create_transport", 00:23:59.525 "params": { 00:23:59.525 "trtype": "TCP", 00:23:59.525 "max_queue_depth": 128, 00:23:59.525 "max_io_qpairs_per_ctrlr": 127, 00:23:59.525 "in_capsule_data_size": 4096, 00:23:59.525 "max_io_size": 131072, 00:23:59.525 "io_unit_size": 131072, 00:23:59.525 "max_aq_depth": 128, 00:23:59.525 "num_shared_buffers": 511, 00:23:59.525 "buf_cache_size": 4294967295, 00:23:59.525 "dif_insert_or_strip": false, 00:23:59.525 "zcopy": false, 00:23:59.525 "c2h_success": false, 00:23:59.525 "sock_priority": 0, 00:23:59.525 "abort_timeout_sec": 1, 00:23:59.525 "ack_timeout": 0, 00:23:59.525 "data_wr_pool_size": 0 00:23:59.525 } 00:23:59.525 }, 00:23:59.525 { 00:23:59.525 "method": "nvmf_create_subsystem", 00:23:59.525 "params": { 00:23:59.525 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:59.525 
"allow_any_host": false, 00:23:59.525 "serial_number": "SPDK00000000000001", 00:23:59.525 "model_number": "SPDK bdev Controller", 00:23:59.525 "max_namespaces": 10, 00:23:59.525 "min_cntlid": 1, 00:23:59.525 "max_cntlid": 65519, 00:23:59.525 "ana_reporting": false 00:23:59.525 } 00:23:59.525 }, 00:23:59.525 { 00:23:59.525 "method": "nvmf_subsystem_add_host", 00:23:59.525 "params": { 00:23:59.525 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:59.525 "host": "nqn.2016-06.io.spdk:host1", 00:23:59.525 "psk": "key0" 00:23:59.525 } 00:23:59.525 }, 00:23:59.525 { 00:23:59.525 "method": "nvmf_subsystem_add_ns", 00:23:59.525 "params": { 00:23:59.525 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:59.525 "namespace": { 00:23:59.525 "nsid": 1, 00:23:59.525 "bdev_name": "malloc0", 00:23:59.525 "nguid": "F85AADBD01C64394AB9FE7123823A55C", 00:23:59.525 "uuid": "f85aadbd-01c6-4394-ab9f-e7123823a55c", 00:23:59.525 "no_auto_visible": false 00:23:59.525 } 00:23:59.525 } 00:23:59.525 }, 00:23:59.525 { 00:23:59.525 "method": "nvmf_subsystem_add_listener", 00:23:59.525 "params": { 00:23:59.525 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:59.525 "listen_address": { 00:23:59.525 "trtype": "TCP", 00:23:59.525 "adrfam": "IPv4", 00:23:59.525 "traddr": "10.0.0.2", 00:23:59.525 "trsvcid": "4420" 00:23:59.525 }, 00:23:59.525 "secure_channel": true 00:23:59.525 } 00:23:59.525 } 00:23:59.525 ] 00:23:59.525 } 00:23:59.525 ] 00:23:59.525 }' 00:23:59.525 02:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:59.784 02:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:23:59.784 "subsystems": [ 00:23:59.784 { 00:23:59.784 "subsystem": "keyring", 00:23:59.784 "config": [ 00:23:59.784 { 00:23:59.784 "method": "keyring_file_add_key", 00:23:59.784 "params": { 00:23:59.784 "name": "key0", 00:23:59.784 "path": "/tmp/tmp.B8nRj0KC6p" 00:23:59.784 } 
00:23:59.784 } 00:23:59.784 ] 00:23:59.784 }, 00:23:59.784 { 00:23:59.784 "subsystem": "iobuf", 00:23:59.784 "config": [ 00:23:59.784 { 00:23:59.784 "method": "iobuf_set_options", 00:23:59.784 "params": { 00:23:59.784 "small_pool_count": 8192, 00:23:59.784 "large_pool_count": 1024, 00:23:59.784 "small_bufsize": 8192, 00:23:59.784 "large_bufsize": 135168, 00:23:59.784 "enable_numa": false 00:23:59.784 } 00:23:59.784 } 00:23:59.784 ] 00:23:59.784 }, 00:23:59.784 { 00:23:59.784 "subsystem": "sock", 00:23:59.784 "config": [ 00:23:59.784 { 00:23:59.784 "method": "sock_set_default_impl", 00:23:59.784 "params": { 00:23:59.784 "impl_name": "posix" 00:23:59.784 } 00:23:59.784 }, 00:23:59.784 { 00:23:59.784 "method": "sock_impl_set_options", 00:23:59.784 "params": { 00:23:59.784 "impl_name": "ssl", 00:23:59.784 "recv_buf_size": 4096, 00:23:59.784 "send_buf_size": 4096, 00:23:59.784 "enable_recv_pipe": true, 00:23:59.784 "enable_quickack": false, 00:23:59.784 "enable_placement_id": 0, 00:23:59.784 "enable_zerocopy_send_server": true, 00:23:59.784 "enable_zerocopy_send_client": false, 00:23:59.784 "zerocopy_threshold": 0, 00:23:59.784 "tls_version": 0, 00:23:59.784 "enable_ktls": false 00:23:59.784 } 00:23:59.784 }, 00:23:59.784 { 00:23:59.784 "method": "sock_impl_set_options", 00:23:59.784 "params": { 00:23:59.784 "impl_name": "posix", 00:23:59.784 "recv_buf_size": 2097152, 00:23:59.784 "send_buf_size": 2097152, 00:23:59.784 "enable_recv_pipe": true, 00:23:59.784 "enable_quickack": false, 00:23:59.784 "enable_placement_id": 0, 00:23:59.784 "enable_zerocopy_send_server": true, 00:23:59.784 "enable_zerocopy_send_client": false, 00:23:59.784 "zerocopy_threshold": 0, 00:23:59.784 "tls_version": 0, 00:23:59.784 "enable_ktls": false 00:23:59.784 } 00:23:59.784 } 00:23:59.784 ] 00:23:59.784 }, 00:23:59.784 { 00:23:59.784 "subsystem": "vmd", 00:23:59.784 "config": [] 00:23:59.784 }, 00:23:59.784 { 00:23:59.784 "subsystem": "accel", 00:23:59.784 "config": [ 00:23:59.784 { 00:23:59.784 
"method": "accel_set_options", 00:23:59.784 "params": { 00:23:59.784 "small_cache_size": 128, 00:23:59.784 "large_cache_size": 16, 00:23:59.784 "task_count": 2048, 00:23:59.784 "sequence_count": 2048, 00:23:59.784 "buf_count": 2048 00:23:59.784 } 00:23:59.784 } 00:23:59.784 ] 00:23:59.784 }, 00:23:59.784 { 00:23:59.784 "subsystem": "bdev", 00:23:59.784 "config": [ 00:23:59.784 { 00:23:59.784 "method": "bdev_set_options", 00:23:59.784 "params": { 00:23:59.784 "bdev_io_pool_size": 65535, 00:23:59.784 "bdev_io_cache_size": 256, 00:23:59.784 "bdev_auto_examine": true, 00:23:59.784 "iobuf_small_cache_size": 128, 00:23:59.784 "iobuf_large_cache_size": 16 00:23:59.784 } 00:23:59.784 }, 00:23:59.784 { 00:23:59.784 "method": "bdev_raid_set_options", 00:23:59.784 "params": { 00:23:59.784 "process_window_size_kb": 1024, 00:23:59.784 "process_max_bandwidth_mb_sec": 0 00:23:59.784 } 00:23:59.784 }, 00:23:59.784 { 00:23:59.784 "method": "bdev_iscsi_set_options", 00:23:59.784 "params": { 00:23:59.784 "timeout_sec": 30 00:23:59.784 } 00:23:59.784 }, 00:23:59.784 { 00:23:59.784 "method": "bdev_nvme_set_options", 00:23:59.784 "params": { 00:23:59.784 "action_on_timeout": "none", 00:23:59.784 "timeout_us": 0, 00:23:59.784 "timeout_admin_us": 0, 00:23:59.784 "keep_alive_timeout_ms": 10000, 00:23:59.784 "arbitration_burst": 0, 00:23:59.784 "low_priority_weight": 0, 00:23:59.784 "medium_priority_weight": 0, 00:23:59.784 "high_priority_weight": 0, 00:23:59.784 "nvme_adminq_poll_period_us": 10000, 00:23:59.784 "nvme_ioq_poll_period_us": 0, 00:23:59.784 "io_queue_requests": 512, 00:23:59.784 "delay_cmd_submit": true, 00:23:59.784 "transport_retry_count": 4, 00:23:59.784 "bdev_retry_count": 3, 00:23:59.784 "transport_ack_timeout": 0, 00:23:59.784 "ctrlr_loss_timeout_sec": 0, 00:23:59.784 "reconnect_delay_sec": 0, 00:23:59.784 "fast_io_fail_timeout_sec": 0, 00:23:59.784 "disable_auto_failback": false, 00:23:59.784 "generate_uuids": false, 00:23:59.784 "transport_tos": 0, 00:23:59.784 
"nvme_error_stat": false, 00:23:59.784 "rdma_srq_size": 0, 00:23:59.784 "io_path_stat": false, 00:23:59.784 "allow_accel_sequence": false, 00:23:59.785 "rdma_max_cq_size": 0, 00:23:59.785 "rdma_cm_event_timeout_ms": 0, 00:23:59.785 "dhchap_digests": [ 00:23:59.785 "sha256", 00:23:59.785 "sha384", 00:23:59.785 "sha512" 00:23:59.785 ], 00:23:59.785 "dhchap_dhgroups": [ 00:23:59.785 "null", 00:23:59.785 "ffdhe2048", 00:23:59.785 "ffdhe3072", 00:23:59.785 "ffdhe4096", 00:23:59.785 "ffdhe6144", 00:23:59.785 "ffdhe8192" 00:23:59.785 ] 00:23:59.785 } 00:23:59.785 }, 00:23:59.785 { 00:23:59.785 "method": "bdev_nvme_attach_controller", 00:23:59.785 "params": { 00:23:59.785 "name": "TLSTEST", 00:23:59.785 "trtype": "TCP", 00:23:59.785 "adrfam": "IPv4", 00:23:59.785 "traddr": "10.0.0.2", 00:23:59.785 "trsvcid": "4420", 00:23:59.785 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:59.785 "prchk_reftag": false, 00:23:59.785 "prchk_guard": false, 00:23:59.785 "ctrlr_loss_timeout_sec": 0, 00:23:59.785 "reconnect_delay_sec": 0, 00:23:59.785 "fast_io_fail_timeout_sec": 0, 00:23:59.785 "psk": "key0", 00:23:59.785 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:59.785 "hdgst": false, 00:23:59.785 "ddgst": false, 00:23:59.785 "multipath": "multipath" 00:23:59.785 } 00:23:59.785 }, 00:23:59.785 { 00:23:59.785 "method": "bdev_nvme_set_hotplug", 00:23:59.785 "params": { 00:23:59.785 "period_us": 100000, 00:23:59.785 "enable": false 00:23:59.785 } 00:23:59.785 }, 00:23:59.785 { 00:23:59.785 "method": "bdev_wait_for_examine" 00:23:59.785 } 00:23:59.785 ] 00:23:59.785 }, 00:23:59.785 { 00:23:59.785 "subsystem": "nbd", 00:23:59.785 "config": [] 00:23:59.785 } 00:23:59.785 ] 00:23:59.785 }' 00:23:59.785 02:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 3003213 00:23:59.785 02:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3003213 ']' 00:23:59.785 02:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- 
# kill -0 3003213 00:23:59.785 02:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:59.785 02:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:59.785 02:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3003213 00:23:59.785 02:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:59.785 02:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:59.785 02:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3003213' 00:23:59.785 killing process with pid 3003213 00:23:59.785 02:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3003213 00:23:59.785 Received shutdown signal, test time was about 10.000000 seconds 00:23:59.785 00:23:59.785 Latency(us) 00:23:59.785 [2024-11-17T01:44:08.245Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:59.785 [2024-11-17T01:44:08.245Z] =================================================================================================================== 00:23:59.785 [2024-11-17T01:44:08.245Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:59.785 02:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3003213 00:24:00.719 02:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 3002794 00:24:00.719 02:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3002794 ']' 00:24:00.719 02:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3002794 00:24:00.719 02:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:00.719 02:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:00.719 02:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3002794 00:24:00.719 02:44:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:00.719 02:44:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:00.719 02:44:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3002794' 00:24:00.719 killing process with pid 3002794 00:24:00.719 02:44:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3002794 00:24:00.719 02:44:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3002794 00:24:02.092 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:24:02.092 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:02.092 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:24:02.092 "subsystems": [ 00:24:02.092 { 00:24:02.092 "subsystem": "keyring", 00:24:02.092 "config": [ 00:24:02.092 { 00:24:02.092 "method": "keyring_file_add_key", 00:24:02.092 "params": { 00:24:02.092 "name": "key0", 00:24:02.092 "path": "/tmp/tmp.B8nRj0KC6p" 00:24:02.092 } 00:24:02.092 } 00:24:02.092 ] 00:24:02.092 }, 00:24:02.092 { 00:24:02.092 "subsystem": "iobuf", 00:24:02.092 "config": [ 00:24:02.092 { 00:24:02.092 "method": "iobuf_set_options", 00:24:02.092 "params": { 00:24:02.092 "small_pool_count": 8192, 00:24:02.092 "large_pool_count": 1024, 00:24:02.092 "small_bufsize": 8192, 00:24:02.092 "large_bufsize": 135168, 00:24:02.092 "enable_numa": false 00:24:02.092 } 00:24:02.092 } 00:24:02.092 ] 00:24:02.092 }, 00:24:02.092 { 00:24:02.092 "subsystem": "sock", 00:24:02.092 "config": [ 00:24:02.092 { 00:24:02.092 "method": 
"sock_set_default_impl", 00:24:02.092 "params": { 00:24:02.092 "impl_name": "posix" 00:24:02.092 } 00:24:02.092 }, 00:24:02.092 { 00:24:02.092 "method": "sock_impl_set_options", 00:24:02.092 "params": { 00:24:02.092 "impl_name": "ssl", 00:24:02.092 "recv_buf_size": 4096, 00:24:02.092 "send_buf_size": 4096, 00:24:02.092 "enable_recv_pipe": true, 00:24:02.092 "enable_quickack": false, 00:24:02.092 "enable_placement_id": 0, 00:24:02.092 "enable_zerocopy_send_server": true, 00:24:02.092 "enable_zerocopy_send_client": false, 00:24:02.092 "zerocopy_threshold": 0, 00:24:02.092 "tls_version": 0, 00:24:02.092 "enable_ktls": false 00:24:02.092 } 00:24:02.092 }, 00:24:02.092 { 00:24:02.092 "method": "sock_impl_set_options", 00:24:02.092 "params": { 00:24:02.092 "impl_name": "posix", 00:24:02.092 "recv_buf_size": 2097152, 00:24:02.092 "send_buf_size": 2097152, 00:24:02.092 "enable_recv_pipe": true, 00:24:02.092 "enable_quickack": false, 00:24:02.092 "enable_placement_id": 0, 00:24:02.092 "enable_zerocopy_send_server": true, 00:24:02.092 "enable_zerocopy_send_client": false, 00:24:02.092 "zerocopy_threshold": 0, 00:24:02.092 "tls_version": 0, 00:24:02.092 "enable_ktls": false 00:24:02.092 } 00:24:02.092 } 00:24:02.092 ] 00:24:02.092 }, 00:24:02.092 { 00:24:02.092 "subsystem": "vmd", 00:24:02.092 "config": [] 00:24:02.092 }, 00:24:02.092 { 00:24:02.092 "subsystem": "accel", 00:24:02.092 "config": [ 00:24:02.092 { 00:24:02.092 "method": "accel_set_options", 00:24:02.092 "params": { 00:24:02.092 "small_cache_size": 128, 00:24:02.092 "large_cache_size": 16, 00:24:02.092 "task_count": 2048, 00:24:02.092 "sequence_count": 2048, 00:24:02.092 "buf_count": 2048 00:24:02.092 } 00:24:02.092 } 00:24:02.092 ] 00:24:02.092 }, 00:24:02.092 { 00:24:02.092 "subsystem": "bdev", 00:24:02.092 "config": [ 00:24:02.092 { 00:24:02.092 "method": "bdev_set_options", 00:24:02.092 "params": { 00:24:02.092 "bdev_io_pool_size": 65535, 00:24:02.092 "bdev_io_cache_size": 256, 00:24:02.092 
"bdev_auto_examine": true, 00:24:02.092 "iobuf_small_cache_size": 128, 00:24:02.092 "iobuf_large_cache_size": 16 00:24:02.092 } 00:24:02.092 }, 00:24:02.092 { 00:24:02.092 "method": "bdev_raid_set_options", 00:24:02.092 "params": { 00:24:02.092 "process_window_size_kb": 1024, 00:24:02.092 "process_max_bandwidth_mb_sec": 0 00:24:02.092 } 00:24:02.092 }, 00:24:02.092 { 00:24:02.092 "method": "bdev_iscsi_set_options", 00:24:02.092 "params": { 00:24:02.092 "timeout_sec": 30 00:24:02.092 } 00:24:02.092 }, 00:24:02.092 { 00:24:02.092 "method": "bdev_nvme_set_options", 00:24:02.092 "params": { 00:24:02.092 "action_on_timeout": "none", 00:24:02.092 "timeout_us": 0, 00:24:02.092 "timeout_admin_us": 0, 00:24:02.092 "keep_alive_timeout_ms": 10000, 00:24:02.092 "arbitration_burst": 0, 00:24:02.092 "low_priority_weight": 0, 00:24:02.092 "medium_priority_weight": 0, 00:24:02.092 "high_priority_weight": 0, 00:24:02.092 "nvme_adminq_poll_period_us": 10000, 00:24:02.092 "nvme_ioq_poll_period_us": 0, 00:24:02.092 "io_queue_requests": 0, 00:24:02.092 "delay_cmd_submit": true, 00:24:02.092 "transport_retry_count": 4, 00:24:02.092 "bdev_retry_count": 3, 00:24:02.092 "transport_ack_timeout": 0, 00:24:02.092 "ctrlr_loss_timeout_sec": 0, 00:24:02.092 "reconnect_delay_sec": 0, 00:24:02.092 "fast_io_fail_timeout_sec": 0, 00:24:02.092 "disable_auto_failback": false, 00:24:02.092 "generate_uuids": false, 00:24:02.092 "transport_tos": 0, 00:24:02.092 "nvme_error_stat": false, 00:24:02.092 "rdma_srq_size": 0, 00:24:02.092 "io_path_stat": false, 00:24:02.092 "allow_accel_sequence": false, 00:24:02.092 "rdma_max_cq_size": 0, 00:24:02.092 "rdma_cm_event_timeout_ms": 0, 00:24:02.092 "dhchap_digests": [ 00:24:02.092 "sha256", 00:24:02.092 "sha384", 00:24:02.092 "sha512" 00:24:02.092 ], 00:24:02.092 "dhchap_dhgroups": [ 00:24:02.092 "null", 00:24:02.092 "ffdhe2048", 00:24:02.092 "ffdhe3072", 00:24:02.092 "ffdhe4096", 00:24:02.092 "ffdhe6144", 00:24:02.092 "ffdhe8192" 00:24:02.092 ] 00:24:02.092 } 
00:24:02.092 }, 00:24:02.092 { 00:24:02.092 "method": "bdev_nvme_set_hotplug", 00:24:02.092 "params": { 00:24:02.092 "period_us": 100000, 00:24:02.092 "enable": false 00:24:02.092 } 00:24:02.092 }, 00:24:02.092 { 00:24:02.092 "method": "bdev_malloc_create", 00:24:02.092 "params": { 00:24:02.092 "name": "malloc0", 00:24:02.092 "num_blocks": 8192, 00:24:02.092 "block_size": 4096, 00:24:02.092 "physical_block_size": 4096, 00:24:02.092 "uuid": "f85aadbd-01c6-4394-ab9f-e7123823a55c", 00:24:02.092 "optimal_io_boundary": 0, 00:24:02.092 "md_size": 0, 00:24:02.092 "dif_type": 0, 00:24:02.092 "dif_is_head_of_md": false, 00:24:02.092 "dif_pi_format": 0 00:24:02.092 } 00:24:02.092 }, 00:24:02.092 { 00:24:02.092 "method": "bdev_wait_for_examine" 00:24:02.092 } 00:24:02.092 ] 00:24:02.092 }, 00:24:02.092 { 00:24:02.092 "subsystem": "nbd", 00:24:02.092 "config": [] 00:24:02.092 }, 00:24:02.092 { 00:24:02.092 "subsystem": "scheduler", 00:24:02.092 "config": [ 00:24:02.092 { 00:24:02.092 "method": "framework_set_scheduler", 00:24:02.092 "params": { 00:24:02.092 "name": "static" 00:24:02.092 } 00:24:02.092 } 00:24:02.092 ] 00:24:02.092 }, 00:24:02.092 { 00:24:02.092 "subsystem": "nvmf", 00:24:02.092 "config": [ 00:24:02.092 { 00:24:02.092 "method": "nvmf_set_config", 00:24:02.092 "params": { 00:24:02.092 "discovery_filter": "match_any", 00:24:02.092 "admin_cmd_passthru": { 00:24:02.092 "identify_ctrlr": false 00:24:02.092 }, 00:24:02.092 "dhchap_digests": [ 00:24:02.093 "sha256", 00:24:02.093 "sha384", 00:24:02.093 "sha512" 00:24:02.093 ], 00:24:02.093 "dhchap_dhgroups": [ 00:24:02.093 "null", 00:24:02.093 "ffdhe2048", 00:24:02.093 "ffdhe3072", 00:24:02.093 "ffdhe4096", 00:24:02.093 "ffdhe6144", 00:24:02.093 "ffdhe8192" 00:24:02.093 ] 00:24:02.093 } 00:24:02.093 }, 00:24:02.093 { 00:24:02.093 "method": "nvmf_set_max_subsystems", 00:24:02.093 "params": { 00:24:02.093 "max_subsystems": 1024 00:24:02.093 } 00:24:02.093 }, 00:24:02.093 { 00:24:02.093 "method": "nvmf_set_crdt", 
00:24:02.093 "params": { 00:24:02.093 "crdt1": 0, 00:24:02.093 "crdt2": 0, 00:24:02.093 "crdt3": 0 00:24:02.093 } 00:24:02.093 }, 00:24:02.093 { 00:24:02.093 "method": "nvmf_create_transport", 00:24:02.093 "params": { 00:24:02.093 "trtype": "TCP", 00:24:02.093 "max_queue_depth": 128, 00:24:02.093 "max_io_qpairs_per_ctrlr": 127, 00:24:02.093 "in_capsule_data_size": 4096, 00:24:02.093 "max_io_size": 131072, 00:24:02.093 "io_unit_size": 131072, 00:24:02.093 "max_aq_depth": 128, 00:24:02.093 "num_shared_buffers": 511, 00:24:02.093 "buf_cache_size": 4294967295, 00:24:02.093 "dif_insert_or_strip": false, 00:24:02.093 "zcopy": false, 00:24:02.093 "c2h_success": false, 00:24:02.093 "sock_priority": 0, 00:24:02.093 "abort_timeout_sec": 1, 00:24:02.093 "ack_timeout": 0, 00:24:02.093 "data_wr_pool_size": 0 00:24:02.093 } 00:24:02.093 }, 00:24:02.093 { 00:24:02.093 "method": "nvmf_create_subsystem", 00:24:02.093 "params": { 00:24:02.093 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:02.093 "allow_any_host": false, 00:24:02.093 "serial_number": "SPDK00000000000001", 00:24:02.093 "model_number": "SPDK bdev Controller", 00:24:02.093 "max_namespaces": 10, 00:24:02.093 "min_cntlid": 1, 00:24:02.093 "max_cntlid": 65519, 00:24:02.093 "ana_reporting": false 00:24:02.093 } 00:24:02.093 }, 00:24:02.093 { 00:24:02.093 "method": "nvmf_subsystem_add_host", 00:24:02.093 "params": { 00:24:02.093 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:02.093 "host": "nqn.2016-06.io.spdk:host1", 00:24:02.093 "psk": "key0" 00:24:02.093 } 00:24:02.093 }, 00:24:02.093 { 00:24:02.093 "method": "nvmf_subsystem_add_ns", 00:24:02.093 "params": { 00:24:02.093 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:02.093 "namespace": { 00:24:02.093 "nsid": 1, 00:24:02.093 "bdev_name": "malloc0", 00:24:02.093 "nguid": "F85AADBD01C64394AB9FE7123823A55C", 00:24:02.093 "uuid": "f85aadbd-01c6-4394-ab9f-e7123823a55c", 00:24:02.093 "no_auto_visible": false 00:24:02.093 } 00:24:02.093 } 00:24:02.093 }, 00:24:02.093 { 00:24:02.093 
"method": "nvmf_subsystem_add_listener", 00:24:02.093 "params": { 00:24:02.093 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:02.093 "listen_address": { 00:24:02.093 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:02.093 "trtype": "TCP", 00:24:02.093 "adrfam": "IPv4", 00:24:02.093 "traddr": "10.0.0.2", 00:24:02.093 "trsvcid": "4420" 00:24:02.093 }, 00:24:02.093 "secure_channel": true 00:24:02.093 } 00:24:02.093 } 00:24:02.093 ] 00:24:02.093 } 00:24:02.093 ] 00:24:02.093 }' 00:24:02.093 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:02.093 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3003758 00:24:02.093 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:24:02.093 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3003758 00:24:02.093 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3003758 ']' 00:24:02.093 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:02.093 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:02.093 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:02.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:02.093 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:02.093 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:02.093 [2024-11-17 02:44:10.337485] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:24:02.093 [2024-11-17 02:44:10.337611] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:02.093 [2024-11-17 02:44:10.485375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:02.351 [2024-11-17 02:44:10.620992] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:02.351 [2024-11-17 02:44:10.621068] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:02.351 [2024-11-17 02:44:10.621109] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:02.351 [2024-11-17 02:44:10.621137] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:02.351 [2024-11-17 02:44:10.621157] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:02.351 [2024-11-17 02:44:10.622846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:02.917 [2024-11-17 02:44:11.169499] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:02.917 [2024-11-17 02:44:11.201525] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:02.917 [2024-11-17 02:44:11.201895] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:02.917 02:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:02.917 02:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:02.917 02:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:02.917 02:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:02.917 02:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:02.917 02:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:02.917 02:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=3003908 00:24:02.917 02:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 3003908 /var/tmp/bdevperf.sock 00:24:02.917 02:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3003908 ']' 00:24:02.917 02:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:02.917 02:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:24:02.917 02:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:24:02.917 02:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:24:02.917 "subsystems": [ 00:24:02.917 { 00:24:02.917 "subsystem": "keyring", 00:24:02.917 "config": [ 00:24:02.917 { 00:24:02.917 "method": "keyring_file_add_key", 00:24:02.917 "params": { 00:24:02.917 "name": "key0", 00:24:02.917 "path": "/tmp/tmp.B8nRj0KC6p" 00:24:02.917 } 00:24:02.917 } 00:24:02.917 ] 00:24:02.917 }, 00:24:02.917 { 00:24:02.917 "subsystem": "iobuf", 00:24:02.917 "config": [ 00:24:02.917 { 00:24:02.917 "method": "iobuf_set_options", 00:24:02.917 "params": { 00:24:02.917 "small_pool_count": 8192, 00:24:02.918 "large_pool_count": 1024, 00:24:02.918 "small_bufsize": 8192, 00:24:02.918 "large_bufsize": 135168, 00:24:02.918 "enable_numa": false 00:24:02.918 } 00:24:02.918 } 00:24:02.918 ] 00:24:02.918 }, 00:24:02.918 { 00:24:02.918 "subsystem": "sock", 00:24:02.918 "config": [ 00:24:02.918 { 00:24:02.918 "method": "sock_set_default_impl", 00:24:02.918 "params": { 00:24:02.918 "impl_name": "posix" 00:24:02.918 } 00:24:02.918 }, 00:24:02.918 { 00:24:02.918 "method": "sock_impl_set_options", 00:24:02.918 "params": { 00:24:02.918 "impl_name": "ssl", 00:24:02.918 "recv_buf_size": 4096, 00:24:02.918 "send_buf_size": 4096, 00:24:02.918 "enable_recv_pipe": true, 00:24:02.918 "enable_quickack": false, 00:24:02.918 "enable_placement_id": 0, 00:24:02.918 "enable_zerocopy_send_server": true, 00:24:02.918 "enable_zerocopy_send_client": false, 00:24:02.918 "zerocopy_threshold": 0, 00:24:02.918 "tls_version": 0, 00:24:02.918 "enable_ktls": false 00:24:02.918 } 00:24:02.918 }, 00:24:02.918 { 00:24:02.918 "method": "sock_impl_set_options", 00:24:02.918 "params": { 00:24:02.918 "impl_name": "posix", 00:24:02.918 "recv_buf_size": 2097152, 00:24:02.918 "send_buf_size": 2097152, 00:24:02.918 "enable_recv_pipe": true, 00:24:02.918 "enable_quickack": false, 00:24:02.918 "enable_placement_id": 0, 00:24:02.918 "enable_zerocopy_send_server": true, 00:24:02.918 
"enable_zerocopy_send_client": false, 00:24:02.918 "zerocopy_threshold": 0, 00:24:02.918 "tls_version": 0, 00:24:02.918 "enable_ktls": false 00:24:02.918 } 00:24:02.918 } 00:24:02.918 ] 00:24:02.918 }, 00:24:02.918 { 00:24:02.918 "subsystem": "vmd", 00:24:02.918 "config": [] 00:24:02.918 }, 00:24:02.918 { 00:24:02.918 "subsystem": "accel", 00:24:02.918 "config": [ 00:24:02.918 { 00:24:02.918 "method": "accel_set_options", 00:24:02.918 "params": { 00:24:02.918 "small_cache_size": 128, 00:24:02.918 "large_cache_size": 16, 00:24:02.918 "task_count": 2048, 00:24:02.918 "sequence_count": 2048, 00:24:02.918 "buf_count": 2048 00:24:02.918 } 00:24:02.918 } 00:24:02.918 ] 00:24:02.918 }, 00:24:02.918 { 00:24:02.918 "subsystem": "bdev", 00:24:02.918 "config": [ 00:24:02.918 { 00:24:02.918 "method": "bdev_set_options", 00:24:02.918 "params": { 00:24:02.918 "bdev_io_pool_size": 65535, 00:24:02.918 "bdev_io_cache_size": 256, 00:24:02.918 "bdev_auto_examine": true, 00:24:02.918 "iobuf_small_cache_size": 128, 00:24:02.918 "iobuf_large_cache_size": 16 00:24:02.918 } 00:24:02.918 }, 00:24:02.918 { 00:24:02.918 "method": "bdev_raid_set_options", 00:24:02.918 "params": { 00:24:02.918 "process_window_size_kb": 1024, 00:24:02.918 "process_max_bandwidth_mb_sec": 0 00:24:02.918 } 00:24:02.918 }, 00:24:02.918 { 00:24:02.918 "method": "bdev_iscsi_set_options", 00:24:02.918 "params": { 00:24:02.918 "timeout_sec": 30 00:24:02.918 } 00:24:02.918 }, 00:24:02.918 { 00:24:02.918 "method": "bdev_nvme_set_options", 00:24:02.918 "params": { 00:24:02.918 "action_on_timeout": "none", 00:24:02.918 "timeout_us": 0, 00:24:02.918 "timeout_admin_us": 0, 00:24:02.918 "keep_alive_timeout_ms": 10000, 00:24:02.918 "arbitration_burst": 0, 00:24:02.918 "low_priority_weight": 0, 00:24:02.918 "medium_priority_weight": 0, 00:24:02.918 "high_priority_weight": 0, 00:24:02.918 "nvme_adminq_poll_period_us": 10000, 00:24:02.918 "nvme_ioq_poll_period_us": 0, 00:24:02.918 "io_queue_requests": 512, 00:24:02.918 
"delay_cmd_submit": true, 00:24:02.918 "transport_retry_count": 4, 00:24:02.918 "bdev_retry_count": 3, 00:24:02.918 "transport_ack_timeout": 0, 00:24:02.918 "ctrlr_loss_timeout_sec": 0, 00:24:02.918 "reconnect_delay_sec": 0, 00:24:02.918 "fast_io_fail_timeout_sec": 0, 00:24:02.918 "disable_auto_failback": false, 00:24:02.918 "generate_uuids": false, 00:24:02.918 "transport_tos": 0, 00:24:02.918 "nvme_error_stat": false, 00:24:02.918 "rdma_srq_size": 0, 00:24:02.918 "io_path_stat": false, 00:24:02.918 "allow_accel_sequence": false, 00:24:02.918 "rdma_max_cq_size": 0, 00:24:02.918 "rdma_cm_event_timeout_ms": 0, 00:24:02.918 "dhchap_digests": [ 00:24:02.918 "sha256", 00:24:02.918 "sha384", 00:24:02.918 "sha512" 00:24:02.918 ], 00:24:02.918 "dhchap_dhgroups": [ 00:24:02.918 "null", 00:24:02.918 "ffdhe2048", 00:24:02.918 "ffdhe3072", 00:24:02.918 "ffdhe4096", 00:24:02.918 "ffdhe6144", 00:24:02.918 "ffdhe8192" 00:24:02.918 ] 00:24:02.918 } 00:24:02.918 }, 00:24:02.918 { 00:24:02.918 "method": "bdev_nvme_attach_controller", 00:24:02.918 "params": { 00:24:02.918 "name": "TLSTEST", 00:24:02.918 "trtype": "TCP", 00:24:02.918 "adrfam": "IPv4", 00:24:02.918 "traddr": "10.0.0.2", 00:24:02.918 "trsvcid": "4420", 00:24:02.918 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:02.918 "prchk_reftag": false, 00:24:02.918 "prchk_guard": false, 00:24:02.918 "ctrlr_loss_timeout_sec": 0, 00:24:02.918 "reconnect_delay_sec": 0, 00:24:02.918 "fast_io_fail_timeout_sec": 0, 00:24:02.918 "psk": "key0", 00:24:02.918 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:02.918 "hdgst": false, 00:24:02.918 "ddgst": false, 00:24:02.918 "multipath": "multipath" 00:24:02.918 } 00:24:02.918 }, 00:24:02.918 { 00:24:02.918 "method": "bdev_nvme_set_hotplug", 00:24:02.918 "params": { 00:24:02.918 "period_us": 100000, 00:24:02.918 "enable": false 00:24:02.918 } 00:24:02.918 }, 00:24:02.918 { 00:24:02.918 "method": "bdev_wait_for_examine" 00:24:02.918 } 00:24:02.918 ] 00:24:02.918 }, 00:24:02.918 { 00:24:02.918 
"subsystem": "nbd", 00:24:02.918 "config": [] 00:24:02.918 } 00:24:02.918 ] 00:24:02.918 }' 00:24:02.918 02:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:02.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:02.918 02:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:02.918 02:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:03.177 [2024-11-17 02:44:11.436809] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:24:03.177 [2024-11-17 02:44:11.436939] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3003908 ] 00:24:03.177 [2024-11-17 02:44:11.569302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:03.434 [2024-11-17 02:44:11.690836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:03.692 [2024-11-17 02:44:12.096664] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:03.950 02:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:03.950 02:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:03.950 02:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:04.207 Running I/O for 10 seconds... 
00:24:06.075 2683.00 IOPS, 10.48 MiB/s [2024-11-17T01:44:15.552Z] 2688.00 IOPS, 10.50 MiB/s [2024-11-17T01:44:16.924Z] 2698.33 IOPS, 10.54 MiB/s [2024-11-17T01:44:17.858Z] 2699.75 IOPS, 10.55 MiB/s [2024-11-17T01:44:18.791Z] 2707.80 IOPS, 10.58 MiB/s [2024-11-17T01:44:19.723Z] 2706.67 IOPS, 10.57 MiB/s [2024-11-17T01:44:20.656Z] 2707.71 IOPS, 10.58 MiB/s [2024-11-17T01:44:21.590Z] 2709.25 IOPS, 10.58 MiB/s [2024-11-17T01:44:22.524Z] 2716.44 IOPS, 10.61 MiB/s [2024-11-17T01:44:22.782Z] 2719.10 IOPS, 10.62 MiB/s 00:24:14.322 Latency(us) 00:24:14.322 [2024-11-17T01:44:22.782Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:14.322 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:14.322 Verification LBA range: start 0x0 length 0x2000 00:24:14.322 TLSTESTn1 : 10.03 2722.91 10.64 0.00 0.00 46907.52 8932.31 38253.61 00:24:14.322 [2024-11-17T01:44:22.782Z] =================================================================================================================== 00:24:14.322 [2024-11-17T01:44:22.782Z] Total : 2722.91 10.64 0.00 0.00 46907.52 8932.31 38253.61 00:24:14.322 { 00:24:14.322 "results": [ 00:24:14.322 { 00:24:14.322 "job": "TLSTESTn1", 00:24:14.322 "core_mask": "0x4", 00:24:14.322 "workload": "verify", 00:24:14.322 "status": "finished", 00:24:14.322 "verify_range": { 00:24:14.322 "start": 0, 00:24:14.322 "length": 8192 00:24:14.322 }, 00:24:14.322 "queue_depth": 128, 00:24:14.322 "io_size": 4096, 00:24:14.322 "runtime": 10.033003, 00:24:14.322 "iops": 2722.913568350373, 00:24:14.322 "mibps": 10.636381126368645, 00:24:14.322 "io_failed": 0, 00:24:14.322 "io_timeout": 0, 00:24:14.322 "avg_latency_us": 46907.516854719215, 00:24:14.322 "min_latency_us": 8932.314074074075, 00:24:14.322 "max_latency_us": 38253.60592592593 00:24:14.322 } 00:24:14.322 ], 00:24:14.322 "core_count": 1 00:24:14.322 } 00:24:14.323 02:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:24:14.323 02:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 3003908 00:24:14.323 02:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3003908 ']' 00:24:14.323 02:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3003908 00:24:14.323 02:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:14.323 02:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:14.323 02:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3003908 00:24:14.323 02:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:14.323 02:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:14.323 02:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3003908' 00:24:14.323 killing process with pid 3003908 00:24:14.323 02:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3003908 00:24:14.323 Received shutdown signal, test time was about 10.000000 seconds 00:24:14.323 00:24:14.323 Latency(us) 00:24:14.323 [2024-11-17T01:44:22.783Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:14.323 [2024-11-17T01:44:22.783Z] =================================================================================================================== 00:24:14.323 [2024-11-17T01:44:22.783Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:14.323 02:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3003908 00:24:15.257 02:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 3003758 00:24:15.257 02:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # '[' -z 3003758 ']' 00:24:15.257 02:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3003758 00:24:15.257 02:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:15.257 02:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:15.257 02:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3003758 00:24:15.257 02:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:15.257 02:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:15.257 02:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3003758' 00:24:15.257 killing process with pid 3003758 00:24:15.257 02:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3003758 00:24:15.257 02:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3003758 00:24:16.631 02:44:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:24:16.631 02:44:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:16.631 02:44:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:16.631 02:44:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:16.632 02:44:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3005487 00:24:16.632 02:44:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:16.632 02:44:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3005487 00:24:16.632 
02:44:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3005487 ']' 00:24:16.632 02:44:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:16.632 02:44:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:16.632 02:44:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:16.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:16.632 02:44:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:16.632 02:44:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:16.632 [2024-11-17 02:44:24.880337] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:24:16.632 [2024-11-17 02:44:24.880481] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:16.632 [2024-11-17 02:44:25.032606] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:16.890 [2024-11-17 02:44:25.169172] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:16.890 [2024-11-17 02:44:25.169265] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:16.890 [2024-11-17 02:44:25.169290] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:16.890 [2024-11-17 02:44:25.169315] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:24:16.890 [2024-11-17 02:44:25.169334] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:16.890 [2024-11-17 02:44:25.170993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:17.457 02:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:17.457 02:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:17.457 02:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:17.457 02:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:17.457 02:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:17.457 02:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:17.457 02:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.B8nRj0KC6p 00:24:17.457 02:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.B8nRj0KC6p 00:24:17.457 02:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:17.715 [2024-11-17 02:44:26.119319] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:17.715 02:44:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:17.974 02:44:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:18.232 [2024-11-17 02:44:26.652820] tcp.c:1031:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:24:18.232 [2024-11-17 02:44:26.653211] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:18.232 02:44:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:18.798 malloc0 00:24:18.798 02:44:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:19.056 02:44:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.B8nRj0KC6p 00:24:19.314 02:44:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:19.572 02:44:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=3005796 00:24:19.572 02:44:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:19.572 02:44:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:19.572 02:44:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 3005796 /var/tmp/bdevperf.sock 00:24:19.572 02:44:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3005796 ']' 00:24:19.572 02:44:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:19.572 02:44:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:19.572 
02:44:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:19.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:19.572 02:44:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:19.572 02:44:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:19.572 [2024-11-17 02:44:27.974657] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:24:19.572 [2024-11-17 02:44:27.974805] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3005796 ] 00:24:19.830 [2024-11-17 02:44:28.118560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:19.830 [2024-11-17 02:44:28.246672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:20.765 02:44:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:20.765 02:44:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:20.765 02:44:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.B8nRj0KC6p 00:24:20.765 02:44:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:21.022 [2024-11-17 02:44:29.463636] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is 
considered experimental 00:24:21.280 nvme0n1 00:24:21.280 02:44:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:21.280 Running I/O for 1 seconds... 00:24:22.653 2506.00 IOPS, 9.79 MiB/s 00:24:22.653 Latency(us) 00:24:22.653 [2024-11-17T01:44:31.113Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:22.653 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:22.653 Verification LBA range: start 0x0 length 0x2000 00:24:22.653 nvme0n1 : 1.03 2556.56 9.99 0.00 0.00 49420.32 8932.31 50875.35 00:24:22.653 [2024-11-17T01:44:31.113Z] =================================================================================================================== 00:24:22.653 [2024-11-17T01:44:31.113Z] Total : 2556.56 9.99 0.00 0.00 49420.32 8932.31 50875.35 00:24:22.653 { 00:24:22.653 "results": [ 00:24:22.653 { 00:24:22.653 "job": "nvme0n1", 00:24:22.653 "core_mask": "0x2", 00:24:22.653 "workload": "verify", 00:24:22.653 "status": "finished", 00:24:22.653 "verify_range": { 00:24:22.653 "start": 0, 00:24:22.653 "length": 8192 00:24:22.653 }, 00:24:22.653 "queue_depth": 128, 00:24:22.653 "io_size": 4096, 00:24:22.653 "runtime": 1.03068, 00:24:22.653 "iops": 2556.56459812939, 00:24:22.653 "mibps": 9.98658046144293, 00:24:22.653 "io_failed": 0, 00:24:22.653 "io_timeout": 0, 00:24:22.653 "avg_latency_us": 49420.31577876169, 00:24:22.653 "min_latency_us": 8932.314074074075, 00:24:22.653 "max_latency_us": 50875.35407407407 00:24:22.653 } 00:24:22.653 ], 00:24:22.653 "core_count": 1 00:24:22.653 } 00:24:22.653 02:44:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 3005796 00:24:22.653 02:44:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3005796 ']' 00:24:22.653 02:44:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 3005796 00:24:22.653 02:44:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:22.653 02:44:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:22.653 02:44:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3005796 00:24:22.653 02:44:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:22.653 02:44:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:22.653 02:44:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3005796' 00:24:22.653 killing process with pid 3005796 00:24:22.653 02:44:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3005796 00:24:22.653 Received shutdown signal, test time was about 1.000000 seconds 00:24:22.653 00:24:22.653 Latency(us) 00:24:22.653 [2024-11-17T01:44:31.113Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:22.653 [2024-11-17T01:44:31.113Z] =================================================================================================================== 00:24:22.653 [2024-11-17T01:44:31.113Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:22.653 02:44:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3005796 00:24:23.219 02:44:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 3005487 00:24:23.219 02:44:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3005487 ']' 00:24:23.219 02:44:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3005487 00:24:23.219 02:44:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:23.219 02:44:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:23.220 02:44:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3005487 00:24:23.220 02:44:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:23.220 02:44:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:23.220 02:44:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3005487' 00:24:23.220 killing process with pid 3005487 00:24:23.220 02:44:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3005487 00:24:23.220 02:44:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3005487 00:24:24.594 02:44:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:24:24.594 02:44:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:24.594 02:44:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:24.594 02:44:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:24.594 02:44:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3006450 00:24:24.594 02:44:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:24.594 02:44:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3006450 00:24:24.594 02:44:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3006450 ']' 00:24:24.594 02:44:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:24.594 02:44:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:24:24.594 02:44:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:24.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:24.594 02:44:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:24.594 02:44:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:24.594 [2024-11-17 02:44:32.986968] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:24:24.594 [2024-11-17 02:44:32.987119] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:24.852 [2024-11-17 02:44:33.137687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:24.852 [2024-11-17 02:44:33.256681] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:24.852 [2024-11-17 02:44:33.256787] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:24.852 [2024-11-17 02:44:33.256809] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:24.852 [2024-11-17 02:44:33.256830] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:24.852 [2024-11-17 02:44:33.256847] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:24.852 [2024-11-17 02:44:33.258502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:25.787 02:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:25.787 02:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:25.787 02:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:25.787 02:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:25.787 02:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:25.787 02:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:25.787 02:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:24:25.787 02:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.787 02:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:25.787 [2024-11-17 02:44:33.973838] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:25.787 malloc0 00:24:25.787 [2024-11-17 02:44:34.036588] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:25.787 [2024-11-17 02:44:34.037017] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:25.787 02:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.787 02:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=3006608 00:24:25.787 02:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:25.787 02:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@258 -- # waitforlisten 3006608 /var/tmp/bdevperf.sock 00:24:25.787 02:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3006608 ']' 00:24:25.787 02:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:25.787 02:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:25.787 02:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:25.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:25.787 02:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:25.787 02:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:25.787 [2024-11-17 02:44:34.147219] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:24:25.787 [2024-11-17 02:44:34.147370] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3006608 ] 00:24:26.046 [2024-11-17 02:44:34.291251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:26.046 [2024-11-17 02:44:34.426848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:26.980 02:44:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:26.980 02:44:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:26.980 02:44:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.B8nRj0KC6p 00:24:26.980 02:44:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:27.238 [2024-11-17 02:44:35.653934] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:27.495 nvme0n1 00:24:27.495 02:44:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:27.495 Running I/O for 1 seconds... 
00:24:28.687 2603.00 IOPS, 10.17 MiB/s 00:24:28.687 Latency(us) 00:24:28.687 [2024-11-17T01:44:37.147Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:28.687 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:28.687 Verification LBA range: start 0x0 length 0x2000 00:24:28.687 nvme0n1 : 1.04 2630.85 10.28 0.00 0.00 47903.30 10145.94 48351.00 00:24:28.687 [2024-11-17T01:44:37.147Z] =================================================================================================================== 00:24:28.687 [2024-11-17T01:44:37.147Z] Total : 2630.85 10.28 0.00 0.00 47903.30 10145.94 48351.00 00:24:28.687 { 00:24:28.687 "results": [ 00:24:28.687 { 00:24:28.687 "job": "nvme0n1", 00:24:28.687 "core_mask": "0x2", 00:24:28.687 "workload": "verify", 00:24:28.687 "status": "finished", 00:24:28.687 "verify_range": { 00:24:28.687 "start": 0, 00:24:28.687 "length": 8192 00:24:28.687 }, 00:24:28.687 "queue_depth": 128, 00:24:28.687 "io_size": 4096, 00:24:28.687 "runtime": 1.038067, 00:24:28.687 "iops": 2630.8513804985614, 00:24:28.687 "mibps": 10.276763205072506, 00:24:28.687 "io_failed": 0, 00:24:28.687 "io_timeout": 0, 00:24:28.687 "avg_latency_us": 47903.29909597625, 00:24:28.687 "min_latency_us": 10145.943703703704, 00:24:28.687 "max_latency_us": 48351.00444444444 00:24:28.687 } 00:24:28.687 ], 00:24:28.687 "core_count": 1 00:24:28.687 } 00:24:28.687 02:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:24:28.687 02:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.687 02:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:28.687 02:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.687 02:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:24:28.687 "subsystems": [ 00:24:28.687 { 00:24:28.687 
"subsystem": "keyring", 00:24:28.687 "config": [ 00:24:28.687 { 00:24:28.687 "method": "keyring_file_add_key", 00:24:28.687 "params": { 00:24:28.687 "name": "key0", 00:24:28.687 "path": "/tmp/tmp.B8nRj0KC6p" 00:24:28.687 } 00:24:28.687 } 00:24:28.687 ] 00:24:28.687 }, 00:24:28.687 { 00:24:28.687 "subsystem": "iobuf", 00:24:28.687 "config": [ 00:24:28.687 { 00:24:28.687 "method": "iobuf_set_options", 00:24:28.687 "params": { 00:24:28.687 "small_pool_count": 8192, 00:24:28.687 "large_pool_count": 1024, 00:24:28.687 "small_bufsize": 8192, 00:24:28.687 "large_bufsize": 135168, 00:24:28.687 "enable_numa": false 00:24:28.687 } 00:24:28.687 } 00:24:28.687 ] 00:24:28.687 }, 00:24:28.687 { 00:24:28.687 "subsystem": "sock", 00:24:28.687 "config": [ 00:24:28.687 { 00:24:28.687 "method": "sock_set_default_impl", 00:24:28.687 "params": { 00:24:28.687 "impl_name": "posix" 00:24:28.687 } 00:24:28.687 }, 00:24:28.687 { 00:24:28.687 "method": "sock_impl_set_options", 00:24:28.687 "params": { 00:24:28.687 "impl_name": "ssl", 00:24:28.687 "recv_buf_size": 4096, 00:24:28.687 "send_buf_size": 4096, 00:24:28.687 "enable_recv_pipe": true, 00:24:28.687 "enable_quickack": false, 00:24:28.687 "enable_placement_id": 0, 00:24:28.687 "enable_zerocopy_send_server": true, 00:24:28.687 "enable_zerocopy_send_client": false, 00:24:28.687 "zerocopy_threshold": 0, 00:24:28.687 "tls_version": 0, 00:24:28.687 "enable_ktls": false 00:24:28.687 } 00:24:28.687 }, 00:24:28.687 { 00:24:28.687 "method": "sock_impl_set_options", 00:24:28.687 "params": { 00:24:28.687 "impl_name": "posix", 00:24:28.687 "recv_buf_size": 2097152, 00:24:28.687 "send_buf_size": 2097152, 00:24:28.687 "enable_recv_pipe": true, 00:24:28.687 "enable_quickack": false, 00:24:28.687 "enable_placement_id": 0, 00:24:28.687 "enable_zerocopy_send_server": true, 00:24:28.687 "enable_zerocopy_send_client": false, 00:24:28.687 "zerocopy_threshold": 0, 00:24:28.687 "tls_version": 0, 00:24:28.687 "enable_ktls": false 00:24:28.687 } 00:24:28.687 } 
00:24:28.687 ] 00:24:28.687 }, 00:24:28.687 { 00:24:28.687 "subsystem": "vmd", 00:24:28.687 "config": [] 00:24:28.687 }, 00:24:28.687 { 00:24:28.687 "subsystem": "accel", 00:24:28.687 "config": [ 00:24:28.687 { 00:24:28.687 "method": "accel_set_options", 00:24:28.687 "params": { 00:24:28.687 "small_cache_size": 128, 00:24:28.687 "large_cache_size": 16, 00:24:28.687 "task_count": 2048, 00:24:28.687 "sequence_count": 2048, 00:24:28.687 "buf_count": 2048 00:24:28.687 } 00:24:28.687 } 00:24:28.687 ] 00:24:28.687 }, 00:24:28.687 { 00:24:28.687 "subsystem": "bdev", 00:24:28.687 "config": [ 00:24:28.687 { 00:24:28.687 "method": "bdev_set_options", 00:24:28.687 "params": { 00:24:28.687 "bdev_io_pool_size": 65535, 00:24:28.688 "bdev_io_cache_size": 256, 00:24:28.688 "bdev_auto_examine": true, 00:24:28.688 "iobuf_small_cache_size": 128, 00:24:28.688 "iobuf_large_cache_size": 16 00:24:28.688 } 00:24:28.688 }, 00:24:28.688 { 00:24:28.688 "method": "bdev_raid_set_options", 00:24:28.688 "params": { 00:24:28.688 "process_window_size_kb": 1024, 00:24:28.688 "process_max_bandwidth_mb_sec": 0 00:24:28.688 } 00:24:28.688 }, 00:24:28.688 { 00:24:28.688 "method": "bdev_iscsi_set_options", 00:24:28.688 "params": { 00:24:28.688 "timeout_sec": 30 00:24:28.688 } 00:24:28.688 }, 00:24:28.688 { 00:24:28.688 "method": "bdev_nvme_set_options", 00:24:28.688 "params": { 00:24:28.688 "action_on_timeout": "none", 00:24:28.688 "timeout_us": 0, 00:24:28.688 "timeout_admin_us": 0, 00:24:28.688 "keep_alive_timeout_ms": 10000, 00:24:28.688 "arbitration_burst": 0, 00:24:28.688 "low_priority_weight": 0, 00:24:28.688 "medium_priority_weight": 0, 00:24:28.688 "high_priority_weight": 0, 00:24:28.688 "nvme_adminq_poll_period_us": 10000, 00:24:28.688 "nvme_ioq_poll_period_us": 0, 00:24:28.688 "io_queue_requests": 0, 00:24:28.688 "delay_cmd_submit": true, 00:24:28.688 "transport_retry_count": 4, 00:24:28.688 "bdev_retry_count": 3, 00:24:28.688 "transport_ack_timeout": 0, 00:24:28.688 "ctrlr_loss_timeout_sec": 
0, 00:24:28.688 "reconnect_delay_sec": 0, 00:24:28.688 "fast_io_fail_timeout_sec": 0, 00:24:28.688 "disable_auto_failback": false, 00:24:28.688 "generate_uuids": false, 00:24:28.688 "transport_tos": 0, 00:24:28.688 "nvme_error_stat": false, 00:24:28.688 "rdma_srq_size": 0, 00:24:28.688 "io_path_stat": false, 00:24:28.688 "allow_accel_sequence": false, 00:24:28.688 "rdma_max_cq_size": 0, 00:24:28.688 "rdma_cm_event_timeout_ms": 0, 00:24:28.688 "dhchap_digests": [ 00:24:28.688 "sha256", 00:24:28.688 "sha384", 00:24:28.688 "sha512" 00:24:28.688 ], 00:24:28.688 "dhchap_dhgroups": [ 00:24:28.688 "null", 00:24:28.688 "ffdhe2048", 00:24:28.688 "ffdhe3072", 00:24:28.688 "ffdhe4096", 00:24:28.688 "ffdhe6144", 00:24:28.688 "ffdhe8192" 00:24:28.688 ] 00:24:28.688 } 00:24:28.688 }, 00:24:28.688 { 00:24:28.688 "method": "bdev_nvme_set_hotplug", 00:24:28.688 "params": { 00:24:28.688 "period_us": 100000, 00:24:28.688 "enable": false 00:24:28.688 } 00:24:28.688 }, 00:24:28.688 { 00:24:28.688 "method": "bdev_malloc_create", 00:24:28.688 "params": { 00:24:28.688 "name": "malloc0", 00:24:28.688 "num_blocks": 8192, 00:24:28.688 "block_size": 4096, 00:24:28.688 "physical_block_size": 4096, 00:24:28.688 "uuid": "4e242b87-60f8-40db-ade9-1ca0207d61db", 00:24:28.688 "optimal_io_boundary": 0, 00:24:28.688 "md_size": 0, 00:24:28.688 "dif_type": 0, 00:24:28.688 "dif_is_head_of_md": false, 00:24:28.688 "dif_pi_format": 0 00:24:28.688 } 00:24:28.688 }, 00:24:28.688 { 00:24:28.688 "method": "bdev_wait_for_examine" 00:24:28.688 } 00:24:28.688 ] 00:24:28.688 }, 00:24:28.688 { 00:24:28.688 "subsystem": "nbd", 00:24:28.688 "config": [] 00:24:28.688 }, 00:24:28.688 { 00:24:28.688 "subsystem": "scheduler", 00:24:28.688 "config": [ 00:24:28.688 { 00:24:28.688 "method": "framework_set_scheduler", 00:24:28.688 "params": { 00:24:28.688 "name": "static" 00:24:28.688 } 00:24:28.688 } 00:24:28.688 ] 00:24:28.688 }, 00:24:28.688 { 00:24:28.688 "subsystem": "nvmf", 00:24:28.688 "config": [ 00:24:28.688 { 
00:24:28.688 "method": "nvmf_set_config", 00:24:28.688 "params": { 00:24:28.688 "discovery_filter": "match_any", 00:24:28.688 "admin_cmd_passthru": { 00:24:28.688 "identify_ctrlr": false 00:24:28.688 }, 00:24:28.688 "dhchap_digests": [ 00:24:28.688 "sha256", 00:24:28.688 "sha384", 00:24:28.688 "sha512" 00:24:28.688 ], 00:24:28.688 "dhchap_dhgroups": [ 00:24:28.688 "null", 00:24:28.688 "ffdhe2048", 00:24:28.688 "ffdhe3072", 00:24:28.688 "ffdhe4096", 00:24:28.688 "ffdhe6144", 00:24:28.688 "ffdhe8192" 00:24:28.688 ] 00:24:28.688 } 00:24:28.688 }, 00:24:28.688 { 00:24:28.688 "method": "nvmf_set_max_subsystems", 00:24:28.688 "params": { 00:24:28.688 "max_subsystems": 1024 00:24:28.688 } 00:24:28.688 }, 00:24:28.688 { 00:24:28.688 "method": "nvmf_set_crdt", 00:24:28.688 "params": { 00:24:28.688 "crdt1": 0, 00:24:28.688 "crdt2": 0, 00:24:28.688 "crdt3": 0 00:24:28.688 } 00:24:28.688 }, 00:24:28.688 { 00:24:28.688 "method": "nvmf_create_transport", 00:24:28.688 "params": { 00:24:28.688 "trtype": "TCP", 00:24:28.688 "max_queue_depth": 128, 00:24:28.688 "max_io_qpairs_per_ctrlr": 127, 00:24:28.688 "in_capsule_data_size": 4096, 00:24:28.688 "max_io_size": 131072, 00:24:28.688 "io_unit_size": 131072, 00:24:28.688 "max_aq_depth": 128, 00:24:28.688 "num_shared_buffers": 511, 00:24:28.688 "buf_cache_size": 4294967295, 00:24:28.688 "dif_insert_or_strip": false, 00:24:28.688 "zcopy": false, 00:24:28.688 "c2h_success": false, 00:24:28.688 "sock_priority": 0, 00:24:28.688 "abort_timeout_sec": 1, 00:24:28.688 "ack_timeout": 0, 00:24:28.688 "data_wr_pool_size": 0 00:24:28.688 } 00:24:28.688 }, 00:24:28.688 { 00:24:28.688 "method": "nvmf_create_subsystem", 00:24:28.688 "params": { 00:24:28.688 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:28.688 "allow_any_host": false, 00:24:28.688 "serial_number": "00000000000000000000", 00:24:28.688 "model_number": "SPDK bdev Controller", 00:24:28.688 "max_namespaces": 32, 00:24:28.688 "min_cntlid": 1, 00:24:28.688 "max_cntlid": 65519, 00:24:28.688 
"ana_reporting": false 00:24:28.688 } 00:24:28.688 }, 00:24:28.688 { 00:24:28.688 "method": "nvmf_subsystem_add_host", 00:24:28.688 "params": { 00:24:28.688 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:28.688 "host": "nqn.2016-06.io.spdk:host1", 00:24:28.688 "psk": "key0" 00:24:28.688 } 00:24:28.688 }, 00:24:28.688 { 00:24:28.688 "method": "nvmf_subsystem_add_ns", 00:24:28.688 "params": { 00:24:28.688 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:28.688 "namespace": { 00:24:28.688 "nsid": 1, 00:24:28.688 "bdev_name": "malloc0", 00:24:28.688 "nguid": "4E242B8760F840DBADE91CA0207D61DB", 00:24:28.688 "uuid": "4e242b87-60f8-40db-ade9-1ca0207d61db", 00:24:28.688 "no_auto_visible": false 00:24:28.688 } 00:24:28.688 } 00:24:28.688 }, 00:24:28.688 { 00:24:28.688 "method": "nvmf_subsystem_add_listener", 00:24:28.688 "params": { 00:24:28.688 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:28.688 "listen_address": { 00:24:28.688 "trtype": "TCP", 00:24:28.688 "adrfam": "IPv4", 00:24:28.688 "traddr": "10.0.0.2", 00:24:28.688 "trsvcid": "4420" 00:24:28.688 }, 00:24:28.688 "secure_channel": false, 00:24:28.688 "sock_impl": "ssl" 00:24:28.688 } 00:24:28.688 } 00:24:28.688 ] 00:24:28.688 } 00:24:28.688 ] 00:24:28.688 }' 00:24:28.688 02:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:29.254 02:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:24:29.254 "subsystems": [ 00:24:29.254 { 00:24:29.254 "subsystem": "keyring", 00:24:29.254 "config": [ 00:24:29.254 { 00:24:29.254 "method": "keyring_file_add_key", 00:24:29.254 "params": { 00:24:29.254 "name": "key0", 00:24:29.254 "path": "/tmp/tmp.B8nRj0KC6p" 00:24:29.254 } 00:24:29.254 } 00:24:29.254 ] 00:24:29.254 }, 00:24:29.254 { 00:24:29.254 "subsystem": "iobuf", 00:24:29.254 "config": [ 00:24:29.254 { 00:24:29.254 "method": "iobuf_set_options", 00:24:29.254 "params": { 00:24:29.254 
"small_pool_count": 8192, 00:24:29.254 "large_pool_count": 1024, 00:24:29.254 "small_bufsize": 8192, 00:24:29.254 "large_bufsize": 135168, 00:24:29.254 "enable_numa": false 00:24:29.254 } 00:24:29.254 } 00:24:29.254 ] 00:24:29.254 }, 00:24:29.254 { 00:24:29.254 "subsystem": "sock", 00:24:29.254 "config": [ 00:24:29.254 { 00:24:29.254 "method": "sock_set_default_impl", 00:24:29.254 "params": { 00:24:29.254 "impl_name": "posix" 00:24:29.254 } 00:24:29.254 }, 00:24:29.254 { 00:24:29.254 "method": "sock_impl_set_options", 00:24:29.254 "params": { 00:24:29.254 "impl_name": "ssl", 00:24:29.254 "recv_buf_size": 4096, 00:24:29.254 "send_buf_size": 4096, 00:24:29.255 "enable_recv_pipe": true, 00:24:29.255 "enable_quickack": false, 00:24:29.255 "enable_placement_id": 0, 00:24:29.255 "enable_zerocopy_send_server": true, 00:24:29.255 "enable_zerocopy_send_client": false, 00:24:29.255 "zerocopy_threshold": 0, 00:24:29.255 "tls_version": 0, 00:24:29.255 "enable_ktls": false 00:24:29.255 } 00:24:29.255 }, 00:24:29.255 { 00:24:29.255 "method": "sock_impl_set_options", 00:24:29.255 "params": { 00:24:29.255 "impl_name": "posix", 00:24:29.255 "recv_buf_size": 2097152, 00:24:29.255 "send_buf_size": 2097152, 00:24:29.255 "enable_recv_pipe": true, 00:24:29.255 "enable_quickack": false, 00:24:29.255 "enable_placement_id": 0, 00:24:29.255 "enable_zerocopy_send_server": true, 00:24:29.255 "enable_zerocopy_send_client": false, 00:24:29.255 "zerocopy_threshold": 0, 00:24:29.255 "tls_version": 0, 00:24:29.255 "enable_ktls": false 00:24:29.255 } 00:24:29.255 } 00:24:29.255 ] 00:24:29.255 }, 00:24:29.255 { 00:24:29.255 "subsystem": "vmd", 00:24:29.255 "config": [] 00:24:29.255 }, 00:24:29.255 { 00:24:29.255 "subsystem": "accel", 00:24:29.255 "config": [ 00:24:29.255 { 00:24:29.255 "method": "accel_set_options", 00:24:29.255 "params": { 00:24:29.255 "small_cache_size": 128, 00:24:29.255 "large_cache_size": 16, 00:24:29.255 "task_count": 2048, 00:24:29.255 "sequence_count": 2048, 00:24:29.255 
"buf_count": 2048 00:24:29.255 } 00:24:29.255 } 00:24:29.255 ] 00:24:29.255 }, 00:24:29.255 { 00:24:29.255 "subsystem": "bdev", 00:24:29.255 "config": [ 00:24:29.255 { 00:24:29.255 "method": "bdev_set_options", 00:24:29.255 "params": { 00:24:29.255 "bdev_io_pool_size": 65535, 00:24:29.255 "bdev_io_cache_size": 256, 00:24:29.255 "bdev_auto_examine": true, 00:24:29.255 "iobuf_small_cache_size": 128, 00:24:29.255 "iobuf_large_cache_size": 16 00:24:29.255 } 00:24:29.255 }, 00:24:29.255 { 00:24:29.255 "method": "bdev_raid_set_options", 00:24:29.255 "params": { 00:24:29.255 "process_window_size_kb": 1024, 00:24:29.255 "process_max_bandwidth_mb_sec": 0 00:24:29.255 } 00:24:29.255 }, 00:24:29.255 { 00:24:29.255 "method": "bdev_iscsi_set_options", 00:24:29.255 "params": { 00:24:29.255 "timeout_sec": 30 00:24:29.255 } 00:24:29.255 }, 00:24:29.255 { 00:24:29.255 "method": "bdev_nvme_set_options", 00:24:29.255 "params": { 00:24:29.255 "action_on_timeout": "none", 00:24:29.255 "timeout_us": 0, 00:24:29.255 "timeout_admin_us": 0, 00:24:29.255 "keep_alive_timeout_ms": 10000, 00:24:29.255 "arbitration_burst": 0, 00:24:29.255 "low_priority_weight": 0, 00:24:29.255 "medium_priority_weight": 0, 00:24:29.255 "high_priority_weight": 0, 00:24:29.255 "nvme_adminq_poll_period_us": 10000, 00:24:29.255 "nvme_ioq_poll_period_us": 0, 00:24:29.255 "io_queue_requests": 512, 00:24:29.255 "delay_cmd_submit": true, 00:24:29.255 "transport_retry_count": 4, 00:24:29.255 "bdev_retry_count": 3, 00:24:29.255 "transport_ack_timeout": 0, 00:24:29.255 "ctrlr_loss_timeout_sec": 0, 00:24:29.255 "reconnect_delay_sec": 0, 00:24:29.255 "fast_io_fail_timeout_sec": 0, 00:24:29.255 "disable_auto_failback": false, 00:24:29.255 "generate_uuids": false, 00:24:29.255 "transport_tos": 0, 00:24:29.255 "nvme_error_stat": false, 00:24:29.255 "rdma_srq_size": 0, 00:24:29.255 "io_path_stat": false, 00:24:29.255 "allow_accel_sequence": false, 00:24:29.255 "rdma_max_cq_size": 0, 00:24:29.255 "rdma_cm_event_timeout_ms": 0, 
00:24:29.255 "dhchap_digests": [ 00:24:29.255 "sha256", 00:24:29.255 "sha384", 00:24:29.255 "sha512" 00:24:29.255 ], 00:24:29.255 "dhchap_dhgroups": [ 00:24:29.255 "null", 00:24:29.255 "ffdhe2048", 00:24:29.255 "ffdhe3072", 00:24:29.255 "ffdhe4096", 00:24:29.255 "ffdhe6144", 00:24:29.255 "ffdhe8192" 00:24:29.255 ] 00:24:29.255 } 00:24:29.255 }, 00:24:29.255 { 00:24:29.255 "method": "bdev_nvme_attach_controller", 00:24:29.255 "params": { 00:24:29.255 "name": "nvme0", 00:24:29.255 "trtype": "TCP", 00:24:29.255 "adrfam": "IPv4", 00:24:29.255 "traddr": "10.0.0.2", 00:24:29.255 "trsvcid": "4420", 00:24:29.255 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:29.255 "prchk_reftag": false, 00:24:29.255 "prchk_guard": false, 00:24:29.255 "ctrlr_loss_timeout_sec": 0, 00:24:29.255 "reconnect_delay_sec": 0, 00:24:29.255 "fast_io_fail_timeout_sec": 0, 00:24:29.255 "psk": "key0", 00:24:29.255 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:29.255 "hdgst": false, 00:24:29.255 "ddgst": false, 00:24:29.255 "multipath": "multipath" 00:24:29.255 } 00:24:29.255 }, 00:24:29.255 { 00:24:29.255 "method": "bdev_nvme_set_hotplug", 00:24:29.255 "params": { 00:24:29.255 "period_us": 100000, 00:24:29.255 "enable": false 00:24:29.255 } 00:24:29.255 }, 00:24:29.255 { 00:24:29.255 "method": "bdev_enable_histogram", 00:24:29.255 "params": { 00:24:29.255 "name": "nvme0n1", 00:24:29.255 "enable": true 00:24:29.255 } 00:24:29.255 }, 00:24:29.255 { 00:24:29.255 "method": "bdev_wait_for_examine" 00:24:29.255 } 00:24:29.255 ] 00:24:29.255 }, 00:24:29.255 { 00:24:29.255 "subsystem": "nbd", 00:24:29.255 "config": [] 00:24:29.255 } 00:24:29.255 ] 00:24:29.255 }' 00:24:29.255 02:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 3006608 00:24:29.255 02:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3006608 ']' 00:24:29.255 02:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3006608 00:24:29.255 02:44:37 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:29.255 02:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:29.255 02:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3006608 00:24:29.255 02:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:29.255 02:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:29.255 02:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3006608' 00:24:29.255 killing process with pid 3006608 00:24:29.255 02:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3006608 00:24:29.255 Received shutdown signal, test time was about 1.000000 seconds 00:24:29.255 00:24:29.255 Latency(us) 00:24:29.255 [2024-11-17T01:44:37.715Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:29.255 [2024-11-17T01:44:37.715Z] =================================================================================================================== 00:24:29.255 [2024-11-17T01:44:37.715Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:29.255 02:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3006608 00:24:30.189 02:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 3006450 00:24:30.189 02:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3006450 ']' 00:24:30.189 02:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3006450 00:24:30.189 02:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:30.189 02:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:30.189 
02:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3006450 00:24:30.189 02:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:30.189 02:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:30.189 02:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3006450' 00:24:30.189 killing process with pid 3006450 00:24:30.189 02:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3006450 00:24:30.189 02:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3006450 00:24:31.563 02:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:24:31.563 02:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:24:31.563 "subsystems": [ 00:24:31.563 { 00:24:31.563 "subsystem": "keyring", 00:24:31.563 "config": [ 00:24:31.563 { 00:24:31.563 "method": "keyring_file_add_key", 00:24:31.563 "params": { 00:24:31.563 "name": "key0", 00:24:31.563 "path": "/tmp/tmp.B8nRj0KC6p" 00:24:31.563 } 00:24:31.563 } 00:24:31.564 ] 00:24:31.564 }, 00:24:31.564 { 00:24:31.564 "subsystem": "iobuf", 00:24:31.564 "config": [ 00:24:31.564 { 00:24:31.564 "method": "iobuf_set_options", 00:24:31.564 "params": { 00:24:31.564 "small_pool_count": 8192, 00:24:31.564 "large_pool_count": 1024, 00:24:31.564 "small_bufsize": 8192, 00:24:31.564 "large_bufsize": 135168, 00:24:31.564 "enable_numa": false 00:24:31.564 } 00:24:31.564 } 00:24:31.564 ] 00:24:31.564 }, 00:24:31.564 { 00:24:31.564 "subsystem": "sock", 00:24:31.564 "config": [ 00:24:31.564 { 00:24:31.564 "method": "sock_set_default_impl", 00:24:31.564 "params": { 00:24:31.564 "impl_name": "posix" 00:24:31.564 } 00:24:31.564 }, 00:24:31.564 { 00:24:31.564 "method": "sock_impl_set_options", 00:24:31.564 
"params": { 00:24:31.564 "impl_name": "ssl", 00:24:31.564 "recv_buf_size": 4096, 00:24:31.564 "send_buf_size": 4096, 00:24:31.564 "enable_recv_pipe": true, 00:24:31.564 "enable_quickack": false, 00:24:31.564 "enable_placement_id": 0, 00:24:31.564 "enable_zerocopy_send_server": true, 00:24:31.564 "enable_zerocopy_send_client": false, 00:24:31.564 "zerocopy_threshold": 0, 00:24:31.564 "tls_version": 0, 00:24:31.564 "enable_ktls": false 00:24:31.564 } 00:24:31.564 }, 00:24:31.564 { 00:24:31.564 "method": "sock_impl_set_options", 00:24:31.564 "params": { 00:24:31.564 "impl_name": "posix", 00:24:31.564 "recv_buf_size": 2097152, 00:24:31.564 "send_buf_size": 2097152, 00:24:31.564 "enable_recv_pipe": true, 00:24:31.564 "enable_quickack": false, 00:24:31.564 "enable_placement_id": 0, 00:24:31.564 "enable_zerocopy_send_server": true, 00:24:31.564 "enable_zerocopy_send_client": false, 00:24:31.564 "zerocopy_threshold": 0, 00:24:31.564 "tls_version": 0, 00:24:31.564 "enable_ktls": false 00:24:31.564 } 00:24:31.564 } 00:24:31.564 ] 00:24:31.564 }, 00:24:31.564 { 00:24:31.564 "subsystem": "vmd", 00:24:31.564 "config": [] 00:24:31.564 }, 00:24:31.564 { 00:24:31.564 "subsystem": "accel", 00:24:31.564 "config": [ 00:24:31.564 { 00:24:31.564 "method": "accel_set_options", 00:24:31.564 "params": { 00:24:31.564 "small_cache_size": 128, 00:24:31.564 "large_cache_size": 16, 00:24:31.564 "task_count": 2048, 00:24:31.564 "sequence_count": 2048, 00:24:31.564 "buf_count": 2048 00:24:31.564 } 00:24:31.564 } 00:24:31.564 ] 00:24:31.564 }, 00:24:31.564 { 00:24:31.564 "subsystem": "bdev", 00:24:31.564 "config": [ 00:24:31.564 { 00:24:31.564 "method": "bdev_set_options", 00:24:31.564 "params": { 00:24:31.564 "bdev_io_pool_size": 65535, 00:24:31.564 "bdev_io_cache_size": 256, 00:24:31.564 "bdev_auto_examine": true, 00:24:31.564 "iobuf_small_cache_size": 128, 00:24:31.564 "iobuf_large_cache_size": 16 00:24:31.564 } 00:24:31.564 }, 00:24:31.564 { 00:24:31.564 "method": "bdev_raid_set_options", 
00:24:31.564 "params": { 00:24:31.564 "process_window_size_kb": 1024, 00:24:31.564 "process_max_bandwidth_mb_sec": 0 00:24:31.564 } 00:24:31.564 }, 00:24:31.564 { 00:24:31.564 "method": "bdev_iscsi_set_options", 00:24:31.564 "params": { 00:24:31.564 "timeout_sec": 30 00:24:31.564 } 00:24:31.564 }, 00:24:31.564 { 00:24:31.564 "method": "bdev_nvme_set_options", 00:24:31.564 "params": { 00:24:31.564 "action_on_timeout": "none", 00:24:31.564 "timeout_us": 0, 00:24:31.564 "timeout_admin_us": 0, 00:24:31.564 "keep_alive_timeout_ms": 10000, 00:24:31.564 "arbitration_burst": 0, 00:24:31.564 "low_priority_weight": 0, 00:24:31.564 "medium_priority_weight": 0, 00:24:31.564 "high_priority_weight": 0, 00:24:31.564 "nvme_adminq_poll_period_us": 10000, 00:24:31.564 "nvme_ioq_poll_period_us": 0, 00:24:31.564 "io_queue_requests": 0, 00:24:31.564 "delay_cmd_submit": true, 00:24:31.564 "transport_retry_count": 4, 00:24:31.564 "bdev_retry_count": 3, 00:24:31.564 "transport_ack_timeout": 0, 00:24:31.564 "ctrlr_loss_timeout_sec": 0, 00:24:31.564 "reconnect_delay_sec": 0, 00:24:31.564 "fast_io_fail_timeout_sec": 0, 00:24:31.564 "disable_auto_failback": false, 00:24:31.564 "generate_uuids": false, 00:24:31.564 "transport_tos": 0, 00:24:31.564 "nvme_error_stat": false, 00:24:31.564 "rdma_srq_size": 0, 00:24:31.564 "io_path_stat": false, 00:24:31.564 "allow_accel_sequence": false, 00:24:31.564 "rdma_max_cq_size": 0, 00:24:31.564 "rdma_cm_event_timeout_ms": 0, 00:24:31.564 "dhchap_digests": [ 00:24:31.564 "sha256", 00:24:31.564 "sha384", 00:24:31.564 "sha512" 00:24:31.564 ], 00:24:31.564 "dhchap_dhgroups": [ 00:24:31.564 "null", 00:24:31.564 "ffdhe2048", 00:24:31.564 "ffdhe3072", 00:24:31.564 "ffdhe4096", 00:24:31.564 "ffdhe6144", 00:24:31.564 "ffdhe8192" 00:24:31.564 ] 00:24:31.564 } 00:24:31.564 }, 00:24:31.564 { 00:24:31.564 "method": "bdev_nvme_set_hotplug", 00:24:31.564 "params": { 00:24:31.564 "period_us": 100000, 00:24:31.564 "enable": false 00:24:31.564 } 00:24:31.564 }, 00:24:31.564 
{ 00:24:31.564 "method": "bdev_malloc_create", 00:24:31.564 "params": { 00:24:31.564 "name": "malloc0", 00:24:31.564 "num_blocks": 8192, 00:24:31.564 "block_size": 4096, 00:24:31.564 "physical_block_size": 4096, 00:24:31.564 "uuid": "4e242b87-60f8-40db-ade9-1ca0207d61db", 00:24:31.564 "optimal_io_boundary": 0, 00:24:31.564 "md_size": 0, 00:24:31.564 "dif_type": 0, 00:24:31.564 "dif_is_head_of_md": false, 00:24:31.564 "dif_pi_format": 0 00:24:31.564 } 00:24:31.564 }, 00:24:31.564 { 00:24:31.564 "method": "bdev_wait_for_examine" 00:24:31.564 } 00:24:31.564 ] 00:24:31.564 }, 00:24:31.564 { 00:24:31.564 "subsystem": "nbd", 00:24:31.564 "config": [] 00:24:31.564 }, 00:24:31.564 { 00:24:31.564 "subsystem": "scheduler", 00:24:31.564 "config": [ 00:24:31.564 { 00:24:31.564 "method": "framework_set_scheduler", 00:24:31.564 "params": { 00:24:31.564 "name": "static" 00:24:31.564 } 00:24:31.564 } 00:24:31.564 ] 00:24:31.564 }, 00:24:31.564 { 00:24:31.564 "subsystem": "nvmf", 00:24:31.564 "config": [ 00:24:31.564 { 00:24:31.564 "method": "nvmf_set_config", 00:24:31.564 "params": { 00:24:31.564 "discovery_filter": "match_any", 00:24:31.564 "admin_cmd_passthru": { 00:24:31.564 "identify_ctrlr": false 00:24:31.564 }, 00:24:31.564 "dhchap_digests": [ 00:24:31.564 "sha256", 00:24:31.564 "sha384", 00:24:31.564 "sha512" 00:24:31.564 ], 00:24:31.564 "dhchap_dhgroups": [ 00:24:31.564 "null", 00:24:31.564 "ffdhe2048", 00:24:31.564 "ffdhe3072", 00:24:31.564 "ffdhe4096", 00:24:31.564 "ffdhe6144", 00:24:31.564 "ffdhe8192" 00:24:31.564 ] 00:24:31.564 } 00:24:31.564 }, 00:24:31.564 { 00:24:31.564 "method": "nvmf_set_max_subsystems", 00:24:31.564 "params": { 00:24:31.564 "max_subsystems": 1024 00:24:31.564 } 00:24:31.564 }, 00:24:31.564 { 00:24:31.564 "method": "nvmf_set_crdt", 00:24:31.564 "params": { 00:24:31.564 "crdt1": 0, 00:24:31.564 "crdt2": 0, 00:24:31.564 "crdt3": 0 00:24:31.564 } 00:24:31.564 }, 00:24:31.564 { 00:24:31.564 "method": "nvmf_create_transport", 00:24:31.564 "params": { 
00:24:31.564 "trtype": "TCP", 00:24:31.564 "max_queue_depth": 128, 00:24:31.564 "max_io_qpairs_per_ctrlr": 127, 00:24:31.564 "in_capsule_data_size": 4096, 00:24:31.564 "max_io_size": 131072, 00:24:31.564 "io_unit_size": 131072, 00:24:31.564 "max_aq_depth": 128, 00:24:31.564 02:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:31.564 "num_shared_buffers": 511, 00:24:31.564 "buf_cache_size": 4294967295, 00:24:31.564 "dif_insert_or_strip": false, 00:24:31.564 "zcopy": false, 00:24:31.564 "c2h_success": false, 00:24:31.564 "sock_priority": 0, 00:24:31.564 "abort_timeout_sec": 1, 00:24:31.564 "ack_timeout": 0, 00:24:31.564 "data_wr_pool_size": 0 00:24:31.564 } 00:24:31.564 }, 00:24:31.564 { 00:24:31.564 "method": "nvmf_create_subsystem", 00:24:31.564 "params": { 00:24:31.564 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:31.565 "allow_any_host": false, 00:24:31.565 "serial_number": "00000000000000000000", 00:24:31.565 "model_number": "SPDK bdev Controller", 00:24:31.565 "max_namespaces": 32, 00:24:31.565 "min_cntlid": 1, 00:24:31.565 "max_cntlid": 65519, 00:24:31.565 "ana_reporting": false 00:24:31.565 } 00:24:31.565 }, 00:24:31.565 { 00:24:31.565 "method": "nvmf_subsystem_add_host", 00:24:31.565 "params": { 00:24:31.565 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:31.565 "host": "nqn.2016-06.io.spdk:host1", 00:24:31.565 "psk": "key0" 00:24:31.565 } 00:24:31.565 }, 00:24:31.565 { 00:24:31.565 "method": "nvmf_subsystem_add_ns", 00:24:31.565 "params": { 00:24:31.565 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:31.565 "namespace": { 00:24:31.565 "nsid": 1, 00:24:31.565 "bdev_name": "malloc0", 00:24:31.565 "nguid": "4E242B8760F840DBADE91CA0207D61DB", 00:24:31.565 "uuid": "4e242b87-60f8-40db-ade9-1ca0207d61db", 00:24:31.565 "no_auto_visible": false 00:24:31.565 } 00:24:31.565 } 00:24:31.565 }, 00:24:31.565 { 00:24:31.565 "method": "nvmf_subsystem_add_listener", 00:24:31.565 "params": { 00:24:31.565 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:24:31.565 "listen_address": { 00:24:31.565 "trtype": "TCP", 00:24:31.565 "adrfam": "IPv4", 00:24:31.565 "traddr": "10.0.0.2", 00:24:31.565 "trsvcid": "4420" 00:24:31.565 }, 00:24:31.565 "secure_channel": false, 00:24:31.565 "sock_impl": "ssl" 00:24:31.565 } 00:24:31.565 } 00:24:31.565 ] 00:24:31.565 } 00:24:31.565 ] 00:24:31.565 }' 00:24:31.565 02:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:31.565 02:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:31.565 02:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3007281 00:24:31.565 02:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:24:31.565 02:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3007281 00:24:31.565 02:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3007281 ']' 00:24:31.565 02:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:31.565 02:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:31.565 02:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:31.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:31.565 02:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:31.565 02:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:31.565 [2024-11-17 02:44:39.732573] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:24:31.565 [2024-11-17 02:44:39.732735] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:31.565 [2024-11-17 02:44:39.894265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:31.823 [2024-11-17 02:44:40.033036] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:31.823 [2024-11-17 02:44:40.033120] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:31.823 [2024-11-17 02:44:40.033157] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:31.823 [2024-11-17 02:44:40.033181] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:31.823 [2024-11-17 02:44:40.033201] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:31.823 [2024-11-17 02:44:40.034905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:32.390 [2024-11-17 02:44:40.571432] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:32.390 [2024-11-17 02:44:40.603489] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:32.390 [2024-11-17 02:44:40.603794] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:32.390 02:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:32.390 02:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:32.390 02:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:32.390 02:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:32.390 02:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:32.390 02:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:32.390 02:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=3007432 00:24:32.390 02:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 3007432 /var/tmp/bdevperf.sock 00:24:32.390 02:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3007432 ']' 00:24:32.390 02:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:32.390 02:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:32.390 02:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c 
/dev/fd/63 00:24:32.390 02:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:32.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:32.390 02:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:32.390 02:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:24:32.390 "subsystems": [ 00:24:32.390 { 00:24:32.390 "subsystem": "keyring", 00:24:32.390 "config": [ 00:24:32.390 { 00:24:32.390 "method": "keyring_file_add_key", 00:24:32.390 "params": { 00:24:32.390 "name": "key0", 00:24:32.390 "path": "/tmp/tmp.B8nRj0KC6p" 00:24:32.390 } 00:24:32.390 } 00:24:32.390 ] 00:24:32.390 }, 00:24:32.390 { 00:24:32.390 "subsystem": "iobuf", 00:24:32.390 "config": [ 00:24:32.390 { 00:24:32.390 "method": "iobuf_set_options", 00:24:32.390 "params": { 00:24:32.390 "small_pool_count": 8192, 00:24:32.390 "large_pool_count": 1024, 00:24:32.390 "small_bufsize": 8192, 00:24:32.390 "large_bufsize": 135168, 00:24:32.390 "enable_numa": false 00:24:32.390 } 00:24:32.390 } 00:24:32.390 ] 00:24:32.390 }, 00:24:32.390 { 00:24:32.390 "subsystem": "sock", 00:24:32.390 "config": [ 00:24:32.390 { 00:24:32.390 "method": "sock_set_default_impl", 00:24:32.390 "params": { 00:24:32.390 "impl_name": "posix" 00:24:32.390 } 00:24:32.390 }, 00:24:32.390 { 00:24:32.390 "method": "sock_impl_set_options", 00:24:32.390 "params": { 00:24:32.390 "impl_name": "ssl", 00:24:32.390 "recv_buf_size": 4096, 00:24:32.390 "send_buf_size": 4096, 00:24:32.390 "enable_recv_pipe": true, 00:24:32.390 "enable_quickack": false, 00:24:32.390 "enable_placement_id": 0, 00:24:32.390 "enable_zerocopy_send_server": true, 00:24:32.390 "enable_zerocopy_send_client": false, 00:24:32.390 "zerocopy_threshold": 0, 00:24:32.390 "tls_version": 0, 00:24:32.390 "enable_ktls": false 00:24:32.390 } 
00:24:32.390 }, 00:24:32.391 { 00:24:32.391 "method": "sock_impl_set_options", 00:24:32.391 "params": { 00:24:32.391 "impl_name": "posix", 00:24:32.391 "recv_buf_size": 2097152, 00:24:32.391 "send_buf_size": 2097152, 00:24:32.391 "enable_recv_pipe": true, 00:24:32.391 "enable_quickack": false, 00:24:32.391 "enable_placement_id": 0, 00:24:32.391 "enable_zerocopy_send_server": true, 00:24:32.391 "enable_zerocopy_send_client": false, 00:24:32.391 "zerocopy_threshold": 0, 00:24:32.391 "tls_version": 0, 00:24:32.391 "enable_ktls": false 00:24:32.391 } 00:24:32.391 } 00:24:32.391 ] 00:24:32.391 }, 00:24:32.391 { 00:24:32.391 "subsystem": "vmd", 00:24:32.391 "config": [] 00:24:32.391 }, 00:24:32.391 { 00:24:32.391 "subsystem": "accel", 00:24:32.391 "config": [ 00:24:32.391 { 00:24:32.391 "method": "accel_set_options", 00:24:32.391 "params": { 00:24:32.391 "small_cache_size": 128, 00:24:32.391 "large_cache_size": 16, 00:24:32.391 "task_count": 2048, 00:24:32.391 "sequence_count": 2048, 00:24:32.391 "buf_count": 2048 00:24:32.391 } 00:24:32.391 } 00:24:32.391 ] 00:24:32.391 }, 00:24:32.391 { 00:24:32.391 "subsystem": "bdev", 00:24:32.391 "config": [ 00:24:32.391 { 00:24:32.391 "method": "bdev_set_options", 00:24:32.391 "params": { 00:24:32.391 "bdev_io_pool_size": 65535, 00:24:32.391 "bdev_io_cache_size": 256, 00:24:32.391 "bdev_auto_examine": true, 00:24:32.391 "iobuf_small_cache_size": 128, 00:24:32.391 "iobuf_large_cache_size": 16 00:24:32.391 } 00:24:32.391 }, 00:24:32.391 { 00:24:32.391 "method": "bdev_raid_set_options", 00:24:32.391 "params": { 00:24:32.391 "process_window_size_kb": 1024, 00:24:32.391 "process_max_bandwidth_mb_sec": 0 00:24:32.391 } 00:24:32.391 }, 00:24:32.391 { 00:24:32.391 "method": "bdev_iscsi_set_options", 00:24:32.391 "params": { 00:24:32.391 "timeout_sec": 30 00:24:32.391 } 00:24:32.391 }, 00:24:32.391 { 00:24:32.391 "method": "bdev_nvme_set_options", 00:24:32.391 "params": { 00:24:32.391 "action_on_timeout": "none", 00:24:32.391 "timeout_us": 
0, 00:24:32.391 "timeout_admin_us": 0, 00:24:32.391 "keep_alive_timeout_ms": 10000, 00:24:32.391 "arbitration_burst": 0, 00:24:32.391 "low_priority_weight": 0, 00:24:32.391 "medium_priority_weight": 0, 00:24:32.391 "high_priority_weight": 0, 00:24:32.391 "nvme_adminq_poll_period_us": 10000, 00:24:32.391 "nvme_ioq_poll_period_us": 0, 00:24:32.391 "io_queue_requests": 512, 00:24:32.391 "delay_cmd_submit": true, 00:24:32.391 "transport_retry_count": 4, 00:24:32.391 "bdev_retry_count": 3, 00:24:32.391 "transport_ack_timeout": 0, 00:24:32.391 "ctrlr_loss_timeout_sec": 0, 00:24:32.391 "reconnect_delay_sec": 0, 00:24:32.391 "fast_io_fail_timeout_sec": 0, 00:24:32.391 "disable_auto_failback": false, 00:24:32.391 "generate_uuids": false, 00:24:32.391 "transport_tos": 0, 00:24:32.391 "nvme_error_stat": false, 00:24:32.391 "rdma_srq_size": 0, 00:24:32.391 "io_path_stat": false, 00:24:32.391 "allow_accel_sequence": false, 00:24:32.391 "rdma_max_cq_size": 0, 00:24:32.391 "rdma_cm_event_timeout_ms": 0, 00:24:32.391 "dhchap_digests": [ 00:24:32.391 "sha256", 00:24:32.391 "sha384", 00:24:32.391 "sha512" 00:24:32.391 ], 00:24:32.391 "dhchap_dhgroups": [ 00:24:32.391 "null", 00:24:32.391 "ffdhe2048", 00:24:32.391 "ffdhe3072", 00:24:32.391 "ffdhe4096", 00:24:32.391 "ffdhe6144", 00:24:32.391 "ffdhe8192" 00:24:32.391 ] 00:24:32.391 } 00:24:32.391 }, 00:24:32.391 { 00:24:32.391 "method": "bdev_nvme_attach_controller", 00:24:32.391 "params": { 00:24:32.391 "name": "nvme0", 00:24:32.391 "trtype": "TCP", 00:24:32.391 "adrfam": "IPv4", 00:24:32.391 "traddr": "10.0.0.2", 00:24:32.391 "trsvcid": "4420", 00:24:32.391 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:32.391 "prchk_reftag": false, 00:24:32.391 "prchk_guard": false, 00:24:32.391 "ctrlr_loss_timeout_sec": 0, 00:24:32.391 "reconnect_delay_sec": 0, 00:24:32.391 "fast_io_fail_timeout_sec": 0, 00:24:32.391 "psk": "key0", 00:24:32.391 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:32.391 "hdgst": false, 00:24:32.391 "ddgst": false, 
00:24:32.391 "multipath": "multipath" 00:24:32.391 } 00:24:32.391 }, 00:24:32.391 { 00:24:32.391 "method": "bdev_nvme_set_hotplug", 00:24:32.391 "params": { 00:24:32.391 "period_us": 100000, 00:24:32.391 "enable": false 00:24:32.391 } 00:24:32.391 }, 00:24:32.391 { 00:24:32.391 "method": "bdev_enable_histogram", 00:24:32.391 "params": { 00:24:32.391 "name": "nvme0n1", 00:24:32.391 "enable": true 00:24:32.391 } 00:24:32.391 }, 00:24:32.391 { 00:24:32.391 "method": "bdev_wait_for_examine" 00:24:32.391 } 00:24:32.391 ] 00:24:32.391 }, 00:24:32.391 { 00:24:32.391 "subsystem": "nbd", 00:24:32.391 "config": [] 00:24:32.391 } 00:24:32.391 ] 00:24:32.391 }' 00:24:32.391 02:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:32.391 [2024-11-17 02:44:40.779729] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:24:32.391 [2024-11-17 02:44:40.779877] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3007432 ] 00:24:32.650 [2024-11-17 02:44:40.926439] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:32.650 [2024-11-17 02:44:41.063948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:33.256 [2024-11-17 02:44:41.503607] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:33.538 02:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:33.539 02:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:33.539 02:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:33.539 02:44:41 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:24:33.797 02:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:33.797 02:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:33.797 Running I/O for 1 seconds... 00:24:34.732 2056.00 IOPS, 8.03 MiB/s 00:24:34.732 Latency(us) 00:24:34.732 [2024-11-17T01:44:43.192Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:34.732 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:34.732 Verification LBA range: start 0x0 length 0x2000 00:24:34.732 nvme0n1 : 1.05 2077.81 8.12 0.00 0.00 60262.87 8689.59 55535.69 00:24:34.732 [2024-11-17T01:44:43.192Z] =================================================================================================================== 00:24:34.732 [2024-11-17T01:44:43.192Z] Total : 2077.81 8.12 0.00 0.00 60262.87 8689.59 55535.69 00:24:34.732 { 00:24:34.732 "results": [ 00:24:34.732 { 00:24:34.732 "job": "nvme0n1", 00:24:34.732 "core_mask": "0x2", 00:24:34.732 "workload": "verify", 00:24:34.732 "status": "finished", 00:24:34.732 "verify_range": { 00:24:34.732 "start": 0, 00:24:34.732 "length": 8192 00:24:34.732 }, 00:24:34.732 "queue_depth": 128, 00:24:34.732 "io_size": 4096, 00:24:34.732 "runtime": 1.051109, 00:24:34.732 "iops": 2077.8054416811196, 00:24:34.732 "mibps": 8.116427506566874, 00:24:34.732 "io_failed": 0, 00:24:34.732 "io_timeout": 0, 00:24:34.732 "avg_latency_us": 60262.86816985484, 00:24:34.732 "min_latency_us": 8689.588148148148, 00:24:34.732 "max_latency_us": 55535.69185185185 00:24:34.732 } 00:24:34.732 ], 00:24:34.732 "core_count": 1 00:24:34.732 } 00:24:34.990 02:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:24:34.990 02:44:43 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:24:34.990 02:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:24:34.990 02:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:24:34.990 02:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:24:34.990 02:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:24:34.990 02:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:34.990 02:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:24:34.990 02:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:24:34.990 02:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:24:34.990 02:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:34.990 nvmf_trace.0 00:24:34.990 02:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:24:34.990 02:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 3007432 00:24:34.990 02:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3007432 ']' 00:24:34.990 02:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3007432 00:24:34.990 02:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:34.990 02:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:34.990 02:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o 
comm= 3007432 00:24:34.990 02:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:34.990 02:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:34.990 02:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3007432' 00:24:34.990 killing process with pid 3007432 00:24:34.990 02:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3007432 00:24:34.990 Received shutdown signal, test time was about 1.000000 seconds 00:24:34.990 00:24:34.990 Latency(us) 00:24:34.990 [2024-11-17T01:44:43.450Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:34.990 [2024-11-17T01:44:43.450Z] =================================================================================================================== 00:24:34.990 [2024-11-17T01:44:43.450Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:34.990 02:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3007432 00:24:35.925 02:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:24:35.925 02:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:35.925 02:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:24:35.925 02:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:35.925 02:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:24:35.925 02:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:35.925 02:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:35.925 rmmod nvme_tcp 00:24:35.925 rmmod nvme_fabrics 00:24:35.925 rmmod nvme_keyring 00:24:35.925 02:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:24:35.925 02:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:24:35.925 02:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:24:35.925 02:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 3007281 ']' 00:24:35.925 02:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 3007281 00:24:35.925 02:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3007281 ']' 00:24:35.925 02:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3007281 00:24:35.925 02:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:35.925 02:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:35.925 02:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3007281 00:24:35.925 02:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:35.925 02:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:35.925 02:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3007281' 00:24:35.925 killing process with pid 3007281 00:24:35.925 02:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3007281 00:24:35.925 02:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3007281 00:24:37.302 02:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:37.302 02:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:37.302 02:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:37.302 02:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@297 -- # iptr 00:24:37.302 02:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:24:37.302 02:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:37.302 02:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:24:37.302 02:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:37.302 02:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:37.302 02:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:37.302 02:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:37.302 02:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:39.213 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:39.213 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.iVdewDSrbk /tmp/tmp.LFIn2WKkYx /tmp/tmp.B8nRj0KC6p 00:24:39.213 00:24:39.213 real 1m52.460s 00:24:39.213 user 3m6.388s 00:24:39.213 sys 0m26.936s 00:24:39.213 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:39.213 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:39.213 ************************************ 00:24:39.213 END TEST nvmf_tls 00:24:39.213 ************************************ 00:24:39.213 02:44:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:39.213 02:44:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:39.213 02:44:47 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:24:39.213 02:44:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:39.213 ************************************ 00:24:39.213 START TEST nvmf_fips 00:24:39.213 ************************************ 00:24:39.213 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:39.473 * Looking for test storage... 00:24:39.473 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:24:39.473 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:39.473 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:24:39.473 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:39.473 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:39.473 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:39.473 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:39.473 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:39.473 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:39.473 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:39.473 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:39.473 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:24:39.473 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:24:39.473 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:24:39.473 
02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:24:39.473 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:39.473 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:39.473 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:24:39.473 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:39.473 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:39.473 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:39.473 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:39.473 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:39.473 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:39.473 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:39.473 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:24:39.473 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:24:39.473 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:39.473 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:24:39.473 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:24:39.473 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:39.473 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:39.473 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:24:39.473 02:44:47 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:39.473 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:39.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:39.473 --rc genhtml_branch_coverage=1 00:24:39.473 --rc genhtml_function_coverage=1 00:24:39.473 --rc genhtml_legend=1 00:24:39.473 --rc geninfo_all_blocks=1 00:24:39.473 --rc geninfo_unexecuted_blocks=1 00:24:39.473 00:24:39.473 ' 00:24:39.473 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:39.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:39.473 --rc genhtml_branch_coverage=1 00:24:39.473 --rc genhtml_function_coverage=1 00:24:39.473 --rc genhtml_legend=1 00:24:39.473 --rc geninfo_all_blocks=1 00:24:39.473 --rc geninfo_unexecuted_blocks=1 00:24:39.473 00:24:39.473 ' 00:24:39.473 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:39.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:39.473 --rc genhtml_branch_coverage=1 00:24:39.473 --rc genhtml_function_coverage=1 00:24:39.473 --rc genhtml_legend=1 00:24:39.473 --rc geninfo_all_blocks=1 00:24:39.473 --rc geninfo_unexecuted_blocks=1 00:24:39.473 00:24:39.473 ' 00:24:39.473 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:39.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:39.473 --rc genhtml_branch_coverage=1 00:24:39.473 --rc genhtml_function_coverage=1 00:24:39.473 --rc genhtml_legend=1 00:24:39.473 --rc geninfo_all_blocks=1 00:24:39.473 --rc geninfo_unexecuted_blocks=1 00:24:39.473 00:24:39.473 ' 00:24:39.473 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:24:39.473 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:24:39.473 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:39.473 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:39.473 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:39.473 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:39.473 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:39.473 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:39.473 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:39.473 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:39.473 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:39.473 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:39.473 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:39.473 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:39.473 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:39.473 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:39.473 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:39.473 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:39.473 02:44:47 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:39.473 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:24:39.473 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:39.473 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:39.473 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:39.473 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.474 02:44:47 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:39.474 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@337 -- # read -ra ver2 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] 
)) 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@646 -- # type -P openssl 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:24:39.474 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:24:39.733 Error setting digest 00:24:39.733 40C2E19FC07F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:24:39.733 40C2E19FC07F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:24:39.733 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:24:39.733 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:39.733 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:39.733 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:39.733 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:24:39.733 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:39.733 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:39.733 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:39.733 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:39.733 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:39.733 02:44:47 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:39.733 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:39.733 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:39.733 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:39.733 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:39.733 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:24:39.733 02:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:41.632 02:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:41.632 02:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:24:41.632 02:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:41.632 02:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:41.632 02:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:41.632 02:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:41.632 02:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:41.632 02:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:24:41.632 02:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:41.632 02:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:24:41.632 02:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:24:41.632 02:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:24:41.632 02:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:24:41.632 02:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:24:41.632 02:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:24:41.632 02:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:41.632 02:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:41.632 02:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:41.632 02:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:41.632 02:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:41.632 02:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:41.632 02:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:41.632 02:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:41.632 02:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:41.632 02:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:41.632 02:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:41.632 02:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:41.632 02:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
00:24:41.632 02:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:41.632 02:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:41.632 02:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:41.632 02:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:41.632 02:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:41.632 02:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:41.632 02:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:41.632 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:41.632 02:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:41.632 02:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:41.632 02:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:41.632 02:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:41.632 02:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:41.632 02:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:41.632 02:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:41.632 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:41.632 02:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:41.632 02:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:41.632 02:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:24:41.632 02:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:41.632 02:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:41.632 02:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:41.632 02:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:41.632 02:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:41.632 02:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:41.632 02:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:41.632 02:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:41.632 02:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:41.632 02:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:41.632 02:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:41.632 02:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:41.632 02:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:41.632 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:41.632 02:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:41.632 02:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:41.632 02:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:41.632 02:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:24:41.632 02:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:41.632 02:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:41.632 02:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:41.632 02:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:41.632 02:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:41.632 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:41.632 02:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:41.632 02:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:41.632 02:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:24:41.632 02:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:41.632 02:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:41.632 02:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:41.632 02:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:41.632 02:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:41.632 02:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:41.632 02:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:41.632 02:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:41.632 02:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:41.632 02:44:49 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:41.632 02:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:41.632 02:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:41.632 02:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:41.632 02:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:41.632 02:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:41.632 02:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:41.632 02:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:41.632 02:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:41.891 02:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:41.891 02:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:41.891 02:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:41.891 02:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:41.891 02:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:41.891 02:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:41.891 02:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:41.891 02:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:41.891 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:41.891 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:24:41.891 00:24:41.891 --- 10.0.0.2 ping statistics --- 00:24:41.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:41.891 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:24:41.891 02:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:41.891 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:41.891 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:24:41.891 00:24:41.891 --- 10.0.0.1 ping statistics --- 00:24:41.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:41.891 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:24:41.891 02:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:41.891 02:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:24:41.891 02:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:41.891 02:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:41.891 02:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:41.891 02:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:41.891 02:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:41.891 02:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:41.891 02:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:41.891 02:44:50 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:24:41.891 02:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:41.891 02:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:41.891 02:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:41.891 02:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=3009963 00:24:41.891 02:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:41.891 02:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 3009963 00:24:41.891 02:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 3009963 ']' 00:24:41.891 02:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:41.891 02:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:41.891 02:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:41.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:41.891 02:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:41.891 02:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:42.150 [2024-11-17 02:44:50.366495] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:24:42.150 [2024-11-17 02:44:50.366647] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:42.150 [2024-11-17 02:44:50.507789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:42.408 [2024-11-17 02:44:50.647592] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:42.408 [2024-11-17 02:44:50.647672] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:42.408 [2024-11-17 02:44:50.647697] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:42.408 [2024-11-17 02:44:50.647721] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:42.408 [2024-11-17 02:44:50.647741] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:42.408 [2024-11-17 02:44:50.649382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:42.975 02:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:42.975 02:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:24:42.975 02:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:42.975 02:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:42.975 02:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:42.975 02:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:42.975 02:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:24:42.975 02:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:42.975 02:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:24:42.975 02:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.rgF 00:24:42.975 02:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:42.975 02:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.rgF 00:24:42.975 02:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.rgF 00:24:42.975 02:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.rgF 00:24:42.975 02:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:43.233 [2024-11-17 02:44:51.546317] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:43.233 [2024-11-17 02:44:51.562321] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:43.233 [2024-11-17 02:44:51.562648] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:43.233 malloc0 00:24:43.233 02:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:43.233 02:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=3010120 00:24:43.233 02:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:43.233 02:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 3010120 /var/tmp/bdevperf.sock 00:24:43.233 02:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 3010120 ']' 00:24:43.233 02:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:43.233 02:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:43.233 02:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:43.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:43.233 02:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:43.233 02:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:43.492 [2024-11-17 02:44:51.775331] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:24:43.492 [2024-11-17 02:44:51.775476] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3010120 ] 00:24:43.492 [2024-11-17 02:44:51.919340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:43.750 [2024-11-17 02:44:52.061180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:44.316 02:44:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:44.316 02:44:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:24:44.316 02:44:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.rgF 00:24:44.574 02:44:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:44.831 [2024-11-17 02:44:53.233036] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:45.088 TLSTESTn1 00:24:45.088 02:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:45.088 Running I/O for 10 seconds... 
00:24:47.394 2522.00 IOPS, 9.85 MiB/s [2024-11-17T01:44:56.787Z] 2596.50 IOPS, 10.14 MiB/s [2024-11-17T01:44:57.721Z] 2627.00 IOPS, 10.26 MiB/s [2024-11-17T01:44:58.656Z] 2638.75 IOPS, 10.31 MiB/s [2024-11-17T01:44:59.591Z] 2634.80 IOPS, 10.29 MiB/s [2024-11-17T01:45:00.524Z] 2636.67 IOPS, 10.30 MiB/s [2024-11-17T01:45:01.901Z] 2637.00 IOPS, 10.30 MiB/s [2024-11-17T01:45:02.835Z] 2641.12 IOPS, 10.32 MiB/s [2024-11-17T01:45:03.770Z] 2643.44 IOPS, 10.33 MiB/s [2024-11-17T01:45:03.770Z] 2642.90 IOPS, 10.32 MiB/s 00:24:55.310 Latency(us) 00:24:55.310 [2024-11-17T01:45:03.770Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:55.310 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:55.310 Verification LBA range: start 0x0 length 0x2000 00:24:55.310 TLSTESTn1 : 10.03 2646.45 10.34 0.00 0.00 48262.61 11990.66 37476.88 00:24:55.310 [2024-11-17T01:45:03.770Z] =================================================================================================================== 00:24:55.310 [2024-11-17T01:45:03.770Z] Total : 2646.45 10.34 0.00 0.00 48262.61 11990.66 37476.88 00:24:55.310 { 00:24:55.310 "results": [ 00:24:55.310 { 00:24:55.310 "job": "TLSTESTn1", 00:24:55.310 "core_mask": "0x4", 00:24:55.310 "workload": "verify", 00:24:55.310 "status": "finished", 00:24:55.310 "verify_range": { 00:24:55.310 "start": 0, 00:24:55.311 "length": 8192 00:24:55.311 }, 00:24:55.311 "queue_depth": 128, 00:24:55.311 "io_size": 4096, 00:24:55.311 "runtime": 10.034969, 00:24:55.311 "iops": 2646.4456442267037, 00:24:55.311 "mibps": 10.337678297760561, 00:24:55.311 "io_failed": 0, 00:24:55.311 "io_timeout": 0, 00:24:55.311 "avg_latency_us": 48262.61243095564, 00:24:55.311 "min_latency_us": 11990.660740740741, 00:24:55.311 "max_latency_us": 37476.88296296296 00:24:55.311 } 00:24:55.311 ], 00:24:55.311 "core_count": 1 00:24:55.311 } 00:24:55.311 02:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:24:55.311 
02:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:24:55.311 02:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:24:55.311 02:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:24:55.311 02:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:24:55.311 02:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:55.311 02:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:24:55.311 02:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:24:55.311 02:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:24:55.311 02:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:55.311 nvmf_trace.0 00:24:55.311 02:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:24:55.311 02:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3010120 00:24:55.311 02:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 3010120 ']' 00:24:55.311 02:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 3010120 00:24:55.311 02:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:24:55.311 02:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:55.311 02:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3010120 00:24:55.311 02:45:03 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:55.311 02:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:55.311 02:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3010120' 00:24:55.311 killing process with pid 3010120 00:24:55.311 02:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 3010120 00:24:55.311 Received shutdown signal, test time was about 10.000000 seconds 00:24:55.311 00:24:55.311 Latency(us) 00:24:55.311 [2024-11-17T01:45:03.771Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:55.311 [2024-11-17T01:45:03.771Z] =================================================================================================================== 00:24:55.311 [2024-11-17T01:45:03.771Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:55.311 02:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 3010120 00:24:56.246 02:45:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:24:56.246 02:45:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:56.246 02:45:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:24:56.246 02:45:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:56.246 02:45:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:24:56.246 02:45:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:56.246 02:45:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:56.246 rmmod nvme_tcp 00:24:56.246 rmmod nvme_fabrics 00:24:56.246 rmmod nvme_keyring 00:24:56.246 02:45:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:24:56.246 02:45:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:24:56.246 02:45:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:24:56.246 02:45:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 3009963 ']' 00:24:56.246 02:45:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 3009963 00:24:56.246 02:45:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 3009963 ']' 00:24:56.246 02:45:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 3009963 00:24:56.246 02:45:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:24:56.246 02:45:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:56.246 02:45:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3009963 00:24:56.246 02:45:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:56.246 02:45:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:56.246 02:45:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3009963' 00:24:56.246 killing process with pid 3009963 00:24:56.246 02:45:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 3009963 00:24:56.246 02:45:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 3009963 00:24:57.621 02:45:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:57.621 02:45:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:57.621 02:45:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:57.621 02:45:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@297 -- # iptr 00:24:57.621 02:45:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:24:57.621 02:45:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:57.621 02:45:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:24:57.621 02:45:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:57.621 02:45:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:57.621 02:45:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:57.621 02:45:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:57.621 02:45:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:59.525 02:45:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:59.525 02:45:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.rgF 00:24:59.525 00:24:59.525 real 0m20.262s 00:24:59.525 user 0m27.762s 00:24:59.525 sys 0m5.253s 00:24:59.525 02:45:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:59.525 02:45:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:59.525 ************************************ 00:24:59.525 END TEST nvmf_fips 00:24:59.525 ************************************ 00:24:59.525 02:45:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:24:59.525 02:45:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:59.525 02:45:07 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:24:59.525 02:45:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:59.525 ************************************ 00:24:59.525 START TEST nvmf_control_msg_list 00:24:59.525 ************************************ 00:24:59.525 02:45:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:24:59.784 * Looking for test storage... 00:24:59.784 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:59.784 02:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:59.784 02:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:24:59.784 02:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:59.784 02:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:59.784 02:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:59.784 02:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:59.784 02:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:59.784 02:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:24:59.784 02:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:24:59.784 02:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:24:59.784 02:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:24:59.784 02:45:08 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:24:59.784 02:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:24:59.784 02:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:24:59.784 02:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:59.784 02:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:24:59.784 02:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:24:59.784 02:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:59.784 02:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:59.784 02:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:24:59.784 02:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:24:59.784 02:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:59.784 02:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:24:59.784 02:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:24:59.784 02:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:24:59.784 02:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:24:59.784 02:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:59.784 02:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:24:59.784 02:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- scripts/common.sh@366 -- # ver2[v]=2 00:24:59.784 02:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:59.784 02:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:59.784 02:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:24:59.784 02:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:59.784 02:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:59.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:59.784 --rc genhtml_branch_coverage=1 00:24:59.784 --rc genhtml_function_coverage=1 00:24:59.784 --rc genhtml_legend=1 00:24:59.784 --rc geninfo_all_blocks=1 00:24:59.785 --rc geninfo_unexecuted_blocks=1 00:24:59.785 00:24:59.785 ' 00:24:59.785 02:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:59.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:59.785 --rc genhtml_branch_coverage=1 00:24:59.785 --rc genhtml_function_coverage=1 00:24:59.785 --rc genhtml_legend=1 00:24:59.785 --rc geninfo_all_blocks=1 00:24:59.785 --rc geninfo_unexecuted_blocks=1 00:24:59.785 00:24:59.785 ' 00:24:59.785 02:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:59.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:59.785 --rc genhtml_branch_coverage=1 00:24:59.785 --rc genhtml_function_coverage=1 00:24:59.785 --rc genhtml_legend=1 00:24:59.785 --rc geninfo_all_blocks=1 00:24:59.785 --rc geninfo_unexecuted_blocks=1 00:24:59.785 00:24:59.785 ' 00:24:59.785 02:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # 
LCOV='lcov 00:24:59.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:59.785 --rc genhtml_branch_coverage=1 00:24:59.785 --rc genhtml_function_coverage=1 00:24:59.785 --rc genhtml_legend=1 00:24:59.785 --rc geninfo_all_blocks=1 00:24:59.785 --rc geninfo_unexecuted_blocks=1 00:24:59.785 00:24:59.785 ' 00:24:59.785 02:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:59.785 02:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:24:59.785 02:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:59.785 02:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:59.785 02:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:59.785 02:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:59.785 02:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:59.785 02:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:59.785 02:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:59.785 02:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:59.785 02:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:59.785 02:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:59.785 02:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
00:24:59.785 02:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:59.785 02:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:59.785 02:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:59.785 02:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:59.785 02:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:59.785 02:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:59.785 02:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:24:59.785 02:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:59.785 02:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:59.785 02:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:59.785 02:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.785 02:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.785 02:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.785 02:45:08 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:24:59.785 02:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.785 02:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:24:59.785 02:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:59.785 02:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:59.785 02:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:59.785 02:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:59.785 02:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:59.785 02:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:59.785 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:59.785 02:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:59.785 02:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:59.785 02:45:08 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:59.785 02:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:24:59.785 02:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:59.785 02:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:59.785 02:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:59.785 02:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:59.785 02:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:59.785 02:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:59.785 02:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:59.785 02:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:59.785 02:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:59.785 02:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:59.785 02:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:24:59.785 02:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:01.696 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:01.696 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:25:01.696 02:45:10 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:01.696 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:01.696 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:01.696 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:01.696 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:01.696 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:25:01.696 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:01.696 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:25:01.696 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:25:01.696 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:25:01.696 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:25:01.696 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:25:01.696 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:25:01.696 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:01.696 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:01.696 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:01.696 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:01.696 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:01.696 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:01.696 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:01.696 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:01.696 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:01.696 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:01.696 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:01.696 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:01.696 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:01.696 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:01.696 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:01.696 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:01.696 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:01.696 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:01.696 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:25:01.696 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:01.696 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:01.696 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:01.696 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:01.696 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:01.697 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:01.697 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:01.697 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:01.697 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:01.697 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:01.697 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:01.697 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:01.697 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:01.697 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:01.697 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:01.697 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:01.697 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:01.697 02:45:10 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:01.697 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:01.697 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:01.697 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:01.697 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:01.697 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:01.697 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:01.697 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:01.697 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:01.697 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:01.697 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:01.697 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:01.697 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:01.697 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:01.697 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:01.697 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:01.697 02:45:10 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:01.697 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:01.697 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:01.697 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:01.697 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:01.697 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:01.697 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:25:01.697 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:01.697 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:01.697 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:01.697 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:01.697 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:01.697 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:01.697 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:01.697 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:01.697 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:01.697 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:01.697 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:01.697 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:01.697 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:01.697 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:01.697 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:01.697 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:01.697 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:01.697 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:02.027 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:02.027 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:02.027 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:02.027 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:02.027 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:02.027 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:02.027 02:45:10 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:02.027 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:02.027 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:02.027 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.287 ms 00:25:02.027 00:25:02.027 --- 10.0.0.2 ping statistics --- 00:25:02.027 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:02.027 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:25:02.027 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:02.027 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:02.027 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:25:02.027 00:25:02.027 --- 10.0.0.1 ping statistics --- 00:25:02.027 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:02.027 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:25:02.027 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:02.027 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:25:02.027 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:02.027 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:02.027 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:02.027 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:02.027 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:25:02.027 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:02.028 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:02.028 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:25:02.028 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:02.028 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:02.028 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:02.028 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=3014380 00:25:02.028 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:02.028 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 3014380 00:25:02.028 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 3014380 ']' 00:25:02.028 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:02.028 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:02.028 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:02.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:02.028 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:02.028 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:02.028 [2024-11-17 02:45:10.357803] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:25:02.028 [2024-11-17 02:45:10.357955] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:02.286 [2024-11-17 02:45:10.511507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:02.286 [2024-11-17 02:45:10.647556] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:02.286 [2024-11-17 02:45:10.647644] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:02.286 [2024-11-17 02:45:10.647675] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:02.286 [2024-11-17 02:45:10.647700] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:02.286 [2024-11-17 02:45:10.647720] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:02.286 [2024-11-17 02:45:10.649331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:03.222 02:45:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:03.222 02:45:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:25:03.222 02:45:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:03.222 02:45:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:03.222 02:45:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:03.222 02:45:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:03.222 02:45:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:25:03.222 02:45:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:25:03.222 02:45:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:25:03.222 02:45:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.222 02:45:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:03.222 [2024-11-17 02:45:11.376009] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:03.222 02:45:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.222 02:45:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:25:03.222 02:45:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.222 02:45:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:03.222 02:45:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.222 02:45:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:25:03.222 02:45:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.222 02:45:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:03.222 Malloc0 00:25:03.222 02:45:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.222 02:45:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:25:03.222 02:45:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.222 02:45:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:03.222 02:45:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.222 02:45:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:03.222 02:45:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.222 02:45:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:03.222 [2024-11-17 02:45:11.447433] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:03.222 02:45:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.222 02:45:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=3014538 00:25:03.222 02:45:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:03.222 02:45:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=3014539 00:25:03.222 02:45:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:03.222 02:45:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=3014540 00:25:03.223 02:45:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:03.223 02:45:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 3014538 00:25:03.223 [2024-11-17 02:45:11.567432] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:25:03.223 [2024-11-17 02:45:11.567967] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:03.223 [2024-11-17 02:45:11.576977] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:04.599 Initializing NVMe Controllers 00:25:04.599 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:04.599 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:25:04.599 Initialization complete. Launching workers. 00:25:04.599 ======================================================== 00:25:04.599 Latency(us) 00:25:04.599 Device Information : IOPS MiB/s Average min max 00:25:04.599 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 40890.86 40597.47 41029.76 00:25:04.599 ======================================================== 00:25:04.599 Total : 25.00 0.10 40890.86 40597.47 41029.76 00:25:04.599 00:25:04.599 Initializing NVMe Controllers 00:25:04.599 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:04.599 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:25:04.599 Initialization complete. Launching workers. 
00:25:04.599 ======================================================== 00:25:04.599 Latency(us) 00:25:04.599 Device Information : IOPS MiB/s Average min max 00:25:04.599 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3809.00 14.88 261.88 237.58 1448.55 00:25:04.599 ======================================================== 00:25:04.599 Total : 3809.00 14.88 261.88 237.58 1448.55 00:25:04.599 00:25:04.599 02:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 3014539 00:25:04.599 02:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 3014540 00:25:04.599 Initializing NVMe Controllers 00:25:04.599 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:04.599 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:25:04.599 Initialization complete. Launching workers. 00:25:04.599 ======================================================== 00:25:04.599 Latency(us) 00:25:04.599 Device Information : IOPS MiB/s Average min max 00:25:04.599 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 40898.61 40751.33 41040.21 00:25:04.599 ======================================================== 00:25:04.599 Total : 25.00 0.10 40898.61 40751.33 41040.21 00:25:04.599 00:25:04.599 02:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:25:04.599 02:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:25:04.599 02:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:04.599 02:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:25:04.599 02:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:04.599 02:45:12 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:25:04.599 02:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:04.599 02:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:04.599 rmmod nvme_tcp 00:25:04.599 rmmod nvme_fabrics 00:25:04.599 rmmod nvme_keyring 00:25:04.599 02:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:04.599 02:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:25:04.599 02:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:25:04.599 02:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 3014380 ']' 00:25:04.599 02:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 3014380 00:25:04.599 02:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 3014380 ']' 00:25:04.599 02:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 3014380 00:25:04.599 02:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:25:04.599 02:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:04.599 02:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3014380 00:25:04.599 02:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:04.599 02:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:04.599 02:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- 
# echo 'killing process with pid 3014380' 00:25:04.599 killing process with pid 3014380 00:25:04.599 02:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 3014380 00:25:04.599 02:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 3014380 00:25:05.973 02:45:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:05.973 02:45:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:05.973 02:45:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:05.973 02:45:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:25:05.973 02:45:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:25:05.973 02:45:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:25:05.973 02:45:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:05.973 02:45:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:05.973 02:45:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:05.973 02:45:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:05.973 02:45:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:05.973 02:45:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:07.877 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:07.877 00:25:07.877 real 0m8.320s 00:25:07.877 user 0m7.910s 
00:25:07.877 sys 0m2.930s 00:25:07.877 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:07.877 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:07.877 ************************************ 00:25:07.877 END TEST nvmf_control_msg_list 00:25:07.877 ************************************ 00:25:07.877 02:45:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:25:07.877 02:45:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:07.877 02:45:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:07.877 02:45:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:07.877 ************************************ 00:25:07.877 START TEST nvmf_wait_for_buf 00:25:07.877 ************************************ 00:25:07.877 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:25:08.136 * Looking for test storage... 
00:25:08.136 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:08.136 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:08.136 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:25:08.136 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:08.136 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:08.136 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:08.136 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:08.136 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:08.136 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:25:08.136 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:25:08.136 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:25:08.136 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:25:08.136 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:25:08.136 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:25:08.136 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:25:08.136 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:08.137 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:25:08.137 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:25:08.137 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:08.137 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:08.137 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:25:08.137 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:25:08.137 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:08.137 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:25:08.137 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:25:08.137 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:25:08.137 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:25:08.137 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:08.137 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:25:08.137 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:25:08.137 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:08.137 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:08.137 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:25:08.137 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:08.137 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # 
export 'LCOV_OPTS= 00:25:08.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:08.137 --rc genhtml_branch_coverage=1 00:25:08.137 --rc genhtml_function_coverage=1 00:25:08.137 --rc genhtml_legend=1 00:25:08.137 --rc geninfo_all_blocks=1 00:25:08.137 --rc geninfo_unexecuted_blocks=1 00:25:08.137 00:25:08.137 ' 00:25:08.137 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:08.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:08.137 --rc genhtml_branch_coverage=1 00:25:08.137 --rc genhtml_function_coverage=1 00:25:08.137 --rc genhtml_legend=1 00:25:08.137 --rc geninfo_all_blocks=1 00:25:08.137 --rc geninfo_unexecuted_blocks=1 00:25:08.137 00:25:08.137 ' 00:25:08.137 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:08.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:08.137 --rc genhtml_branch_coverage=1 00:25:08.137 --rc genhtml_function_coverage=1 00:25:08.137 --rc genhtml_legend=1 00:25:08.137 --rc geninfo_all_blocks=1 00:25:08.137 --rc geninfo_unexecuted_blocks=1 00:25:08.137 00:25:08.137 ' 00:25:08.137 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:08.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:08.137 --rc genhtml_branch_coverage=1 00:25:08.137 --rc genhtml_function_coverage=1 00:25:08.137 --rc genhtml_legend=1 00:25:08.137 --rc geninfo_all_blocks=1 00:25:08.137 --rc geninfo_unexecuted_blocks=1 00:25:08.137 00:25:08.137 ' 00:25:08.137 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:08.137 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:25:08.137 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:25:08.137 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:08.137 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:08.137 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:08.137 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:08.137 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:08.137 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:08.137 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:08.137 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:08.137 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:08.137 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:08.137 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:08.137 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:08.137 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:08.137 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:08.137 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:08.137 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:08.137 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:25:08.137 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:08.137 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:08.137 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:08.137 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.137 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.137 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.137 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:25:08.137 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.137 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:25:08.137 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:08.137 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:08.137 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:08.137 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:25:08.137 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:08.137 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:08.137 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:08.137 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:08.137 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:08.137 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:08.137 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:25:08.137 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:08.137 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:08.137 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:08.137 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:08.137 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:08.137 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:08.137 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:08.137 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:08.137 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:08.137 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:25:08.137 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:25:08.137 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:10.672 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:10.672 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:10.672 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:10.672 02:45:18 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:10.672 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:10.672 02:45:18 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:10.672 02:45:18 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:10.672 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:10.672 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:10.672 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:25:10.672 00:25:10.672 --- 10.0.0.2 ping statistics --- 00:25:10.672 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:10.673 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:25:10.673 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:10.673 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:10.673 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms 00:25:10.673 00:25:10.673 --- 10.0.0.1 ping statistics --- 00:25:10.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:10.673 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:25:10.673 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:10.673 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:25:10.673 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:10.673 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:10.673 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:10.673 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:10.673 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:10.673 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:10.673 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:10.673 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:25:10.673 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:10.673 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:10.673 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:10.673 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=3016752 00:25:10.673 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:10.673 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 3016752 00:25:10.673 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 3016752 ']' 00:25:10.673 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:10.673 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:10.673 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:10.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:10.673 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:10.673 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:10.673 [2024-11-17 02:45:18.811503] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:25:10.673 [2024-11-17 02:45:18.811661] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:10.673 [2024-11-17 02:45:18.979824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:10.673 [2024-11-17 02:45:19.119839] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:10.673 [2024-11-17 02:45:19.119932] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
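At this point `nvmfappstart` has launched `nvmf_tgt` inside the `cvl_0_0_ns_spdk` namespace with `--wait-for-rpc`, and `waitforlisten` polls `/var/tmp/spdk.sock` until the app answers. A minimal sketch of that polling loop (retry count and delay are assumptions; the real helper also checks that the pid stays alive):

```python
import os
import socket
import time

def wait_for_listen(sock_path, max_retries=100, delay=0.1):
    """Poll a UNIX-domain socket until something accepts connections on it.

    Mirrors the shape of the harness's `waitforlisten` loop; the retry count
    and delay here are illustrative, not taken from the log.
    """
    for _ in range(max_retries):
        try:
            with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
                s.connect(sock_path)
                return True
        except (FileNotFoundError, ConnectionRefusedError):
            # Socket file not created yet, or app not listening yet.
            time.sleep(delay)
    return False
```

Once this returns True, the RPC-driven setup below can proceed against the socket.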
00:25:10.673 [2024-11-17 02:45:19.119957] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:10.673 [2024-11-17 02:45:19.119981] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:10.673 [2024-11-17 02:45:19.120000] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:10.673 [2024-11-17 02:45:19.121630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:11.608 02:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:11.609 02:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:25:11.609 02:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:11.609 02:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:11.609 02:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:11.609 02:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:11.609 02:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:25:11.609 02:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:25:11.609 02:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:25:11.609 02:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.609 02:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:11.609 
02:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.609 02:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:25:11.609 02:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.609 02:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:11.609 02:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.609 02:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:25:11.609 02:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.609 02:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:11.868 02:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.868 02:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:25:11.868 02:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.868 02:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:11.868 Malloc0 00:25:11.868 02:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.868 02:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:25:11.868 02:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.868 02:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:25:11.868 [2024-11-17 02:45:20.169389] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:11.868 02:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.868 02:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:25:11.868 02:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.868 02:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:11.868 02:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.868 02:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:25:11.868 02:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.868 02:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:11.868 02:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.868 02:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:11.868 02:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.868 02:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:11.868 [2024-11-17 02:45:20.193682] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:11.868 02:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
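Before the perf run, the harness drives the target over JSON-RPC: a TCP transport with a deliberately tiny shared-buffer pool (`-u 8192 -n 24 -b 24`), the `nqn.2024-07.io.spdk:cnode0` subsystem, the `Malloc0` namespace, and a listener on 10.0.0.2:4420. A sketch of the same four calls as raw JSON-RPC 2.0 request objects — the parameter names are assumptions mapped from the usual `rpc.py` flag spellings, not taken from the log:

```python
import json

def rpc_request(req_id, method, **params):
    """Build one SPDK-style JSON-RPC 2.0 request object."""
    return {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}

# The four setup calls from the trace; -u/-n/-b are assumed to map to the
# transport's in-capsule size, shared buffer count, and per-poll-group cache.
requests = [
    rpc_request(1, "nvmf_create_transport", trtype="tcp",
                in_capsule_data_size=8192, num_shared_buffers=24,
                buf_cache_size=24),
    rpc_request(2, "nvmf_create_subsystem", nqn="nqn.2024-07.io.spdk:cnode0",
                allow_any_host=True, serial_number="SPDK00000000000001"),
    rpc_request(3, "nvmf_subsystem_add_ns", nqn="nqn.2024-07.io.spdk:cnode0",
                namespace={"bdev_name": "Malloc0"}),
    rpc_request(4, "nvmf_subsystem_add_listener", nqn="nqn.2024-07.io.spdk:cnode0",
                listen_address={"trtype": "tcp", "traddr": "10.0.0.2",
                                "trsvcid": "4420"}),
]
payload = "\n".join(json.dumps(r) for r in requests)
```

With only 24 shared buffers, the 4-deep 128 KiB random-read workload that follows is expected to exhaust the small iobuf pool, which is exactly what this test wants to provoke.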
00:25:11.868 02:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:12.126 [2024-11-17 02:45:20.349319] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:13.500 Initializing NVMe Controllers 00:25:13.500 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:13.500 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:25:13.500 Initialization complete. Launching workers. 00:25:13.500 ======================================================== 00:25:13.500 Latency(us) 00:25:13.500 Device Information : IOPS MiB/s Average min max 00:25:13.500 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 128.67 16.08 32189.95 7898.88 63862.21 00:25:13.500 ======================================================== 00:25:13.500 Total : 128.67 16.08 32189.95 7898.88 63862.21 00:25:13.500 00:25:13.500 02:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:25:13.500 02:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:25:13.500 02:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.500 02:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:13.500 02:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.500 02:45:21 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:25:13.500 02:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:25:13.500 02:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:25:13.500 02:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:25:13.500 02:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:13.500 02:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:25:13.500 02:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:13.500 02:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:25:13.500 02:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:13.500 02:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:13.500 rmmod nvme_tcp 00:25:13.500 rmmod nvme_fabrics 00:25:13.500 rmmod nvme_keyring 00:25:13.500 02:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:13.500 02:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:25:13.500 02:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:25:13.500 02:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 3016752 ']' 00:25:13.500 02:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 3016752 00:25:13.501 02:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 3016752 ']' 00:25:13.501 02:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 3016752 
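The pass/fail gate at `target/wait_for_buf.sh@32-33` reads `iobuf_get_stats` and extracts the nvmf_TCP small-pool retry counter with `jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry'`, failing if it is zero; here `retry_count=2038`, so the buffer-wait path was exercised and the test passes. The same extraction in Python, run against an illustrative payload (the sample below is not the run's actual output):

```python
import json

# Illustrative iobuf_get_stats payload; only the fields the check uses.
sample = json.loads("""
[
  {"module": "accel",    "small_pool": {"retry": 0},    "large_pool": {"retry": 0}},
  {"module": "nvmf_TCP", "small_pool": {"retry": 2038}, "large_pool": {"retry": 0}}
]
""")

def small_pool_retries(stats, module="nvmf_TCP"):
    """Equivalent of: jq '.[] | select(.module == "nvmf_TCP") | .small_pool.retry'"""
    for entry in stats:
        if entry["module"] == module:
            return entry["small_pool"]["retry"]
    return None

retry_count = small_pool_retries(sample)
# The test passes only if the deliberately starved pool was forced to retry.
assert retry_count != 0
```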
00:25:13.501 02:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:25:13.501 02:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:13.501 02:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3016752 00:25:13.501 02:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:13.501 02:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:13.501 02:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3016752' 00:25:13.501 killing process with pid 3016752 00:25:13.501 02:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 3016752 00:25:13.501 02:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 3016752 00:25:14.436 02:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:14.436 02:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:14.436 02:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:14.436 02:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:25:14.436 02:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:25:14.436 02:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:14.436 02:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:25:14.696 02:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:14.696 02:45:22 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:14.696 02:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:14.696 02:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:14.696 02:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:16.605 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:16.605 00:25:16.605 real 0m8.612s 00:25:16.605 user 0m5.125s 00:25:16.605 sys 0m2.259s 00:25:16.605 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:16.605 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:16.605 ************************************ 00:25:16.605 END TEST nvmf_wait_for_buf 00:25:16.605 ************************************ 00:25:16.605 02:45:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:25:16.605 02:45:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:25:16.605 02:45:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:16.605 02:45:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:16.605 02:45:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:16.605 ************************************ 00:25:16.605 START TEST nvmf_fuzz 00:25:16.605 ************************************ 00:25:16.605 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh 
--transport=tcp 00:25:16.605 * Looking for test storage... 00:25:16.605 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:16.605 02:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:16.605 02:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:25:16.605 02:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:16.864 02:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:16.864 02:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:16.864 02:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:16.864 02:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:16.864 02:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:25:16.864 02:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:25:16.865 02:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:25:16.865 02:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:25:16.865 02:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:25:16.865 02:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:25:16.865 02:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:25:16.865 02:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:16.865 02:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:25:16.865 02:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:25:16.865 02:45:25 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:16.865 02:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:16.865 02:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:25:16.865 02:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:25:16.865 02:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:16.865 02:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:25:16.865 02:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:25:16.865 02:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:25:16.865 02:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:25:16.865 02:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:16.865 02:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:25:16.865 02:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:25:16.865 02:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:16.865 02:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:16.865 02:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:25:16.865 02:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:16.865 02:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:16.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:16.865 --rc genhtml_branch_coverage=1 00:25:16.865 --rc genhtml_function_coverage=1 
00:25:16.865 --rc genhtml_legend=1 00:25:16.865 --rc geninfo_all_blocks=1 00:25:16.865 --rc geninfo_unexecuted_blocks=1 00:25:16.865 00:25:16.865 ' 00:25:16.865 02:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:16.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:16.865 --rc genhtml_branch_coverage=1 00:25:16.865 --rc genhtml_function_coverage=1 00:25:16.865 --rc genhtml_legend=1 00:25:16.865 --rc geninfo_all_blocks=1 00:25:16.865 --rc geninfo_unexecuted_blocks=1 00:25:16.865 00:25:16.865 ' 00:25:16.865 02:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:16.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:16.865 --rc genhtml_branch_coverage=1 00:25:16.865 --rc genhtml_function_coverage=1 00:25:16.865 --rc genhtml_legend=1 00:25:16.865 --rc geninfo_all_blocks=1 00:25:16.865 --rc geninfo_unexecuted_blocks=1 00:25:16.865 00:25:16.865 ' 00:25:16.865 02:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:16.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:16.865 --rc genhtml_branch_coverage=1 00:25:16.865 --rc genhtml_function_coverage=1 00:25:16.865 --rc genhtml_legend=1 00:25:16.865 --rc geninfo_all_blocks=1 00:25:16.865 --rc geninfo_unexecuted_blocks=1 00:25:16.865 00:25:16.865 ' 00:25:16.865 02:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:16.865 02:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:25:16.865 02:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:16.865 02:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:16.865 02:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:16.865 
02:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:16.865 02:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:16.865 02:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:16.865 02:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:16.865 02:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:16.865 02:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:16.865 02:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:16.865 02:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:16.865 02:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:16.865 02:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:16.865 02:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:16.865 02:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:16.865 02:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:16.865 02:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:16.865 02:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:25:16.865 02:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:16.865 02:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:16.865 02:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:16.865 02:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.865 02:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.865 02:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.865 02:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:25:16.865 02:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.865 02:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:25:16.865 02:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:16.865 02:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:16.865 02:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:16.865 02:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:16.865 02:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:16.865 02:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:16.865 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:16.865 02:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:16.865 02:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:16.865 02:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:16.865 02:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:25:16.865 02:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:16.865 02:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:16.865 02:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:16.865 02:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:16.865 02:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:16.865 02:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:16.865 02:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:16.865 02:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:16.865 02:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:16.865 02:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:16.865 02:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@309 -- # xtrace_disable 00:25:16.866 02:45:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
common/autotest_common.sh@10 -- # set +x 00:25:18.770 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:18.770 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # pci_devs=() 00:25:18.770 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:18.770 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:18.770 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:18.770 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:18.770 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:18.770 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # net_devs=() 00:25:18.770 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:18.770 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # e810=() 00:25:18.770 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # local -ga e810 00:25:18.770 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # x722=() 00:25:18.770 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # local -ga x722 00:25:18.770 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # mlx=() 00:25:18.770 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # local -ga mlx 00:25:18.770 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:18.770 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:18.770 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:18.770 02:45:27 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:18.770 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:18.770 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:18.770 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:18.770 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:18.770 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:18.770 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:18.770 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:18.770 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:18.770 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:18.770 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:18.770 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:18.770 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:18.770 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:18.770 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:18.770 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:18.770 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 
0000:0a:00.0 (0x8086 - 0x159b)' 00:25:18.770 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:18.770 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:18.770 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:18.770 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:18.770 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:18.770 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:18.770 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:18.770 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:18.770 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:18.770 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:18.770 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:18.770 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:18.770 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:18.770 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:18.770 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:18.770 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:18.770 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:18.770 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:18.770 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:18.770 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:18.770 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:18.770 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:18.770 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:18.770 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:18.770 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:18.770 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:18.770 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:18.770 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:18.770 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:18.770 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:18.770 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:18.770 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:18.770 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:18.770 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:18.770 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:18.770 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:18.770 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:25:18.770 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:18.770 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # is_hw=yes 00:25:18.770 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:18.770 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:18.770 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:18.770 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:18.770 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:18.770 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:18.770 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:18.770 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:18.770 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:18.770 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:18.770 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:18.770 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:18.770 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:18.770 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:18.770 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:18.770 02:45:27 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:18.770 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:18.770 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:19.029 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:19.029 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:19.029 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:19.029 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:19.029 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:19.029 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:19.029 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:19.029 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:19.030 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:19.030 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.316 ms 00:25:19.030 00:25:19.030 --- 10.0.0.2 ping statistics --- 00:25:19.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:19.030 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:25:19.030 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:19.030 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:19.030 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:25:19.030 00:25:19.030 --- 10.0.0.1 ping statistics --- 00:25:19.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:19.030 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:25:19.030 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:19.030 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # return 0 00:25:19.030 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:19.030 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:19.030 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:19.030 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:19.030 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:19.030 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:19.030 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:19.030 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=3019179 00:25:19.030 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:25:19.030 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:19.030 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 3019179 00:25:19.030 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # '[' 
-z 3019179 ']' 00:25:19.030 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:19.030 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:19.030 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:19.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:19.030 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:19.030 02:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:19.964 02:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:19.964 02:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@868 -- # return 0 00:25:19.964 02:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:19.964 02:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.964 02:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:19.964 02:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.964 02:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:25:19.964 02:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.964 02:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:20.222 Malloc0 00:25:20.222 02:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.222 02:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:20.222 02:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.222 02:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:20.222 02:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.222 02:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:20.222 02:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.222 02:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:20.222 02:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.222 02:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:20.222 02:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.222 02:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:20.222 02:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.222 02:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:25:20.222 02:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:25:52.293 Fuzzing completed. 
Shutting down the fuzz application 00:25:52.293 00:25:52.293 Dumping successful admin opcodes: 00:25:52.293 8, 9, 10, 24, 00:25:52.293 Dumping successful io opcodes: 00:25:52.293 0, 9, 00:25:52.293 NS: 0x2000008efec0 I/O qp, Total commands completed: 327739, total successful commands: 1942, random_seed: 623416064 00:25:52.293 NS: 0x2000008efec0 admin qp, Total commands completed: 41280, total successful commands: 337, random_seed: 2975807872 00:25:52.293 02:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:25:52.860 Fuzzing completed. Shutting down the fuzz application 00:25:52.860 00:25:52.860 Dumping successful admin opcodes: 00:25:52.860 24, 00:25:52.860 Dumping successful io opcodes: 00:25:52.860 00:25:52.860 NS: 0x2000008efec0 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 3537560688 00:25:52.860 NS: 0x2000008efec0 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 3537782159 00:25:52.860 02:46:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:52.860 02:46:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.860 02:46:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:52.860 02:46:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.860 02:46:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:25:52.860 02:46:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:25:52.860 02:46:01 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:52.860 02:46:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:25:52.860 02:46:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:52.860 02:46:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:25:52.860 02:46:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:52.861 02:46:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:52.861 rmmod nvme_tcp 00:25:52.861 rmmod nvme_fabrics 00:25:52.861 rmmod nvme_keyring 00:25:52.861 02:46:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:52.861 02:46:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:25:52.861 02:46:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:25:52.861 02:46:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@517 -- # '[' -n 3019179 ']' 00:25:52.861 02:46:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@518 -- # killprocess 3019179 00:25:52.861 02:46:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # '[' -z 3019179 ']' 00:25:52.861 02:46:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # kill -0 3019179 00:25:52.861 02:46:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # uname 00:25:52.861 02:46:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:52.861 02:46:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3019179 00:25:52.861 02:46:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:52.861 02:46:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = 
sudo ']' 00:25:52.861 02:46:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3019179' 00:25:52.861 killing process with pid 3019179 00:25:52.861 02:46:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@973 -- # kill 3019179 00:25:52.861 02:46:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@978 -- # wait 3019179 00:25:54.236 02:46:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:54.236 02:46:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:54.236 02:46:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:54.236 02:46:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:25:54.236 02:46:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-save 00:25:54.236 02:46:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:54.236 02:46:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-restore 00:25:54.236 02:46:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:54.236 02:46:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:54.236 02:46:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:54.236 02:46:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:54.236 02:46:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:56.776 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:56.776 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:25:56.776 00:25:56.776 real 0m39.656s 00:25:56.776 user 0m57.451s 00:25:56.776 sys 0m13.043s 00:25:56.776 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:56.776 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:56.776 ************************************ 00:25:56.776 END TEST nvmf_fuzz 00:25:56.776 ************************************ 00:25:56.776 02:46:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:56.776 02:46:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:56.776 02:46:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:56.776 02:46:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:56.776 ************************************ 00:25:56.776 START TEST nvmf_multiconnection 00:25:56.776 ************************************ 00:25:56.776 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:56.776 * Looking for test storage... 
00:25:56.776 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:56.776 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:56.776 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # lcov --version 00:25:56.776 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:56.776 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:56.776 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:56.776 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:56.776 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:56.776 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:25:56.776 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:25:56.776 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:25:56.776 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:25:56.776 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:25:56.776 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:25:56.776 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:25:56.776 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:56.776 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:25:56.776 02:46:04 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:25:56.776 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:56.776 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:56.776 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:25:56.776 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:25:56.776 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:56.776 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:25:56.776 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:25:56.776 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:25:56.776 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:25:56.776 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:56.776 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:25:56.776 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:25:56.776 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:56.776 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:56.776 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:25:56.776 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:25:56.776 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:56.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:56.776 --rc genhtml_branch_coverage=1 00:25:56.776 --rc genhtml_function_coverage=1 00:25:56.776 --rc genhtml_legend=1 00:25:56.776 --rc geninfo_all_blocks=1 00:25:56.776 --rc geninfo_unexecuted_blocks=1 00:25:56.776 00:25:56.776 ' 00:25:56.776 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:56.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:56.776 --rc genhtml_branch_coverage=1 00:25:56.777 --rc genhtml_function_coverage=1 00:25:56.777 --rc genhtml_legend=1 00:25:56.777 --rc geninfo_all_blocks=1 00:25:56.777 --rc geninfo_unexecuted_blocks=1 00:25:56.777 00:25:56.777 ' 00:25:56.777 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:56.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:56.777 --rc genhtml_branch_coverage=1 00:25:56.777 --rc genhtml_function_coverage=1 00:25:56.777 --rc genhtml_legend=1 00:25:56.777 --rc geninfo_all_blocks=1 00:25:56.777 --rc geninfo_unexecuted_blocks=1 00:25:56.777 00:25:56.777 ' 00:25:56.777 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:56.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:56.777 --rc genhtml_branch_coverage=1 00:25:56.777 --rc genhtml_function_coverage=1 00:25:56.777 --rc genhtml_legend=1 00:25:56.777 --rc geninfo_all_blocks=1 00:25:56.777 --rc geninfo_unexecuted_blocks=1 00:25:56.777 00:25:56.777 ' 00:25:56.777 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:56.777 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@7 -- # uname -s 00:25:56.777 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:56.777 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:56.777 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:56.777 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:56.777 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:56.777 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:56.777 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:56.777 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:56.777 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:56.777 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:56.777 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:56.777 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:56.777 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:56.777 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:56.777 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:56.777 02:46:04 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:56.777 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:56.777 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:25:56.777 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:56.777 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:56.777 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:56.777 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.777 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.777 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.777 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:25:56.777 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.777 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:25:56.777 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:56.777 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:56.777 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:56.777 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:56.777 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:56.777 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:56.777 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:56.777 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:56.777 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:56.777 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:56.777 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:25:56.777 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:56.777 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:25:56.777 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:25:56.777 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:56.777 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:56.777 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:56.777 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:56.777 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:56.777 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:56.777 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:56.777 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:56.777 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:56.777 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:56.777 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@309 -- # xtrace_disable 00:25:56.777 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.713 02:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:25:58.713 02:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # pci_devs=() 00:25:58.713 02:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:58.713 02:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:58.713 02:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:58.713 02:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:58.713 02:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:58.713 02:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # net_devs=() 00:25:58.713 02:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:58.713 02:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # e810=() 00:25:58.713 02:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # local -ga e810 00:25:58.713 02:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # x722=() 00:25:58.713 02:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # local -ga x722 00:25:58.713 02:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # mlx=() 00:25:58.713 02:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # local -ga mlx 00:25:58.713 02:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:58.713 02:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:58.713 02:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:58.713 02:46:06 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:58.713 02:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:58.713 02:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:58.713 02:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:58.713 02:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:58.713 02:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:58.713 02:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:58.713 02:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:58.713 02:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:58.713 02:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:58.713 02:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:58.713 02:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:58.713 02:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:58.713 02:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:58.713 02:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:58.713 02:46:06 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:58.713 02:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:58.713 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:58.713 02:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:58.713 02:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:58.713 02:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:58.713 02:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:58.713 02:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:58.713 02:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:58.713 02:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:58.713 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:58.713 02:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:58.713 02:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:58.713 02:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:58.713 02:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:58.713 02:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:58.713 02:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:58.713 02:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:58.713 02:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:58.713 02:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:58.713 02:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:58.713 02:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:58.713 02:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:58.713 02:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:58.713 02:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:58.713 02:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:58.713 02:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:58.713 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:58.713 02:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:58.714 02:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:58.714 02:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:58.714 02:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:58.714 02:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:58.714 02:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == 
up ]] 00:25:58.714 02:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:58.714 02:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:58.714 02:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:58.714 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:58.714 02:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:58.714 02:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:58.714 02:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # is_hw=yes 00:25:58.714 02:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:58.714 02:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:58.714 02:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:58.714 02:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:58.714 02:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:58.714 02:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:58.714 02:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:58.714 02:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:58.714 02:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:58.714 02:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:58.714 02:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:58.714 02:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:58.714 02:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:58.714 02:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:58.714 02:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:58.714 02:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:58.714 02:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:58.714 02:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:58.714 02:46:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:58.714 02:46:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:58.995 02:46:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:58.995 02:46:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:58.995 02:46:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:58.995 02:46:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:58.996 02:46:07 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:58.996 02:46:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:58.996 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:58.996 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:25:58.996 00:25:58.996 --- 10.0.0.2 ping statistics --- 00:25:58.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:58.996 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:25:58.996 02:46:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:58.996 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:58.996 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:25:58.996 00:25:58.996 --- 10.0.0.1 ping statistics --- 00:25:58.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:58.996 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:25:58.996 02:46:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:58.996 02:46:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # return 0 00:25:58.996 02:46:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:58.996 02:46:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:58.996 02:46:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:58.996 02:46:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:58.996 02:46:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:25:58.996 02:46:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:58.996 02:46:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:58.996 02:46:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:25:58.996 02:46:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:58.996 02:46:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:58.996 02:46:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.996 02:46:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@509 -- # nvmfpid=3025160 00:25:58.996 02:46:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:58.996 02:46:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@510 -- # waitforlisten 3025160 00:25:58.996 02:46:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # '[' -z 3025160 ']' 00:25:58.996 02:46:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:58.996 02:46:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:58.996 02:46:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:58.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:58.996 02:46:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:58.996 02:46:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.996 [2024-11-17 02:46:07.340467] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:25:58.996 [2024-11-17 02:46:07.340606] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:59.255 [2024-11-17 02:46:07.497839] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:59.255 [2024-11-17 02:46:07.645067] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:59.255 [2024-11-17 02:46:07.645175] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:59.255 [2024-11-17 02:46:07.645202] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:59.255 [2024-11-17 02:46:07.645226] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:59.255 [2024-11-17 02:46:07.645246] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:59.255 [2024-11-17 02:46:07.648105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:25:59.255 [2024-11-17 02:46:07.648151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:25:59.255 [2024-11-17 02:46:07.648182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:25:59.255 [2024-11-17 02:46:07.648188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:26:00.190 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:26:00.190 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@868 -- # return 0
00:26:00.190 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:26:00.190 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@732 -- # xtrace_disable
00:26:00.190 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:00.190 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:26:00.190 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:26:00.190 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:00.190 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:00.190 [2024-11-17 02:46:08.307747] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:26:00.190 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:00.190 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11
00:26:00.190 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:26:00.190 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:26:00.190 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:00.190 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:00.190 Malloc1
00:26:00.190 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:00.190 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1
00:26:00.190 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:00.190 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:00.190 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:00.190 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:26:00.190 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:00.190 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:00.190 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:00.190 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:26:00.190 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:00.190 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:00.190 [2024-11-17 02:46:08.433275] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:00.190 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:00.190 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:26:00.190 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2
00:26:00.190 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:00.190 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:00.190 Malloc2
00:26:00.190 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:00.190 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
00:26:00.190 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:00.190 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:00.190 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:00.190 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2
00:26:00.190 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:00.191 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:00.191 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:00.191 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:26:00.191 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:00.191 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:00.191 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:00.191 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:26:00.191 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3
00:26:00.191 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:00.191 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:00.191 Malloc3
00:26:00.191 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:00.191 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3
00:26:00.191 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:00.191 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:00.191 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:00.191 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3
00:26:00.191 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:00.191 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:00.191 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:00.191 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420
00:26:00.191 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:00.191 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:00.191 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:00.191 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:26:00.191 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4
00:26:00.191 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:00.191 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:00.450 Malloc4
00:26:00.450 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:00.450 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4
00:26:00.450 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:00.450 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:00.450 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:00.450 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4
00:26:00.450 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:00.450 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:00.450 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:00.450 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420
00:26:00.450 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:00.450 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:00.450 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:00.450 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:26:00.450 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5
00:26:00.450 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:00.450 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:00.450 Malloc5
00:26:00.450 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:00.450 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5
00:26:00.450 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:00.450 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:00.450 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:00.450 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5
00:26:00.450 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:00.450 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:00.450 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:00.450 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420
00:26:00.450 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:00.450 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:00.450 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:00.450 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:26:00.450 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6
00:26:00.450 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:00.450 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:00.709 Malloc6
00:26:00.709 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:00.709 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6
00:26:00.709 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:00.709 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:00.709 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:00.709 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6
00:26:00.709 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:00.709 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:00.709 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:00.709 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420
00:26:00.709 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:00.709 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:00.709 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:00.709 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:26:00.709 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7
00:26:00.709 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:00.709 02:46:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:00.709 Malloc7
00:26:00.709 02:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:00.709 02:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7
00:26:00.709 02:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:00.709 02:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:00.709 02:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:00.709 02:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7
00:26:00.709 02:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:00.710 02:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:00.710 02:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:00.710 02:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420
00:26:00.710 02:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:00.710 02:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:00.710 02:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:00.710 02:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:26:00.710 02:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8
00:26:00.710 02:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:00.710 02:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:00.710 Malloc8
00:26:00.710 02:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:00.710 02:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8
00:26:00.710 02:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:00.710 02:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:00.710 02:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:00.710 02:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8
00:26:00.710 02:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:00.710 02:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:00.710 02:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:00.710 02:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420
00:26:00.710 02:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:00.710 02:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:00.710 02:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:00.710 02:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:26:00.710 02:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9
00:26:00.710 02:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:00.710 02:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:00.967 Malloc9
00:26:00.967 02:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:00.967 02:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9
00:26:00.967 02:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:00.967 02:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:00.968 02:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:00.968 02:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9
00:26:00.968 02:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:00.968 02:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:00.968 02:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:00.968 02:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420
00:26:00.968 02:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:00.968 02:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:00.968 02:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:00.968 02:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:26:00.968 02:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10
00:26:00.968 02:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:00.968 02:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:00.968 Malloc10
00:26:00.968 02:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:00.968 02:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10
00:26:00.968 02:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:00.968 02:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:00.968 02:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:00.968 02:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10
00:26:00.968 02:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:00.968 02:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:00.968 02:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:00.968 02:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420
00:26:00.968 02:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:00.968 02:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:00.968 02:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:00.968 02:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:26:00.968 02:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11
00:26:00.968 02:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:00.968 02:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:01.226 Malloc11
00:26:01.226 02:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:01.226 02:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11
00:26:01.226 02:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:01.226 02:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:01.226 02:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:01.226 02:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11
00:26:01.226 02:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:01.226 02:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:01.226 02:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:01.226 02:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420
00:26:01.226 02:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:01.226 02:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:01.226 02:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:01.226 02:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11
00:26:01.226 02:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:26:01.226 02:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
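The xtrace entries above all come from lines 21-25 of target/multiconnection.sh: for each of the 11 subsystems the script creates a malloc bdev (64 MB, 512-byte blocks), creates the subsystem, attaches the bdev as a namespace, and adds a TCP listener on 10.0.0.2:4420. A runnable sketch of that loop, reconstructed from the trace rather than copied from the script; `rpc_cmd` is stubbed to print instead of dispatching to SPDK's scripts/rpc.py:

```shell
#!/usr/bin/env bash
# Sketch of the multiconnection.sh setup loop seen in the xtrace above.
# Assumption: rpc_cmd is replaced with an echo stub so this runs without
# a live SPDK target; real runs forward these calls to scripts/rpc.py.
NVMF_SUBSYS=11
rpc_cmd() { echo "rpc_cmd $*"; }

for i in $(seq 1 "$NVMF_SUBSYS"); do
    rpc_cmd bdev_malloc_create 64 512 -b "Malloc$i"                 # backing malloc bdev
    rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
    rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done
```

Each iteration prints the four RPC invocations that the trace shows for the corresponding cnode.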
00:26:01.792 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1
00:26:01.792 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0
00:26:01.792 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:26:01.792 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:26:01.792 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2
00:26:03.693 02:46:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:26:03.693 02:46:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:26:03.693 02:46:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK1
00:26:03.693 02:46:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:26:03.693 02:46:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:26:03.693 02:46:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0
00:26:03.693 02:46:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:26:03.693 02:46:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420
00:26:04.626 02:46:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2
00:26:04.626 02:46:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0
00:26:04.626 02:46:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:26:04.626 02:46:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:26:04.626 02:46:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2
00:26:06.525 02:46:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:26:06.525 02:46:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:26:06.525 02:46:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK2
00:26:06.525 02:46:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:26:06.525 02:46:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:26:06.525 02:46:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0
00:26:06.525 02:46:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:26:06.525 02:46:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420
00:26:07.092 02:46:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3
00:26:07.092 02:46:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0
00:26:07.092 02:46:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:26:07.092 02:46:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:26:07.092 02:46:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2
00:26:09.622 02:46:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:26:09.622 02:46:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:26:09.622 02:46:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK3
00:26:09.622 02:46:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:26:09.622 02:46:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:26:09.622 02:46:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0
00:26:09.622 02:46:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:26:09.622 02:46:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420
00:26:09.881 02:46:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4
00:26:09.881 02:46:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0
00:26:09.881 02:46:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:26:09.881 02:46:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:26:09.881 02:46:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2
00:26:12.411 02:46:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:26:12.411 02:46:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:26:12.411 02:46:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK4
00:26:12.411 02:46:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:26:12.411 02:46:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:26:12.411 02:46:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0
00:26:12.411 02:46:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:26:12.411 02:46:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420
00:26:12.669 02:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5
00:26:12.669 02:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0
00:26:12.669 02:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:26:12.669 02:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:26:12.669 02:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2
00:26:15.199 02:46:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:26:15.199 02:46:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:26:15.199 02:46:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK5
00:26:15.199 02:46:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:26:15.199 02:46:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:26:15.199 02:46:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0
00:26:15.199 02:46:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:26:15.199 02:46:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420
00:26:15.458 02:46:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6
00:26:15.458 02:46:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0
00:26:15.458 02:46:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:26:15.458 02:46:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:26:15.458 02:46:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2
00:26:17.986 02:46:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:26:17.986 02:46:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:26:17.986 02:46:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK6
00:26:17.986 02:46:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:26:17.986 02:46:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:26:17.986 02:46:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0
00:26:17.986 02:46:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:26:17.986 02:46:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420
00:26:18.244 02:46:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7
00:26:18.245 02:46:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0
00:26:18.245 02:46:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:26:18.245 02:46:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:26:18.245 02:46:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2
00:26:20.771 02:46:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:26:20.771 02:46:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:26:20.771 02:46:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK7
00:26:20.771 02:46:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:26:20.771 02:46:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:26:20.771 02:46:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0
00:26:20.771 02:46:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:26:20.771 02:46:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420
00:26:21.337 02:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8
00:26:21.337 02:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0
00:26:21.337 02:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:26:21.337 02:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:26:21.337 02:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2
00:26:23.238 02:46:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:26:23.238 02:46:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:26:23.238 02:46:31 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK8 00:26:23.238 02:46:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:23.238 02:46:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:23.238 02:46:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:23.238 02:46:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:23.238 02:46:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:26:24.173 02:46:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:26:24.173 02:46:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:24.173 02:46:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:24.173 02:46:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:24.173 02:46:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:26.073 02:46:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:26.073 02:46:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:26.073 02:46:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK9 00:26:26.073 02:46:34 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:26.073 02:46:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:26.073 02:46:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:26.073 02:46:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:26.073 02:46:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:26:27.008 02:46:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:26:27.008 02:46:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:27.008 02:46:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:27.008 02:46:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:27.008 02:46:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:28.908 02:46:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:28.908 02:46:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:28.908 02:46:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK10 00:26:28.908 02:46:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:28.908 02:46:37 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:28.908 02:46:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:28.908 02:46:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:28.908 02:46:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:26:29.843 02:46:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:26:29.843 02:46:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:29.843 02:46:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:29.843 02:46:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:29.843 02:46:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:31.804 02:46:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:31.804 02:46:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:31.805 02:46:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK11 00:26:31.805 02:46:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:31.805 02:46:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:31.805 
02:46:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:31.805 02:46:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:26:31.805 [global] 00:26:31.805 thread=1 00:26:31.805 invalidate=1 00:26:31.805 rw=read 00:26:31.805 time_based=1 00:26:31.805 runtime=10 00:26:31.805 ioengine=libaio 00:26:31.805 direct=1 00:26:31.805 bs=262144 00:26:31.805 iodepth=64 00:26:31.805 norandommap=1 00:26:31.805 numjobs=1 00:26:31.805 00:26:31.805 [job0] 00:26:31.805 filename=/dev/nvme0n1 00:26:31.805 [job1] 00:26:31.805 filename=/dev/nvme10n1 00:26:31.805 [job2] 00:26:31.805 filename=/dev/nvme1n1 00:26:31.805 [job3] 00:26:31.805 filename=/dev/nvme2n1 00:26:31.805 [job4] 00:26:31.805 filename=/dev/nvme3n1 00:26:31.805 [job5] 00:26:31.805 filename=/dev/nvme4n1 00:26:31.805 [job6] 00:26:31.805 filename=/dev/nvme5n1 00:26:31.805 [job7] 00:26:31.805 filename=/dev/nvme6n1 00:26:31.805 [job8] 00:26:31.805 filename=/dev/nvme7n1 00:26:31.805 [job9] 00:26:31.805 filename=/dev/nvme8n1 00:26:31.805 [job10] 00:26:31.805 filename=/dev/nvme9n1 00:26:31.805 Could not set queue depth (nvme0n1) 00:26:31.805 Could not set queue depth (nvme10n1) 00:26:31.805 Could not set queue depth (nvme1n1) 00:26:31.805 Could not set queue depth (nvme2n1) 00:26:31.805 Could not set queue depth (nvme3n1) 00:26:31.805 Could not set queue depth (nvme4n1) 00:26:31.805 Could not set queue depth (nvme5n1) 00:26:31.805 Could not set queue depth (nvme6n1) 00:26:31.805 Could not set queue depth (nvme7n1) 00:26:31.805 Could not set queue depth (nvme8n1) 00:26:31.805 Could not set queue depth (nvme9n1) 00:26:32.063 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:32.063 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, 
ioengine=libaio, iodepth=64 00:26:32.063 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:32.063 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:32.063 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:32.063 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:32.063 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:32.063 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:32.063 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:32.063 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:32.063 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:32.063 fio-3.35 00:26:32.063 Starting 11 threads 00:26:44.277 00:26:44.277 job0: (groupid=0, jobs=1): err= 0: pid=3029608: Sun Nov 17 02:46:50 2024 00:26:44.278 read: IOPS=363, BW=90.8MiB/s (95.2MB/s)(923MiB/10162msec) 00:26:44.278 slat (usec): min=8, max=499638, avg=2114.87, stdev=14760.92 00:26:44.278 clat (msec): min=2, max=1168, avg=173.93, stdev=238.51 00:26:44.278 lat (msec): min=2, max=1325, avg=176.04, stdev=241.24 00:26:44.278 clat percentiles (msec): 00:26:44.278 | 1.00th=[ 8], 5.00th=[ 16], 10.00th=[ 27], 20.00th=[ 33], 00:26:44.278 | 30.00th=[ 39], 40.00th=[ 43], 50.00th=[ 51], 60.00th=[ 93], 00:26:44.278 | 70.00th=[ 186], 80.00th=[ 239], 90.00th=[ 550], 95.00th=[ 768], 00:26:44.278 | 99.00th=[ 1099], 99.50th=[ 1099], 99.90th=[ 1167], 99.95th=[ 1167], 00:26:44.278 | 99.99th=[ 1167] 00:26:44.278 bw ( KiB/s): min= 8192, max=432480, 
per=12.80%, avg=92915.50, stdev=101410.24, samples=20 00:26:44.278 iops : min= 32, max= 1689, avg=362.90, stdev=396.08, samples=20 00:26:44.278 lat (msec) : 4=0.11%, 10=2.49%, 20=4.20%, 50=42.86%, 100=12.38% 00:26:44.278 lat (msec) : 250=18.40%, 500=8.59%, 750=5.53%, 1000=4.01%, 2000=1.44% 00:26:44.278 cpu : usr=0.19%, sys=1.00%, ctx=953, majf=0, minf=4097 00:26:44.278 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:26:44.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:44.278 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:44.278 issued rwts: total=3691,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:44.278 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:44.278 job1: (groupid=0, jobs=1): err= 0: pid=3029618: Sun Nov 17 02:46:50 2024 00:26:44.278 read: IOPS=153, BW=38.3MiB/s (40.1MB/s)(386MiB/10089msec) 00:26:44.278 slat (usec): min=8, max=622638, avg=5371.25, stdev=32752.18 00:26:44.278 clat (msec): min=25, max=1194, avg=412.32, stdev=261.78 00:26:44.278 lat (msec): min=26, max=1194, avg=417.69, stdev=266.40 00:26:44.278 clat percentiles (msec): 00:26:44.278 | 1.00th=[ 31], 5.00th=[ 51], 10.00th=[ 71], 20.00th=[ 127], 00:26:44.278 | 30.00th=[ 251], 40.00th=[ 334], 50.00th=[ 368], 60.00th=[ 439], 00:26:44.278 | 70.00th=[ 518], 80.00th=[ 659], 90.00th=[ 802], 95.00th=[ 902], 00:26:44.278 | 99.00th=[ 1003], 99.50th=[ 1003], 99.90th=[ 1200], 99.95th=[ 1200], 00:26:44.278 | 99.99th=[ 1200] 00:26:44.278 bw ( KiB/s): min= 3584, max=93696, per=5.22%, avg=37911.70, stdev=24585.07, samples=20 00:26:44.278 iops : min= 14, max= 366, avg=148.05, stdev=96.03, samples=20 00:26:44.278 lat (msec) : 50=4.79%, 100=9.71%, 250=15.15%, 500=37.73%, 750=18.77% 00:26:44.278 lat (msec) : 1000=13.66%, 2000=0.19% 00:26:44.278 cpu : usr=0.08%, sys=0.46%, ctx=349, majf=0, minf=4097 00:26:44.278 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.0%, 32=2.1%, >=64=95.9% 00:26:44.278 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:44.278 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:44.278 issued rwts: total=1545,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:44.278 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:44.278 job2: (groupid=0, jobs=1): err= 0: pid=3029619: Sun Nov 17 02:46:50 2024 00:26:44.278 read: IOPS=332, BW=83.2MiB/s (87.2MB/s)(845MiB/10165msec) 00:26:44.278 slat (usec): min=8, max=585529, avg=2474.82, stdev=19257.22 00:26:44.278 clat (msec): min=2, max=1078, avg=189.78, stdev=217.17 00:26:44.278 lat (msec): min=2, max=1266, avg=192.25, stdev=220.67 00:26:44.278 clat percentiles (msec): 00:26:44.278 | 1.00th=[ 14], 5.00th=[ 21], 10.00th=[ 25], 20.00th=[ 62], 00:26:44.278 | 30.00th=[ 74], 40.00th=[ 82], 50.00th=[ 95], 60.00th=[ 138], 00:26:44.278 | 70.00th=[ 197], 80.00th=[ 259], 90.00th=[ 464], 95.00th=[ 751], 00:26:44.278 | 99.00th=[ 953], 99.50th=[ 969], 99.90th=[ 995], 99.95th=[ 1053], 00:26:44.278 | 99.99th=[ 1083] 00:26:44.278 bw ( KiB/s): min= 9728, max=246272, per=11.70%, avg=84935.65, stdev=72315.68, samples=20 00:26:44.278 iops : min= 38, max= 962, avg=331.75, stdev=282.50, samples=20 00:26:44.278 lat (msec) : 4=0.12%, 10=0.33%, 20=4.41%, 50=14.11%, 100=32.15% 00:26:44.278 lat (msec) : 250=27.95%, 500=11.77%, 750=3.76%, 1000=5.35%, 2000=0.06% 00:26:44.278 cpu : usr=0.24%, sys=1.15%, ctx=1016, majf=0, minf=4097 00:26:44.278 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.1% 00:26:44.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:44.278 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:44.278 issued rwts: total=3381,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:44.278 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:44.278 job3: (groupid=0, jobs=1): err= 0: pid=3029620: Sun Nov 17 02:46:50 2024 00:26:44.278 read: IOPS=265, BW=66.4MiB/s 
(69.6MB/s)(675MiB/10166msec) 00:26:44.278 slat (usec): min=8, max=855384, avg=3187.02, stdev=27279.88 00:26:44.278 clat (msec): min=41, max=1069, avg=237.48, stdev=246.60 00:26:44.278 lat (msec): min=43, max=1660, avg=240.67, stdev=250.87 00:26:44.278 clat percentiles (msec): 00:26:44.278 | 1.00th=[ 50], 5.00th=[ 54], 10.00th=[ 57], 20.00th=[ 60], 00:26:44.278 | 30.00th=[ 63], 40.00th=[ 85], 50.00th=[ 138], 60.00th=[ 182], 00:26:44.278 | 70.00th=[ 253], 80.00th=[ 363], 90.00th=[ 701], 95.00th=[ 793], 00:26:44.278 | 99.00th=[ 961], 99.50th=[ 995], 99.90th=[ 1070], 99.95th=[ 1070], 00:26:44.278 | 99.99th=[ 1070] 00:26:44.278 bw ( KiB/s): min= 8192, max=274944, per=9.79%, avg=71093.37, stdev=72004.50, samples=19 00:26:44.278 iops : min= 32, max= 1074, avg=277.63, stdev=281.25, samples=19 00:26:44.278 lat (msec) : 50=1.78%, 100=40.87%, 250=27.03%, 500=15.66%, 750=7.26% 00:26:44.278 lat (msec) : 1000=7.29%, 2000=0.11% 00:26:44.278 cpu : usr=0.13%, sys=0.79%, ctx=543, majf=0, minf=4097 00:26:44.278 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:26:44.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:44.278 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:44.278 issued rwts: total=2701,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:44.278 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:44.278 job4: (groupid=0, jobs=1): err= 0: pid=3029621: Sun Nov 17 02:46:50 2024 00:26:44.278 read: IOPS=239, BW=59.8MiB/s (62.7MB/s)(603MiB/10095msec) 00:26:44.278 slat (usec): min=8, max=442799, avg=3144.71, stdev=22676.26 00:26:44.278 clat (usec): min=1849, max=1411.1k, avg=264417.30, stdev=288300.84 00:26:44.278 lat (usec): min=1882, max=1411.2k, avg=267562.01, stdev=291837.95 00:26:44.278 clat percentiles (msec): 00:26:44.278 | 1.00th=[ 8], 5.00th=[ 11], 10.00th=[ 24], 20.00th=[ 59], 00:26:44.278 | 30.00th=[ 90], 40.00th=[ 118], 50.00th=[ 144], 60.00th=[ 192], 00:26:44.278 | 70.00th=[ 
296], 80.00th=[ 405], 90.00th=[ 776], 95.00th=[ 919], 00:26:44.278 | 99.00th=[ 1167], 99.50th=[ 1234], 99.90th=[ 1368], 99.95th=[ 1368], 00:26:44.278 | 99.99th=[ 1418] 00:26:44.278 bw ( KiB/s): min= 8192, max=200704, per=8.28%, avg=60126.65, stdev=52822.49, samples=20 00:26:44.278 iops : min= 32, max= 784, avg=234.85, stdev=206.33, samples=20 00:26:44.278 lat (msec) : 2=0.04%, 4=0.08%, 10=2.53%, 20=5.68%, 50=9.08% 00:26:44.278 lat (msec) : 100=16.70%, 250=31.91%, 500=17.16%, 750=5.76%, 1000=8.25% 00:26:44.278 lat (msec) : 2000=2.82% 00:26:44.278 cpu : usr=0.14%, sys=0.65%, ctx=559, majf=0, minf=4097 00:26:44.278 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.3%, >=64=97.4% 00:26:44.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:44.278 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:44.278 issued rwts: total=2413,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:44.278 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:44.278 job5: (groupid=0, jobs=1): err= 0: pid=3029622: Sun Nov 17 02:46:50 2024 00:26:44.278 read: IOPS=267, BW=66.8MiB/s (70.0MB/s)(679MiB/10166msec) 00:26:44.278 slat (usec): min=8, max=606694, avg=2565.15, stdev=21404.07 00:26:44.278 clat (usec): min=916, max=1274.7k, avg=236869.30, stdev=283430.89 00:26:44.278 lat (usec): min=937, max=1414.5k, avg=239434.45, stdev=286915.46 00:26:44.278 clat percentiles (usec): 00:26:44.278 | 1.00th=[ 1319], 5.00th=[ 1762], 10.00th=[ 4883], 00:26:44.278 | 20.00th=[ 16712], 30.00th=[ 28967], 40.00th=[ 48497], 00:26:44.278 | 50.00th=[ 74974], 60.00th=[ 221250], 70.00th=[ 295699], 00:26:44.278 | 80.00th=[ 450888], 90.00th=[ 700449], 95.00th=[ 817890], 00:26:44.278 | 99.00th=[1115685], 99.50th=[1132463], 99.90th=[1182794], 00:26:44.278 | 99.95th=[1182794], 99.99th=[1266680] 00:26:44.278 bw ( KiB/s): min= 9216, max=267776, per=9.35%, avg=67861.30, stdev=74517.53, samples=20 00:26:44.278 iops : min= 36, max= 1046, avg=265.05, 
stdev=291.10, samples=20 00:26:44.278 lat (usec) : 1000=0.07% 00:26:44.278 lat (msec) : 2=6.04%, 4=3.50%, 10=7.22%, 20=4.94%, 50=19.52% 00:26:44.278 lat (msec) : 100=9.98%, 250=11.97%, 500=17.94%, 750=12.08%, 1000=4.24% 00:26:44.278 lat (msec) : 2000=2.50% 00:26:44.278 cpu : usr=0.10%, sys=0.77%, ctx=1298, majf=0, minf=3721 00:26:44.278 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:26:44.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:44.278 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:44.278 issued rwts: total=2715,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:44.278 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:44.278 job6: (groupid=0, jobs=1): err= 0: pid=3029623: Sun Nov 17 02:46:50 2024 00:26:44.278 read: IOPS=174, BW=43.7MiB/s (45.8MB/s)(445MiB/10169msec) 00:26:44.278 slat (usec): min=8, max=495067, avg=3888.57, stdev=25528.97 00:26:44.279 clat (msec): min=41, max=1305, avg=361.90, stdev=264.80 00:26:44.279 lat (msec): min=41, max=1344, avg=365.79, stdev=269.68 00:26:44.279 clat percentiles (msec): 00:26:44.279 | 1.00th=[ 70], 5.00th=[ 110], 10.00th=[ 130], 20.00th=[ 148], 00:26:44.279 | 30.00th=[ 182], 40.00th=[ 211], 50.00th=[ 284], 60.00th=[ 326], 00:26:44.279 | 70.00th=[ 418], 80.00th=[ 550], 90.00th=[ 844], 95.00th=[ 919], 00:26:44.279 | 99.00th=[ 1167], 99.50th=[ 1217], 99.90th=[ 1301], 99.95th=[ 1301], 00:26:44.279 | 99.99th=[ 1301] 00:26:44.279 bw ( KiB/s): min= 5120, max=107520, per=6.04%, avg=43868.00, stdev=30533.73, samples=20 00:26:44.279 iops : min= 20, max= 420, avg=171.35, stdev=119.25, samples=20 00:26:44.279 lat (msec) : 50=0.45%, 100=3.26%, 250=43.19%, 500=29.42%, 750=11.92% 00:26:44.279 lat (msec) : 1000=9.39%, 2000=2.36% 00:26:44.279 cpu : usr=0.05%, sys=0.47%, ctx=288, majf=0, minf=4097 00:26:44.279 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.8%, >=64=96.5% 00:26:44.279 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:44.279 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:44.279 issued rwts: total=1778,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:44.279 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:44.279 job7: (groupid=0, jobs=1): err= 0: pid=3029624: Sun Nov 17 02:46:50 2024 00:26:44.279 read: IOPS=181, BW=45.3MiB/s (47.5MB/s)(461MiB/10170msec) 00:26:44.279 slat (usec): min=8, max=737651, avg=3991.97, stdev=33924.36 00:26:44.279 clat (msec): min=3, max=1065, avg=349.08, stdev=261.13 00:26:44.279 lat (msec): min=3, max=1500, avg=353.07, stdev=264.13 00:26:44.279 clat percentiles (msec): 00:26:44.279 | 1.00th=[ 20], 5.00th=[ 24], 10.00th=[ 25], 20.00th=[ 120], 00:26:44.279 | 30.00th=[ 220], 40.00th=[ 271], 50.00th=[ 305], 60.00th=[ 338], 00:26:44.279 | 70.00th=[ 368], 80.00th=[ 659], 90.00th=[ 810], 95.00th=[ 869], 00:26:44.279 | 99.00th=[ 1003], 99.50th=[ 1062], 99.90th=[ 1062], 99.95th=[ 1062], 00:26:44.279 | 99.99th=[ 1062] 00:26:44.279 bw ( KiB/s): min=13824, max=113664, per=6.60%, avg=47907.95, stdev=25088.63, samples=19 00:26:44.279 iops : min= 54, max= 444, avg=187.11, stdev=98.01, samples=19 00:26:44.279 lat (msec) : 4=0.05%, 10=0.43%, 20=0.54%, 50=12.43%, 100=5.54% 00:26:44.279 lat (msec) : 250=17.37%, 500=43.05%, 750=6.13%, 1000=13.41%, 2000=1.03% 00:26:44.279 cpu : usr=0.06%, sys=0.51%, ctx=310, majf=0, minf=4097 00:26:44.279 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.7%, >=64=96.6% 00:26:44.279 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:44.279 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:44.279 issued rwts: total=1842,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:44.279 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:44.279 job8: (groupid=0, jobs=1): err= 0: pid=3029627: Sun Nov 17 02:46:50 2024 00:26:44.279 read: IOPS=636, BW=159MiB/s (167MB/s)(1619MiB/10168msec) 00:26:44.279 slat 
(usec): min=8, max=756334, avg=1323.39, stdev=15525.65 00:26:44.279 clat (msec): min=8, max=1660, avg=99.10, stdev=172.37 00:26:44.279 lat (msec): min=8, max=1660, avg=100.42, stdev=174.63 00:26:44.279 clat percentiles (msec): 00:26:44.279 | 1.00th=[ 26], 5.00th=[ 31], 10.00th=[ 34], 20.00th=[ 37], 00:26:44.279 | 30.00th=[ 39], 40.00th=[ 42], 50.00th=[ 47], 60.00th=[ 52], 00:26:44.279 | 70.00th=[ 75], 80.00th=[ 86], 90.00th=[ 161], 95.00th=[ 330], 00:26:44.279 | 99.00th=[ 1003], 99.50th=[ 1217], 99.90th=[ 1250], 99.95th=[ 1250], 00:26:44.279 | 99.99th=[ 1653] 00:26:44.279 bw ( KiB/s): min= 5120, max=447488, per=23.79%, avg=172734.63, stdev=145207.60, samples=19 00:26:44.279 iops : min= 20, max= 1748, avg=674.74, stdev=567.21, samples=19 00:26:44.279 lat (msec) : 10=0.05%, 20=0.48%, 50=57.81%, 100=22.98%, 250=11.81% 00:26:44.279 lat (msec) : 500=3.71%, 750=0.59%, 1000=1.54%, 2000=1.03% 00:26:44.279 cpu : usr=0.12%, sys=1.49%, ctx=917, majf=0, minf=4097 00:26:44.279 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:26:44.279 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:44.279 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:44.279 issued rwts: total=6475,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:44.279 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:44.279 job9: (groupid=0, jobs=1): err= 0: pid=3029628: Sun Nov 17 02:46:50 2024 00:26:44.279 read: IOPS=108, BW=27.2MiB/s (28.5MB/s)(276MiB/10163msec) 00:26:44.279 slat (usec): min=9, max=545069, avg=7089.80, stdev=43485.45 00:26:44.279 clat (msec): min=75, max=1428, avg=581.57, stdev=364.30 00:26:44.279 lat (msec): min=75, max=1428, avg=588.66, stdev=369.63 00:26:44.279 clat percentiles (msec): 00:26:44.279 | 1.00th=[ 80], 5.00th=[ 126], 10.00th=[ 146], 20.00th=[ 188], 00:26:44.279 | 30.00th=[ 271], 40.00th=[ 409], 50.00th=[ 542], 60.00th=[ 676], 00:26:44.279 | 70.00th=[ 802], 80.00th=[ 902], 90.00th=[ 1150], 
95.00th=[ 1200], 00:26:44.279 | 99.00th=[ 1334], 99.50th=[ 1401], 99.90th=[ 1435], 99.95th=[ 1435], 00:26:44.279 | 99.99th=[ 1435] 00:26:44.279 bw ( KiB/s): min= 8704, max=87040, per=3.67%, avg=26620.75, stdev=20851.61, samples=20 00:26:44.279 iops : min= 34, max= 340, avg=103.95, stdev=81.44, samples=20 00:26:44.279 lat (msec) : 100=2.45%, 250=25.54%, 500=20.29%, 750=16.49%, 1000=18.30% 00:26:44.279 lat (msec) : 2000=16.94% 00:26:44.279 cpu : usr=0.02%, sys=0.40%, ctx=96, majf=0, minf=4097 00:26:44.279 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.4%, 32=2.9%, >=64=94.3% 00:26:44.279 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:44.279 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:44.279 issued rwts: total=1104,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:44.279 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:44.279 job10: (groupid=0, jobs=1): err= 0: pid=3029629: Sun Nov 17 02:46:50 2024 00:26:44.279 read: IOPS=118, BW=29.7MiB/s (31.1MB/s)(300MiB/10092msec) 00:26:44.279 slat (usec): min=12, max=564040, avg=8352.88, stdev=48095.82 00:26:44.279 clat (msec): min=55, max=1990, avg=530.37, stdev=387.23 00:26:44.279 lat (msec): min=55, max=1990, avg=538.73, stdev=393.53 00:26:44.279 clat percentiles (msec): 00:26:44.279 | 1.00th=[ 82], 5.00th=[ 116], 10.00th=[ 133], 20.00th=[ 161], 00:26:44.279 | 30.00th=[ 220], 40.00th=[ 284], 50.00th=[ 460], 60.00th=[ 642], 00:26:44.279 | 70.00th=[ 735], 80.00th=[ 885], 90.00th=[ 978], 95.00th=[ 1150], 00:26:44.279 | 99.00th=[ 1804], 99.50th=[ 1804], 99.90th=[ 1989], 99.95th=[ 1989], 00:26:44.279 | 99.99th=[ 1989] 00:26:44.279 bw ( KiB/s): min= 1536, max=103424, per=4.00%, avg=29026.70, stdev=26958.17, samples=20 00:26:44.279 iops : min= 6, max= 404, avg=113.35, stdev=105.29, samples=20 00:26:44.279 lat (msec) : 100=1.59%, 250=35.31%, 500=14.61%, 750=21.45%, 1000=17.53% 00:26:44.279 lat (msec) : 2000=9.52% 00:26:44.279 cpu : usr=0.11%, sys=0.38%, 
ctx=90, majf=0, minf=4097 00:26:44.279 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.3%, 32=2.7%, >=64=94.7% 00:26:44.279 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:44.279 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:44.279 issued rwts: total=1198,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:44.279 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:44.279 00:26:44.279 Run status group 0 (all jobs): 00:26:44.279 READ: bw=709MiB/s (743MB/s), 27.2MiB/s-159MiB/s (28.5MB/s-167MB/s), io=7211MiB (7561MB), run=10089-10170msec 00:26:44.279 00:26:44.279 Disk stats (read/write): 00:26:44.279 nvme0n1: ios=7255/0, merge=0/0, ticks=1220022/0, in_queue=1220022, util=97.25% 00:26:44.279 nvme10n1: ios=2893/0, merge=0/0, ticks=1238265/0, in_queue=1238265, util=97.47% 00:26:44.279 nvme1n1: ios=6609/0, merge=0/0, ticks=1219681/0, in_queue=1219681, util=97.75% 00:26:44.279 nvme2n1: ios=5261/0, merge=0/0, ticks=1228951/0, in_queue=1228951, util=97.87% 00:26:44.279 nvme3n1: ios=4621/0, merge=0/0, ticks=1199864/0, in_queue=1199864, util=97.95% 00:26:44.279 nvme4n1: ios=5271/0, merge=0/0, ticks=1220912/0, in_queue=1220912, util=98.27% 00:26:44.280 nvme5n1: ios=3394/0, merge=0/0, ticks=1226666/0, in_queue=1226666, util=98.41% 00:26:44.280 nvme6n1: ios=3540/0, merge=0/0, ticks=1218216/0, in_queue=1218216, util=98.51% 00:26:44.280 nvme7n1: ios=12815/0, merge=0/0, ticks=1224352/0, in_queue=1224352, util=98.92% 00:26:44.280 nvme8n1: ios=2081/0, merge=0/0, ticks=1235251/0, in_queue=1235251, util=99.10% 00:26:44.280 nvme9n1: ios=2182/0, merge=0/0, ticks=1230654/0, in_queue=1230654, util=99.24% 00:26:44.280 02:46:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:26:44.280 [global] 00:26:44.280 thread=1 00:26:44.280 invalidate=1 00:26:44.280 rw=randwrite 
00:26:44.280 time_based=1 00:26:44.280 runtime=10 00:26:44.280 ioengine=libaio 00:26:44.280 direct=1 00:26:44.280 bs=262144 00:26:44.280 iodepth=64 00:26:44.280 norandommap=1 00:26:44.280 numjobs=1 00:26:44.280 00:26:44.280 [job0] 00:26:44.280 filename=/dev/nvme0n1 00:26:44.280 [job1] 00:26:44.280 filename=/dev/nvme10n1 00:26:44.280 [job2] 00:26:44.280 filename=/dev/nvme1n1 00:26:44.280 [job3] 00:26:44.280 filename=/dev/nvme2n1 00:26:44.280 [job4] 00:26:44.280 filename=/dev/nvme3n1 00:26:44.280 [job5] 00:26:44.280 filename=/dev/nvme4n1 00:26:44.280 [job6] 00:26:44.280 filename=/dev/nvme5n1 00:26:44.280 [job7] 00:26:44.280 filename=/dev/nvme6n1 00:26:44.280 [job8] 00:26:44.280 filename=/dev/nvme7n1 00:26:44.280 [job9] 00:26:44.280 filename=/dev/nvme8n1 00:26:44.280 [job10] 00:26:44.280 filename=/dev/nvme9n1 00:26:44.280 Could not set queue depth (nvme0n1) 00:26:44.280 Could not set queue depth (nvme10n1) 00:26:44.280 Could not set queue depth (nvme1n1) 00:26:44.280 Could not set queue depth (nvme2n1) 00:26:44.280 Could not set queue depth (nvme3n1) 00:26:44.280 Could not set queue depth (nvme4n1) 00:26:44.280 Could not set queue depth (nvme5n1) 00:26:44.280 Could not set queue depth (nvme6n1) 00:26:44.280 Could not set queue depth (nvme7n1) 00:26:44.280 Could not set queue depth (nvme8n1) 00:26:44.280 Could not set queue depth (nvme9n1) 00:26:44.280 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:44.280 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:44.280 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:44.280 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:44.280 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 
00:26:44.280 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:44.280 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:44.280 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:44.280 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:44.280 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:44.280 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:44.280 fio-3.35 00:26:44.280 Starting 11 threads 00:26:54.255 00:26:54.255 job0: (groupid=0, jobs=1): err= 0: pid=3030358: Sun Nov 17 02:47:01 2024 00:26:54.255 write: IOPS=161, BW=40.4MiB/s (42.3MB/s)(416MiB/10307msec); 0 zone resets 00:26:54.255 slat (usec): min=26, max=180229, avg=4684.17, stdev=12127.40 00:26:54.255 clat (msec): min=25, max=954, avg=391.16, stdev=157.17 00:26:54.255 lat (msec): min=29, max=988, avg=395.84, stdev=159.49 00:26:54.255 clat percentiles (msec): 00:26:54.255 | 1.00th=[ 64], 5.00th=[ 155], 10.00th=[ 176], 20.00th=[ 247], 00:26:54.255 | 30.00th=[ 296], 40.00th=[ 351], 50.00th=[ 397], 60.00th=[ 426], 00:26:54.255 | 70.00th=[ 485], 80.00th=[ 535], 90.00th=[ 584], 95.00th=[ 625], 00:26:54.255 | 99.00th=[ 810], 99.50th=[ 885], 99.90th=[ 953], 99.95th=[ 953], 00:26:54.255 | 99.99th=[ 953] 00:26:54.255 bw ( KiB/s): min=24576, max=66427, per=5.01%, avg=40985.10, stdev=12045.51, samples=20 00:26:54.255 iops : min= 96, max= 259, avg=160.05, stdev=46.96, samples=20 00:26:54.255 lat (msec) : 50=0.66%, 100=1.86%, 250=17.90%, 500=51.71%, 750=26.43% 00:26:54.255 lat (msec) : 1000=1.44% 00:26:54.255 cpu : usr=0.52%, sys=0.76%, ctx=811, majf=0, minf=1 00:26:54.255 IO depths : 1=0.1%, 2=0.1%, 
4=0.2%, 8=0.5%, 16=1.0%, 32=1.9%, >=64=96.2% 00:26:54.255 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:54.255 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:54.255 issued rwts: total=0,1665,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:54.255 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:54.255 job1: (groupid=0, jobs=1): err= 0: pid=3030369: Sun Nov 17 02:47:01 2024 00:26:54.255 write: IOPS=368, BW=92.0MiB/s (96.5MB/s)(930MiB/10102msec); 0 zone resets 00:26:54.255 slat (usec): min=12, max=32800, avg=2147.34, stdev=5141.69 00:26:54.255 clat (usec): min=1124, max=472179, avg=171653.31, stdev=93151.47 00:26:54.255 lat (usec): min=1183, max=472286, avg=173800.65, stdev=94206.51 00:26:54.255 clat percentiles (msec): 00:26:54.255 | 1.00th=[ 5], 5.00th=[ 21], 10.00th=[ 31], 20.00th=[ 104], 00:26:54.255 | 30.00th=[ 142], 40.00th=[ 150], 50.00th=[ 163], 60.00th=[ 186], 00:26:54.255 | 70.00th=[ 213], 80.00th=[ 245], 90.00th=[ 305], 95.00th=[ 334], 00:26:54.255 | 99.00th=[ 409], 99.50th=[ 426], 99.90th=[ 451], 99.95th=[ 472], 00:26:54.255 | 99.99th=[ 472] 00:26:54.255 bw ( KiB/s): min=41472, max=212480, per=11.44%, avg=93593.60, stdev=41340.41, samples=20 00:26:54.255 iops : min= 162, max= 830, avg=365.60, stdev=161.49, samples=20 00:26:54.255 lat (msec) : 2=0.22%, 4=0.56%, 10=1.16%, 20=2.72%, 50=11.05% 00:26:54.255 lat (msec) : 100=3.60%, 250=62.73%, 500=17.96% 00:26:54.255 cpu : usr=0.96%, sys=1.12%, ctx=1745, majf=0, minf=1 00:26:54.255 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:26:54.255 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:54.255 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:54.256 issued rwts: total=0,3719,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:54.256 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:54.256 job2: (groupid=0, jobs=1): err= 0: pid=3030371: Sun Nov 17 
02:47:01 2024 00:26:54.256 write: IOPS=285, BW=71.3MiB/s (74.7MB/s)(734MiB/10295msec); 0 zone resets 00:26:54.256 slat (usec): min=17, max=225137, avg=2030.44, stdev=7042.73 00:26:54.256 clat (usec): min=1264, max=772098, avg=222280.13, stdev=142307.20 00:26:54.256 lat (usec): min=1300, max=772164, avg=224310.57, stdev=143261.54 00:26:54.256 clat percentiles (msec): 00:26:54.256 | 1.00th=[ 6], 5.00th=[ 29], 10.00th=[ 56], 20.00th=[ 121], 00:26:54.256 | 30.00th=[ 148], 40.00th=[ 171], 50.00th=[ 201], 60.00th=[ 228], 00:26:54.256 | 70.00th=[ 253], 80.00th=[ 317], 90.00th=[ 422], 95.00th=[ 502], 00:26:54.256 | 99.00th=[ 718], 99.50th=[ 760], 99.90th=[ 768], 99.95th=[ 776], 00:26:54.256 | 99.99th=[ 776] 00:26:54.256 bw ( KiB/s): min=29696, max=163328, per=8.99%, avg=73514.70, stdev=36017.04, samples=20 00:26:54.256 iops : min= 116, max= 638, avg=287.15, stdev=140.69, samples=20 00:26:54.256 lat (msec) : 2=0.03%, 4=0.37%, 10=1.67%, 20=1.84%, 50=3.85% 00:26:54.256 lat (msec) : 100=8.93%, 250=52.71%, 500=25.59%, 750=4.40%, 1000=0.61% 00:26:54.256 cpu : usr=0.78%, sys=1.07%, ctx=1623, majf=0, minf=1 00:26:54.256 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:26:54.256 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:54.256 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:54.256 issued rwts: total=0,2935,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:54.256 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:54.256 job3: (groupid=0, jobs=1): err= 0: pid=3030372: Sun Nov 17 02:47:01 2024 00:26:54.256 write: IOPS=281, BW=70.3MiB/s (73.8MB/s)(715MiB/10169msec); 0 zone resets 00:26:54.256 slat (usec): min=16, max=151993, avg=2632.31, stdev=8950.86 00:26:54.256 clat (msec): min=2, max=619, avg=224.71, stdev=142.36 00:26:54.256 lat (msec): min=2, max=619, avg=227.34, stdev=144.19 00:26:54.256 clat percentiles (msec): 00:26:54.256 | 1.00th=[ 12], 5.00th=[ 45], 10.00th=[ 69], 20.00th=[ 
107], 00:26:54.256 | 30.00th=[ 125], 40.00th=[ 148], 50.00th=[ 167], 60.00th=[ 228], 00:26:54.256 | 70.00th=[ 296], 80.00th=[ 380], 90.00th=[ 456], 95.00th=[ 481], 00:26:54.256 | 99.00th=[ 518], 99.50th=[ 600], 99.90th=[ 617], 99.95th=[ 617], 00:26:54.256 | 99.99th=[ 617] 00:26:54.256 bw ( KiB/s): min=34816, max=139264, per=8.75%, avg=71622.40, stdev=36100.69, samples=20 00:26:54.256 iops : min= 136, max= 544, avg=279.75, stdev=141.02, samples=20 00:26:54.256 lat (msec) : 4=0.10%, 10=0.73%, 20=0.45%, 50=4.93%, 100=12.27% 00:26:54.256 lat (msec) : 250=44.18%, 500=35.34%, 750=1.99% 00:26:54.256 cpu : usr=0.79%, sys=1.00%, ctx=1540, majf=0, minf=1 00:26:54.256 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8% 00:26:54.256 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:54.256 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:54.256 issued rwts: total=0,2861,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:54.256 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:54.256 job4: (groupid=0, jobs=1): err= 0: pid=3030373: Sun Nov 17 02:47:01 2024 00:26:54.256 write: IOPS=240, BW=60.2MiB/s (63.1MB/s)(614MiB/10198msec); 0 zone resets 00:26:54.256 slat (usec): min=26, max=59860, avg=3507.07, stdev=8318.46 00:26:54.256 clat (msec): min=3, max=870, avg=262.16, stdev=161.47 00:26:54.256 lat (msec): min=3, max=870, avg=265.66, stdev=163.72 00:26:54.256 clat percentiles (msec): 00:26:54.256 | 1.00th=[ 19], 5.00th=[ 69], 10.00th=[ 82], 20.00th=[ 126], 00:26:54.256 | 30.00th=[ 163], 40.00th=[ 197], 50.00th=[ 226], 60.00th=[ 251], 00:26:54.256 | 70.00th=[ 296], 80.00th=[ 405], 90.00th=[ 550], 95.00th=[ 592], 00:26:54.256 | 99.00th=[ 651], 99.50th=[ 659], 99.90th=[ 835], 99.95th=[ 835], 00:26:54.256 | 99.99th=[ 869] 00:26:54.256 bw ( KiB/s): min=24576, max=130048, per=7.48%, avg=61228.85, stdev=30107.28, samples=20 00:26:54.256 iops : min= 96, max= 508, avg=239.15, stdev=117.60, samples=20 
00:26:54.256 lat (msec) : 4=0.04%, 10=0.41%, 20=0.65%, 50=2.00%, 100=9.86% 00:26:54.256 lat (msec) : 250=47.21%, 500=27.54%, 750=11.89%, 1000=0.41% 00:26:54.256 cpu : usr=0.79%, sys=0.96%, ctx=933, majf=0, minf=1 00:26:54.256 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.3%, >=64=97.4% 00:26:54.256 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:54.256 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:54.256 issued rwts: total=0,2455,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:54.256 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:54.256 job5: (groupid=0, jobs=1): err= 0: pid=3030374: Sun Nov 17 02:47:01 2024 00:26:54.256 write: IOPS=213, BW=53.3MiB/s (55.9MB/s)(546MiB/10242msec); 0 zone resets 00:26:54.256 slat (usec): min=19, max=93816, avg=3193.02, stdev=9405.72 00:26:54.256 clat (usec): min=911, max=905469, avg=296881.39, stdev=200041.69 00:26:54.256 lat (usec): min=976, max=905497, avg=300074.41, stdev=202447.50 00:26:54.256 clat percentiles (usec): 00:26:54.256 | 1.00th=[ 1778], 5.00th=[ 6128], 10.00th=[ 17957], 20.00th=[ 50594], 00:26:54.256 | 30.00th=[122160], 40.00th=[287310], 50.00th=[341836], 60.00th=[371196], 00:26:54.256 | 70.00th=[400557], 80.00th=[455082], 90.00th=[557843], 95.00th=[608175], 00:26:54.256 | 99.00th=[734004], 99.50th=[801113], 99.90th=[868221], 99.95th=[901776], 00:26:54.256 | 99.99th=[901776] 00:26:54.256 bw ( KiB/s): min=24576, max=164352, per=6.63%, avg=54266.70, stdev=31236.48, samples=20 00:26:54.256 iops : min= 96, max= 642, avg=211.95, stdev=122.02, samples=20 00:26:54.256 lat (usec) : 1000=0.14% 00:26:54.256 lat (msec) : 2=1.10%, 4=1.24%, 10=4.03%, 20=3.94%, 50=9.25% 00:26:54.256 lat (msec) : 100=8.80%, 250=6.64%, 500=48.47%, 750=15.71%, 1000=0.69% 00:26:54.256 cpu : usr=0.72%, sys=0.74%, ctx=1338, majf=0, minf=1 00:26:54.256 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.5%, >=64=97.1% 00:26:54.256 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:54.256 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:54.256 issued rwts: total=0,2183,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:54.256 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:54.256 job6: (groupid=0, jobs=1): err= 0: pid=3030375: Sun Nov 17 02:47:01 2024 00:26:54.256 write: IOPS=254, BW=63.6MiB/s (66.7MB/s)(650MiB/10209msec); 0 zone resets 00:26:54.256 slat (usec): min=23, max=55297, avg=2567.58, stdev=7629.07 00:26:54.256 clat (usec): min=1070, max=958629, avg=248761.89, stdev=178811.12 00:26:54.256 lat (usec): min=1124, max=958672, avg=251329.46, stdev=180800.13 00:26:54.256 clat percentiles (msec): 00:26:54.256 | 1.00th=[ 4], 5.00th=[ 33], 10.00th=[ 52], 20.00th=[ 71], 00:26:54.256 | 30.00th=[ 104], 40.00th=[ 140], 50.00th=[ 220], 60.00th=[ 296], 00:26:54.256 | 70.00th=[ 376], 80.00th=[ 405], 90.00th=[ 456], 95.00th=[ 575], 00:26:54.256 | 99.00th=[ 743], 99.50th=[ 852], 99.90th=[ 919], 99.95th=[ 961], 00:26:54.256 | 99.99th=[ 961] 00:26:54.256 bw ( KiB/s): min=29184, max=157696, per=7.93%, avg=64891.70, stdev=35845.80, samples=20 00:26:54.256 iops : min= 114, max= 616, avg=253.45, stdev=140.04, samples=20 00:26:54.256 lat (msec) : 2=0.46%, 4=0.62%, 10=0.08%, 20=1.42%, 50=7.12% 00:26:54.256 lat (msec) : 100=19.21%, 250=23.48%, 500=39.95%, 750=6.81%, 1000=0.85% 00:26:54.256 cpu : usr=0.69%, sys=1.09%, ctx=1603, majf=0, minf=1 00:26:54.256 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:26:54.256 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:54.256 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:54.256 issued rwts: total=0,2598,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:54.256 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:54.256 job7: (groupid=0, jobs=1): err= 0: pid=3030376: Sun Nov 17 02:47:01 2024 00:26:54.256 write: IOPS=333, BW=83.3MiB/s 
(87.3MB/s)(859MiB/10310msec); 0 zone resets 00:26:54.256 slat (usec): min=20, max=145118, avg=1767.46, stdev=6782.60 00:26:54.256 clat (usec): min=1120, max=994687, avg=190226.55, stdev=165022.28 00:26:54.256 lat (usec): min=1154, max=994740, avg=191994.01, stdev=166580.10 00:26:54.256 clat percentiles (msec): 00:26:54.256 | 1.00th=[ 5], 5.00th=[ 32], 10.00th=[ 56], 20.00th=[ 69], 00:26:54.256 | 30.00th=[ 73], 40.00th=[ 106], 50.00th=[ 131], 60.00th=[ 161], 00:26:54.256 | 70.00th=[ 234], 80.00th=[ 300], 90.00th=[ 435], 95.00th=[ 550], 00:26:54.256 | 99.00th=[ 726], 99.50th=[ 818], 99.90th=[ 961], 99.95th=[ 961], 00:26:54.256 | 99.99th=[ 995] 00:26:54.256 bw ( KiB/s): min=22016, max=230400, per=10.54%, avg=86263.65, stdev=50720.88, samples=20 00:26:54.256 iops : min= 86, max= 900, avg=336.95, stdev=198.13, samples=20 00:26:54.256 lat (msec) : 2=0.26%, 4=0.55%, 10=1.46%, 20=0.84%, 50=5.77% 00:26:54.256 lat (msec) : 100=29.99%, 250=36.40%, 500=18.17%, 750=5.82%, 1000=0.73% 00:26:54.256 cpu : usr=1.06%, sys=1.04%, ctx=1971, majf=0, minf=1 00:26:54.256 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:26:54.256 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:54.256 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:54.256 issued rwts: total=0,3434,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:54.256 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:54.256 job8: (groupid=0, jobs=1): err= 0: pid=3030377: Sun Nov 17 02:47:01 2024 00:26:54.256 write: IOPS=432, BW=108MiB/s (113MB/s)(1088MiB/10062msec); 0 zone resets 00:26:54.256 slat (usec): min=24, max=137461, avg=2056.90, stdev=5268.64 00:26:54.256 clat (usec): min=1800, max=508496, avg=145854.77, stdev=87552.54 00:26:54.256 lat (usec): min=1839, max=508563, avg=147911.67, stdev=88727.01 00:26:54.256 clat percentiles (msec): 00:26:54.256 | 1.00th=[ 6], 5.00th=[ 17], 10.00th=[ 33], 20.00th=[ 57], 00:26:54.256 | 30.00th=[ 75], 
40.00th=[ 134], 50.00th=[ 150], 60.00th=[ 163], 00:26:54.256 | 70.00th=[ 190], 80.00th=[ 222], 90.00th=[ 257], 95.00th=[ 305], 00:26:54.256 | 99.00th=[ 351], 99.50th=[ 384], 99.90th=[ 414], 99.95th=[ 414], 00:26:54.256 | 99.99th=[ 510] 00:26:54.256 bw ( KiB/s): min=38912, max=242688, per=13.41%, avg=109738.80, stdev=53525.78, samples=20 00:26:54.256 iops : min= 152, max= 948, avg=428.65, stdev=209.09, samples=20 00:26:54.256 lat (msec) : 2=0.07%, 4=0.34%, 10=2.14%, 20=3.84%, 50=8.09% 00:26:54.256 lat (msec) : 100=19.06%, 250=54.64%, 500=11.79%, 750=0.02% 00:26:54.256 cpu : usr=1.55%, sys=1.64%, ctx=1783, majf=0, minf=1 00:26:54.256 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:26:54.256 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:54.257 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:54.257 issued rwts: total=0,4350,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:54.257 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:54.257 job9: (groupid=0, jobs=1): err= 0: pid=3030378: Sun Nov 17 02:47:01 2024 00:26:54.257 write: IOPS=323, BW=80.8MiB/s (84.8MB/s)(828MiB/10237msec); 0 zone resets 00:26:54.257 slat (usec): min=15, max=157767, avg=2220.44, stdev=7780.66 00:26:54.257 clat (usec): min=1079, max=800073, avg=195574.43, stdev=178433.93 00:26:54.257 lat (usec): min=1112, max=800162, avg=197794.87, stdev=180699.74 00:26:54.257 clat percentiles (msec): 00:26:54.257 | 1.00th=[ 5], 5.00th=[ 8], 10.00th=[ 15], 20.00th=[ 37], 00:26:54.257 | 30.00th=[ 68], 40.00th=[ 86], 50.00th=[ 132], 60.00th=[ 161], 00:26:54.257 | 70.00th=[ 313], 80.00th=[ 393], 90.00th=[ 443], 95.00th=[ 527], 00:26:54.257 | 99.00th=[ 651], 99.50th=[ 776], 99.90th=[ 793], 99.95th=[ 802], 00:26:54.257 | 99.99th=[ 802] 00:26:54.257 bw ( KiB/s): min=24064, max=201728, per=10.16%, avg=83118.90, stdev=60873.62, samples=20 00:26:54.257 iops : min= 94, max= 788, avg=324.65, stdev=237.81, samples=20 00:26:54.257 
lat (msec) : 2=0.27%, 4=0.54%, 10=5.71%, 20=5.74%, 50=12.63% 00:26:54.257 lat (msec) : 100=18.73%, 250=23.02%, 500=27.95%, 750=4.56%, 1000=0.85% 00:26:54.257 cpu : usr=0.98%, sys=1.03%, ctx=2025, majf=0, minf=2 00:26:54.257 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:26:54.257 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:54.257 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:54.257 issued rwts: total=0,3310,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:54.257 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:54.257 job10: (groupid=0, jobs=1): err= 0: pid=3030379: Sun Nov 17 02:47:01 2024 00:26:54.257 write: IOPS=333, BW=83.4MiB/s (87.4MB/s)(859MiB/10308msec); 0 zone resets 00:26:54.257 slat (usec): min=22, max=67782, avg=2177.78, stdev=6046.80 00:26:54.257 clat (usec): min=1091, max=970870, avg=189598.67, stdev=153326.88 00:26:54.257 lat (usec): min=1165, max=970907, avg=191776.45, stdev=154863.25 00:26:54.257 clat percentiles (msec): 00:26:54.257 | 1.00th=[ 3], 5.00th=[ 11], 10.00th=[ 25], 20.00th=[ 57], 00:26:54.257 | 30.00th=[ 118], 40.00th=[ 148], 50.00th=[ 155], 60.00th=[ 178], 00:26:54.257 | 70.00th=[ 224], 80.00th=[ 279], 90.00th=[ 376], 95.00th=[ 514], 00:26:54.257 | 99.00th=[ 676], 99.50th=[ 827], 99.90th=[ 936], 99.95th=[ 969], 00:26:54.257 | 99.99th=[ 969] 00:26:54.257 bw ( KiB/s): min=23552, max=212480, per=10.56%, avg=86361.60, stdev=48987.96, samples=20 00:26:54.257 iops : min= 92, max= 830, avg=337.35, stdev=191.36, samples=20 00:26:54.257 lat (msec) : 2=0.73%, 4=0.49%, 10=3.64%, 20=3.96%, 50=7.51% 00:26:54.257 lat (msec) : 100=12.13%, 250=47.51%, 500=18.94%, 750=4.31%, 1000=0.79% 00:26:54.257 cpu : usr=0.89%, sys=1.41%, ctx=1790, majf=0, minf=1 00:26:54.257 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:26:54.257 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:54.257 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:54.257 issued rwts: total=0,3437,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:54.257 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:54.257 00:26:54.257 Run status group 0 (all jobs): 00:26:54.257 WRITE: bw=799MiB/s (838MB/s), 40.4MiB/s-108MiB/s (42.3MB/s-113MB/s), io=8237MiB (8637MB), run=10062-10310msec 00:26:54.257 00:26:54.257 Disk stats (read/write): 00:26:54.257 nvme0n1: ios=49/3246, merge=0/0, ticks=214/1217354, in_queue=1217568, util=99.03% 00:26:54.257 nvme10n1: ios=47/7269, merge=0/0, ticks=274/1213420, in_queue=1213694, util=98.18% 00:26:54.257 nvme1n1: ios=15/5799, merge=0/0, ticks=425/1221614, in_queue=1222039, util=98.38% 00:26:54.257 nvme2n1: ios=43/5549, merge=0/0, ticks=3907/1193704, in_queue=1197611, util=100.00% 00:26:54.257 nvme3n1: ios=0/4903, merge=0/0, ticks=0/1237327, in_queue=1237327, util=97.93% 00:26:54.257 nvme4n1: ios=15/4313, merge=0/0, ticks=105/1232692, in_queue=1232797, util=98.39% 00:26:54.257 nvme5n1: ios=0/5184, merge=0/0, ticks=0/1240822, in_queue=1240822, util=98.38% 00:26:54.257 nvme6n1: ios=0/6781, merge=0/0, ticks=0/1222772, in_queue=1222772, util=98.49% 00:26:54.257 nvme7n1: ios=47/8514, merge=0/0, ticks=257/1211892, in_queue=1212149, util=100.00% 00:26:54.257 nvme8n1: ios=0/6595, merge=0/0, ticks=0/1231156, in_queue=1231156, util=99.01% 00:26:54.257 nvme9n1: ios=0/6790, merge=0/0, ticks=0/1216785, in_queue=1216785, util=99.11% 00:26:54.257 02:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:26:54.257 02:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:26:54.257 02:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:54.257 02:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:54.257 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:54.257 02:47:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:26:54.257 02:47:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:54.257 02:47:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:54.257 02:47:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK1 00:26:54.257 02:47:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:54.257 02:47:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK1 00:26:54.257 02:47:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:54.257 02:47:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:54.257 02:47:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.257 02:47:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:54.257 02:47:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.257 02:47:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:54.257 02:47:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:26:54.516 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:26:54.516 02:47:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:26:54.516 02:47:02 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:54.516 02:47:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:54.516 02:47:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK2 00:26:54.516 02:47:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:54.516 02:47:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK2 00:26:54.516 02:47:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:54.516 02:47:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:54.516 02:47:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.516 02:47:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:54.516 02:47:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.516 02:47:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:54.516 02:47:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:26:54.774 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:26:54.774 02:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:26:54.774 02:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:54.774 02:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o 
NAME,SERIAL 00:26:54.774 02:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK3 00:26:54.774 02:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:54.774 02:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK3 00:26:54.774 02:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:54.774 02:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:26:54.774 02:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.774 02:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:54.774 02:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.774 02:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:54.774 02:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:26:55.340 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:26:55.340 02:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:26:55.340 02:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:55.340 02:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:55.340 02:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK4 00:26:55.340 02:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:55.340 02:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK4 00:26:55.340 02:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:55.340 02:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:26:55.340 02:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.340 02:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:55.340 02:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.340 02:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:55.340 02:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:26:55.598 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:26:55.598 02:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:26:55.598 02:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:55.599 02:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:55.599 02:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK5 00:26:55.599 02:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:55.599 02:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK5 00:26:55.599 02:47:03 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:55.599 02:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:26:55.599 02:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.599 02:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:55.599 02:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.599 02:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:55.599 02:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:26:55.856 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:26:55.857 02:47:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:26:55.857 02:47:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:55.857 02:47:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:55.857 02:47:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK6 00:26:55.857 02:47:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:55.857 02:47:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK6 00:26:55.857 02:47:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:55.857 02:47:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:26:55.857 02:47:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.857 02:47:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:55.857 02:47:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.857 02:47:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:55.857 02:47:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:26:56.115 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:26:56.115 02:47:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:26:56.115 02:47:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:56.115 02:47:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:56.115 02:47:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK7 00:26:56.115 02:47:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:56.115 02:47:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK7 00:26:56.115 02:47:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:56.115 02:47:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:26:56.115 02:47:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.115 02:47:04 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:56.115 02:47:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.115 02:47:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:56.115 02:47:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:26:56.374 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:26:56.374 02:47:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:26:56.374 02:47:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:56.374 02:47:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:56.374 02:47:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK8 00:26:56.374 02:47:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:56.374 02:47:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK8 00:26:56.374 02:47:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:56.374 02:47:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:26:56.374 02:47:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.374 02:47:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:56.374 02:47:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:26:56.374 02:47:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:56.374 02:47:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:26:56.633 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:26:56.633 02:47:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:26:56.633 02:47:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:56.633 02:47:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:56.633 02:47:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK9 00:26:56.633 02:47:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:56.633 02:47:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK9 00:26:56.633 02:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:56.633 02:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:26:56.633 02:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.633 02:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:56.633 02:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.633 02:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:56.633 02:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:26:56.892 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:26:56.892 02:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:26:56.892 02:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:56.892 02:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:56.892 02:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK10 00:26:56.892 02:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:56.892 02:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK10 00:26:56.892 02:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:56.892 02:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:26:56.892 02:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.892 02:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:56.892 02:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.892 02:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:56.892 02:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:26:57.151 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:26:57.151 02:47:05 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:26:57.151 02:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:57.151 02:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:57.151 02:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK11 00:26:57.151 02:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:57.151 02:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK11 00:26:57.151 02:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:57.151 02:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:26:57.151 02:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.151 02:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:57.151 02:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.151 02:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:26:57.151 02:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:26:57.151 02:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:26:57.151 02:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:57.151 02:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 
00:26:57.151 02:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:57.151 02:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:26:57.151 02:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:57.151 02:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:57.151 rmmod nvme_tcp 00:26:57.151 rmmod nvme_fabrics 00:26:57.151 rmmod nvme_keyring 00:26:57.151 02:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:57.151 02:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:26:57.151 02:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:26:57.151 02:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@517 -- # '[' -n 3025160 ']' 00:26:57.151 02:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@518 -- # killprocess 3025160 00:26:57.151 02:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # '[' -z 3025160 ']' 00:26:57.151 02:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # kill -0 3025160 00:26:57.151 02:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # uname 00:26:57.151 02:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:57.151 02:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3025160 00:26:57.151 02:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:57.151 02:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@964 -- # '[' 
reactor_0 = sudo ']' 00:26:57.151 02:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3025160' 00:26:57.151 killing process with pid 3025160 00:26:57.151 02:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@973 -- # kill 3025160 00:26:57.151 02:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@978 -- # wait 3025160 00:27:00.436 02:47:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:00.436 02:47:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:00.436 02:47:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:00.436 02:47:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:27:00.436 02:47:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:00.436 02:47:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-save 00:27:00.436 02:47:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-restore 00:27:00.436 02:47:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:00.436 02:47:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:00.436 02:47:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:00.436 02:47:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:00.436 02:47:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:02.339 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:02.339 00:27:02.339 real 1m5.875s 00:27:02.339 user 3m50.475s 00:27:02.339 sys 0m16.314s 00:27:02.339 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:02.339 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:02.339 ************************************ 00:27:02.339 END TEST nvmf_multiconnection 00:27:02.339 ************************************ 00:27:02.339 02:47:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:27:02.339 02:47:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:02.339 02:47:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:02.339 02:47:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:02.339 ************************************ 00:27:02.339 START TEST nvmf_initiator_timeout 00:27:02.339 ************************************ 00:27:02.339 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:27:02.339 * Looking for test storage... 
00:27:02.339 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:02.339 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:02.339 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # lcov --version 00:27:02.339 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:02.339 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:02.339 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:02.339 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:02.339 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:02.339 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:27:02.339 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:27:02.339 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:27:02.339 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:27:02.339 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:27:02.339 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:27:02.339 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:27:02.339 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:02.339 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 
00:27:02.339 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:27:02.339 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:02.339 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:02.339 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:27:02.339 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:27:02.339 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:02.339 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:27:02.339 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:27:02.339 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:27:02.339 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:27:02.339 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:02.339 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:27:02.339 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:27:02.339 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:02.339 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:02.339 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:27:02.339 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:02.339 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:02.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:02.339 --rc genhtml_branch_coverage=1 00:27:02.339 --rc genhtml_function_coverage=1 00:27:02.339 --rc genhtml_legend=1 00:27:02.339 --rc geninfo_all_blocks=1 00:27:02.339 --rc geninfo_unexecuted_blocks=1 00:27:02.339 00:27:02.339 ' 00:27:02.339 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:02.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:02.339 --rc genhtml_branch_coverage=1 00:27:02.339 --rc genhtml_function_coverage=1 00:27:02.339 --rc genhtml_legend=1 00:27:02.339 --rc geninfo_all_blocks=1 00:27:02.339 --rc geninfo_unexecuted_blocks=1 00:27:02.339 00:27:02.339 ' 00:27:02.339 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:02.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:02.339 --rc genhtml_branch_coverage=1 00:27:02.339 --rc genhtml_function_coverage=1 00:27:02.339 --rc genhtml_legend=1 00:27:02.339 --rc geninfo_all_blocks=1 00:27:02.339 --rc geninfo_unexecuted_blocks=1 00:27:02.339 00:27:02.339 ' 00:27:02.339 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:02.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:02.339 --rc genhtml_branch_coverage=1 00:27:02.339 --rc genhtml_function_coverage=1 00:27:02.339 --rc genhtml_legend=1 00:27:02.339 --rc geninfo_all_blocks=1 00:27:02.339 --rc geninfo_unexecuted_blocks=1 00:27:02.339 00:27:02.339 ' 00:27:02.339 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:02.339 
02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:27:02.339 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:02.339 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:02.339 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:02.339 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:02.339 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:02.339 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:02.339 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:02.339 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:02.339 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:02.339 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:02.339 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:02.339 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:02.339 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:02.339 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:02.339 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:02.339 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:02.339 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:02.339 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:27:02.339 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:02.339 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:02.340 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:02.340 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:02.340 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:02.340 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:02.340 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:27:02.340 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:02.340 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:27:02.340 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:02.340 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:02.340 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:02.340 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:02.340 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:02.340 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:02.340 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:02.340 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:02.340 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:02.340 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:02.340 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:02.340 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:02.340 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:27:02.340 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:02.340 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:02.340 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:02.340 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:02.340 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:02.340 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:02.340 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:02.340 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:02.340 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:02.340 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:02.340 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@309 -- # xtrace_disable 00:27:02.340 02:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:04.872 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:04.872 02:47:12 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # pci_devs=() 00:27:04.872 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:04.872 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:04.872 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:04.872 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:04.872 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:04.872 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # net_devs=() 00:27:04.872 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:04.872 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # e810=() 00:27:04.872 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # local -ga e810 00:27:04.872 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # x722=() 00:27:04.872 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # local -ga x722 00:27:04.872 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # mlx=() 00:27:04.872 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # local -ga mlx 00:27:04.872 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:04.872 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:04.872 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:27:04.872 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:04.872 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:04.872 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:04.872 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:04.872 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:04.872 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:04.872 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:04.872 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:04.872 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:04.872 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:04.872 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:04.872 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:04.872 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:04.872 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:04.872 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:27:04.872 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:04.872 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:04.872 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:04.872 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:04.872 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:04.872 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:04.872 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:04.872 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:04.872 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:04.872 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:04.872 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:04.872 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:04.872 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:04.872 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:04.872 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:04.873 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:04.873 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:04.873 02:47:12 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:04.873 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:04.873 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:04.873 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:04.873 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:04.873 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:04.873 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:04.873 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:04.873 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:04.873 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:04.873 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:04.873 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:04.873 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:04.873 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:04.873 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:04.873 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:04.873 02:47:12 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:04.873 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:04.873 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:04.873 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:04.873 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:04.873 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:04.873 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:04.873 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # is_hw=yes 00:27:04.873 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:04.873 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:04.873 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:04.873 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:04.873 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:04.873 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:04.873 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:04.873 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:04.873 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:27:04.873 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:04.873 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:04.873 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:04.873 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:04.873 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:04.873 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:04.873 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:04.873 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:04.873 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:04.873 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:04.873 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:04.873 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:04.873 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:04.873 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:04.873 02:47:12 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:04.873 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:04.873 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:04.873 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:04.873 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.287 ms 00:27:04.873 00:27:04.873 --- 10.0.0.2 ping statistics --- 00:27:04.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:04.873 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:27:04.873 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:04.873 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:04.873 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.093 ms 00:27:04.873 00:27:04.873 --- 10.0.0.1 ping statistics --- 00:27:04.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:04.873 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:27:04.873 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:04.873 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # return 0 00:27:04.873 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:04.873 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:04.873 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:04.873 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:04.873 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:04.873 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:04.873 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:04.873 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:27:04.873 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:04.873 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:04.873 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:04.873 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@509 -- # nvmfpid=3033967 
00:27:04.873 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:04.873 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@510 -- # waitforlisten 3033967 00:27:04.873 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # '[' -z 3033967 ']' 00:27:04.873 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:04.873 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:04.873 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:04.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:04.873 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:04.873 02:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:04.873 [2024-11-17 02:47:13.008385] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:27:04.873 [2024-11-17 02:47:13.008541] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:04.873 [2024-11-17 02:47:13.161596] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:04.873 [2024-11-17 02:47:13.303749] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:27:04.873 [2024-11-17 02:47:13.303848] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:04.873 [2024-11-17 02:47:13.303875] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:04.873 [2024-11-17 02:47:13.303899] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:04.873 [2024-11-17 02:47:13.303920] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:04.873 [2024-11-17 02:47:13.306785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:04.873 [2024-11-17 02:47:13.306855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:04.873 [2024-11-17 02:47:13.306940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:04.873 [2024-11-17 02:47:13.306946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:05.808 02:47:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:05.808 02:47:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@868 -- # return 0 00:27:05.808 02:47:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:05.808 02:47:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:05.808 02:47:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:05.808 02:47:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:05.808 02:47:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:27:05.808 
02:47:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:05.808 02:47:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.808 02:47:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:05.809 Malloc0 00:27:05.809 02:47:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.809 02:47:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:27:05.809 02:47:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.809 02:47:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:05.809 Delay0 00:27:05.809 02:47:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.809 02:47:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:05.809 02:47:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.809 02:47:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:05.809 [2024-11-17 02:47:14.099711] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:05.809 02:47:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.809 02:47:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:27:05.809 02:47:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.809 02:47:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:05.809 02:47:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.809 02:47:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:05.809 02:47:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.809 02:47:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:05.809 02:47:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.809 02:47:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:05.809 02:47:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.809 02:47:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:05.809 [2024-11-17 02:47:14.129233] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:05.809 02:47:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.809 02:47:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:27:06.375 02:47:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:27:06.375 
02:47:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # local i=0 00:27:06.375 02:47:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:27:06.375 02:47:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:27:06.375 02:47:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # sleep 2 00:27:08.906 02:47:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:27:08.906 02:47:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:27:08.906 02:47:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:27:08.906 02:47:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:27:08.906 02:47:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:27:08.906 02:47:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # return 0 00:27:08.906 02:47:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=3034401 00:27:08.906 02:47:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:27:08.906 02:47:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:27:08.906 [global] 00:27:08.906 thread=1 00:27:08.906 invalidate=1 00:27:08.906 rw=write 00:27:08.906 time_based=1 00:27:08.906 runtime=60 00:27:08.906 ioengine=libaio 00:27:08.906 direct=1 00:27:08.906 bs=4096 00:27:08.906 
iodepth=1 00:27:08.906 norandommap=0 00:27:08.906 numjobs=1 00:27:08.906 00:27:08.906 verify_dump=1 00:27:08.906 verify_backlog=512 00:27:08.906 verify_state_save=0 00:27:08.906 do_verify=1 00:27:08.906 verify=crc32c-intel 00:27:08.906 [job0] 00:27:08.906 filename=/dev/nvme0n1 00:27:08.906 Could not set queue depth (nvme0n1) 00:27:08.906 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:27:08.906 fio-3.35 00:27:08.906 Starting 1 thread 00:27:11.433 02:47:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:27:11.433 02:47:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.433 02:47:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:11.433 true 00:27:11.433 02:47:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.433 02:47:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:27:11.433 02:47:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.433 02:47:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:11.433 true 00:27:11.433 02:47:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.433 02:47:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:27:11.433 02:47:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.433 02:47:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@10 -- # set +x 00:27:11.433 true 00:27:11.433 02:47:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.433 02:47:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:27:11.433 02:47:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.433 02:47:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:11.433 true 00:27:11.433 02:47:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.433 02:47:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:27:14.716 02:47:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:27:14.716 02:47:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.716 02:47:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:14.716 true 00:27:14.716 02:47:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.716 02:47:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:27:14.716 02:47:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.716 02:47:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:14.716 true 00:27:14.716 02:47:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.716 02:47:22 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:27:14.716 02:47:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.716 02:47:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:14.716 true 00:27:14.716 02:47:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.716 02:47:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:27:14.716 02:47:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.716 02:47:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:14.716 true 00:27:14.716 02:47:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.716 02:47:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:27:14.716 02:47:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 3034401 00:28:10.959 00:28:10.959 job0: (groupid=0, jobs=1): err= 0: pid=3034470: Sun Nov 17 02:48:17 2024 00:28:10.959 read: IOPS=8, BW=34.1KiB/s (35.0kB/s)(2048KiB/60001msec) 00:28:10.959 slat (usec): min=7, max=9844, avg=39.31, stdev=434.25 00:28:10.959 clat (usec): min=260, max=41283k, avg=116541.98, stdev=1822931.92 00:28:10.959 lat (usec): min=268, max=41283k, avg=116581.29, stdev=1822930.98 00:28:10.959 clat percentiles (usec): 00:28:10.959 | 1.00th=[ 289], 5.00th=[ 359], 10.00th=[ 445], 00:28:10.959 | 20.00th=[ 41157], 30.00th=[ 41157], 40.00th=[ 41157], 00:28:10.959 | 50.00th=[ 41157], 60.00th=[ 41157], 70.00th=[ 41157], 00:28:10.959 | 80.00th=[ 41157], 90.00th=[ 41157], 
95.00th=[ 41157], 00:28:10.960 | 99.00th=[ 41157], 99.50th=[ 41681], 99.90th=[17112761], 00:28:10.960 | 99.95th=[17112761], 99.99th=[17112761] 00:28:10.960 write: IOPS=15, BW=63.5KiB/s (65.1kB/s)(3812KiB/60001msec); 0 zone resets 00:28:10.960 slat (usec): min=9, max=29343, avg=47.57, stdev=950.02 00:28:10.960 clat (usec): min=212, max=495, avg=270.60, stdev=42.18 00:28:10.960 lat (usec): min=224, max=29715, avg=318.17, stdev=954.36 00:28:10.960 clat percentiles (usec): 00:28:10.960 | 1.00th=[ 217], 5.00th=[ 223], 10.00th=[ 227], 20.00th=[ 231], 00:28:10.960 | 30.00th=[ 237], 40.00th=[ 245], 50.00th=[ 265], 60.00th=[ 281], 00:28:10.960 | 70.00th=[ 297], 80.00th=[ 310], 90.00th=[ 322], 95.00th=[ 338], 00:28:10.960 | 99.00th=[ 388], 99.50th=[ 420], 99.90th=[ 494], 99.95th=[ 494], 00:28:10.960 | 99.99th=[ 494] 00:28:10.960 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:28:10.960 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:28:10.960 lat (usec) : 250=28.12%, 500=40.82%, 750=0.20%, 1000=0.14% 00:28:10.960 lat (msec) : 2=0.07%, 50=30.58%, >=2000=0.07% 00:28:10.960 cpu : usr=0.03%, sys=0.06%, ctx=1468, majf=0, minf=1 00:28:10.960 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:10.960 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:10.960 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:10.960 issued rwts: total=512,953,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:10.960 latency : target=0, window=0, percentile=100.00%, depth=1 00:28:10.960 00:28:10.960 Run status group 0 (all jobs): 00:28:10.960 READ: bw=34.1KiB/s (35.0kB/s), 34.1KiB/s-34.1KiB/s (35.0kB/s-35.0kB/s), io=2048KiB (2097kB), run=60001-60001msec 00:28:10.960 WRITE: bw=63.5KiB/s (65.1kB/s), 63.5KiB/s-63.5KiB/s (65.1kB/s-65.1kB/s), io=3812KiB (3903kB), run=60001-60001msec 00:28:10.960 00:28:10.960 Disk stats (read/write): 00:28:10.960 nvme0n1: ios=564/571, 
merge=0/0, ticks=18717/150, in_queue=18867, util=99.86% 00:28:10.960 02:48:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:28:10.960 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:28:10.961 02:48:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:28:10.961 02:48:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # local i=0 00:28:10.961 02:48:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:28:10.961 02:48:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:10.961 02:48:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:28:10.961 02:48:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:10.961 02:48:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1235 -- # return 0 00:28:10.961 02:48:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:28:10.961 02:48:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:28:10.961 nvmf hotplug test: fio successful as expected 00:28:10.961 02:48:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:10.961 02:48:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.961 02:48:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 
00:28:10.961 02:48:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:10.961 02:48:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state
00:28:10.961 02:48:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT
00:28:10.962 02:48:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini
00:28:10.962 02:48:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # nvmfcleanup
00:28:10.962 02:48:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync
00:28:10.962 02:48:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:28:10.962 02:48:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e
00:28:10.962 02:48:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20}
00:28:10.962 02:48:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:28:10.962 rmmod nvme_tcp
00:28:10.962 rmmod nvme_fabrics
00:28:10.962 rmmod nvme_keyring
00:28:10.962 02:48:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:28:10.962 02:48:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e
00:28:10.962 02:48:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0
00:28:10.962 02:48:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@517 -- # '[' -n 3033967 ']'
00:28:10.962 02:48:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@518 -- # killprocess 3033967
00:28:10.962 02:48:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # '[' -z 3033967 ']'
00:28:10.962 02:48:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # kill -0 3033967
00:28:10.962 02:48:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # uname
00:28:10.962 02:48:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:10.962 02:48:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3033967
00:28:10.962 02:48:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:28:10.962 02:48:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:28:10.962 02:48:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3033967'
00:28:10.962 killing process with pid 3033967
00:28:10.962 02:48:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@973 -- # kill 3033967
00:28:10.962 02:48:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@978 -- # wait 3033967
00:28:10.962 02:48:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:28:10.962 02:48:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:28:10.962 02:48:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:28:10.962 02:48:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr
00:28:10.963 02:48:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-save
00:28:10.963 02:48:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:28:10.963 02:48:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-restore
00:28:10.963 02:48:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:28:10.963 02:48:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # remove_spdk_ns
00:28:10.963 02:48:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:28:10.963 02:48:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:28:10.963 02:48:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:28:12.338 02:48:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:28:12.338
00:28:12.338 real 1m10.065s
00:28:12.338 user 4m15.716s
00:28:12.338 sys 0m7.068s
00:28:12.338 02:48:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable
00:28:12.338 02:48:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:28:12.338 ************************************
00:28:12.338 END TEST nvmf_initiator_timeout
00:28:12.338 ************************************
00:28:12.338 02:48:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]]
00:28:12.338 02:48:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']'
00:28:12.338 02:48:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs
00:28:12.338 02:48:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable
00:28:12.338 02:48:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:28:14.253 02:48:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:28:14.253 02:48:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=()
00:28:14.253 02:48:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs
00:28:14.253 02:48:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=()
00:28:14.253 02:48:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:28:14.253 02:48:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=()
00:28:14.253 02:48:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers
00:28:14.253 02:48:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=()
00:28:14.253 02:48:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs
00:28:14.253 02:48:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=()
00:28:14.253 02:48:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810
00:28:14.253 02:48:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=()
00:28:14.253 02:48:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722
00:28:14.253 02:48:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=()
00:28:14.253 02:48:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx
00:28:14.253 02:48:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:28:14.253 02:48:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:28:14.253 02:48:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:28:14.253 02:48:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:28:14.253 02:48:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:28:14.253 02:48:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:28:14.253 02:48:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:28:14.253 02:48:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:28:14.253 02:48:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:28:14.253 02:48:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:28:14.253 02:48:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:28:14.253 02:48:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:28:14.253 02:48:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:28:14.253 02:48:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:28:14.253 02:48:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:28:14.253 02:48:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:28:14.253 02:48:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:28:14.253 02:48:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:28:14.253 02:48:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:28:14.253 02:48:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:28:14.253 Found 0000:0a:00.0 (0x8086 - 0x159b)
00:28:14.253 02:48:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:28:14.253 02:48:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:28:14.253 02:48:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:28:14.253 02:48:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:28:14.253 02:48:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:28:14.253 02:48:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:28:14.253 02:48:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:28:14.253 Found 0000:0a:00.1 (0x8086 - 0x159b)
00:28:14.253 02:48:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:28:14.253 02:48:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:28:14.253 02:48:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:28:14.253 02:48:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:28:14.253 02:48:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:28:14.253 02:48:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:28:14.253 02:48:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:28:14.253 02:48:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:28:14.253 02:48:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:28:14.253 02:48:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:28:14.253 02:48:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:28:14.253 02:48:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:28:14.253 02:48:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]]
00:28:14.253 02:48:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:28:14.253 02:48:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:28:14.253 02:48:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:28:14.253 Found net devices under 0000:0a:00.0: cvl_0_0
00:28:14.253 02:48:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:28:14.253 02:48:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:28:14.253 02:48:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:28:14.253 02:48:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:28:14.253 02:48:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:28:14.253 02:48:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]]
00:28:14.253 02:48:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:28:14.253 02:48:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:28:14.253 02:48:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:28:14.253 Found net devices under 0000:0a:00.1: cvl_0_1
00:28:14.253 02:48:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:28:14.253 02:48:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:28:14.253 02:48:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:28:14.253 02:48:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 ))
00:28:14.253 02:48:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp
00:28:14.253 02:48:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:28:14.253 02:48:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:28:14.253 02:48:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:28:14.513 ************************************
00:28:14.513 START TEST nvmf_perf_adq ************************************
00:28:14.513 02:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp
00:28:14.513 * Looking for test storage...
00:28:14.513 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:28:14.513 02:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:28:14.513 02:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lcov --version
00:28:14.513 02:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:28:14.513 02:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:28:14.513 02:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:28:14.513 02:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l
00:28:14.513 02:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l
00:28:14.513 02:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-:
00:28:14.513 02:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1
00:28:14.513 02:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-:
00:28:14.513 02:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2
00:28:14.513 02:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<'
00:28:14.513 02:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2
00:28:14.513 02:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1
00:28:14.513 02:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:28:14.513 02:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in
00:28:14.513 02:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1
00:28:14.513 02:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 ))
00:28:14.513 02:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:28:14.513 02:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1
00:28:14.513 02:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1
00:28:14.513 02:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:28:14.513 02:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1
00:28:14.513 02:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1
00:28:14.513 02:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2
00:28:14.513 02:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2
00:28:14.513 02:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:28:14.513 02:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2
00:28:14.513 02:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2
00:28:14.513 02:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:28:14.513 02:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:28:14.513 02:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0
00:28:14.513 02:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:28:14.513 02:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:28:14.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:14.513 --rc genhtml_branch_coverage=1
00:28:14.513 --rc genhtml_function_coverage=1
00:28:14.513 --rc genhtml_legend=1
00:28:14.513 --rc geninfo_all_blocks=1
00:28:14.513 --rc geninfo_unexecuted_blocks=1
00:28:14.513
00:28:14.513 '
00:28:14.513 02:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:28:14.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:14.513 --rc genhtml_branch_coverage=1
00:28:14.513 --rc genhtml_function_coverage=1
00:28:14.513 --rc genhtml_legend=1
00:28:14.513 --rc geninfo_all_blocks=1
00:28:14.513 --rc geninfo_unexecuted_blocks=1
00:28:14.513
00:28:14.513 '
00:28:14.513 02:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:28:14.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:14.513 --rc genhtml_branch_coverage=1
00:28:14.513 --rc genhtml_function_coverage=1
00:28:14.513 --rc genhtml_legend=1
00:28:14.513 --rc geninfo_all_blocks=1
00:28:14.513 --rc geninfo_unexecuted_blocks=1
00:28:14.513
00:28:14.513 '
00:28:14.513 02:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:28:14.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:14.513 --rc genhtml_branch_coverage=1
00:28:14.513 --rc genhtml_function_coverage=1
00:28:14.513 --rc genhtml_legend=1
00:28:14.513 --rc geninfo_all_blocks=1
00:28:14.513 --rc geninfo_unexecuted_blocks=1
00:28:14.513
00:28:14.513 '
00:28:14.513 02:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:28:14.513 02:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s
00:28:14.513 02:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:28:14.513 02:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:28:14.513 02:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:28:14.513 02:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:28:14.513 02:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:28:14.513 02:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:28:14.513 02:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:28:14.513 02:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:28:14.513 02:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:28:14.513 02:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:28:14.513 02:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:28:14.513 02:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:28:14.513 02:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:28:14.513 02:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:28:14.513 02:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:28:14.513 02:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:28:14.513 02:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:28:14.513 02:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob
00:28:14.513 02:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:28:14.513 02:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:28:14.513 02:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:28:14.513 02:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:14.513 02:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:14.514 02:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:14.514 02:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH
00:28:14.514 02:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:14.514 02:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0
00:28:14.514 02:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:28:14.514 02:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:28:14.514 02:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:28:14.514 02:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:28:14.514 02:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:28:14.514 02:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:28:14.514 02:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:28:14.514 02:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:28:14.514 02:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0
00:28:14.514 02:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs
00:28:14.514 02:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable
00:28:14.514 02:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:28:16.417 02:48:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:28:16.417 02:48:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=()
00:28:16.417 02:48:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs
00:28:16.417 02:48:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=()
00:28:16.417 02:48:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:28:16.417 02:48:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=()
00:28:16.417 02:48:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers
00:28:16.417 02:48:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=()
00:28:16.417 02:48:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs
00:28:16.417 02:48:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=()
00:28:16.417 02:48:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810
00:28:16.417 02:48:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=()
00:28:16.417 02:48:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722
00:28:16.417 02:48:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=()
00:28:16.417 02:48:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx
00:28:16.417 02:48:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:28:16.417 02:48:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:28:16.417 02:48:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:28:16.417 02:48:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:28:16.417 02:48:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:28:16.417 02:48:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:28:16.417 02:48:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:28:16.417 02:48:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:28:16.417 02:48:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:28:16.417 02:48:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:28:16.418 02:48:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:28:16.418 02:48:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:28:16.418 02:48:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:28:16.418 02:48:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:28:16.418 02:48:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:28:16.418 02:48:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:28:16.418 02:48:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:28:16.418 02:48:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:28:16.418 02:48:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:28:16.418 02:48:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:28:16.418 Found 0000:0a:00.0 (0x8086 - 0x159b)
00:28:16.418 02:48:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:28:16.418 02:48:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:28:16.418 02:48:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:28:16.418 02:48:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:28:16.418 02:48:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:28:16.418 02:48:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:28:16.418 02:48:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:28:16.418 Found 0000:0a:00.1 (0x8086 - 0x159b)
00:28:16.418 02:48:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:28:16.418 02:48:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:28:16.418 02:48:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:28:16.418 02:48:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:28:16.418 02:48:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:28:16.418 02:48:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:28:16.418 02:48:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:28:16.418 02:48:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:28:16.418 02:48:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:28:16.418 02:48:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:28:16.418 02:48:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:28:16.418 02:48:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:28:16.418 02:48:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]]
00:28:16.418 02:48:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:28:16.418 02:48:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:28:16.418 02:48:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:28:16.418 Found net devices under 0000:0a:00.0: cvl_0_0
00:28:16.418 02:48:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:28:16.418 02:48:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:28:16.418 02:48:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:28:16.418 02:48:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:28:16.418 02:48:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:28:16.418 02:48:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]]
00:28:16.418 02:48:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:28:16.418 02:48:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:28:16.418 02:48:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:28:16.418 Found net devices under 0000:0a:00.1: cvl_0_1
00:28:16.418 02:48:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:28:16.418 02:48:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:28:16.418 02:48:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:28:16.418 02:48:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 ))
00:28:16.418 02:48:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
00:28:16.418 02:48:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver
00:28:16.418 02:48:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio
00:28:16.418 02:48:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice
00:28:17.355 02:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice
00:28:19.885 02:48:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5
00:28:25.156 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit
00:28:25.156 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:28:25.156 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:28:25.156 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs
00:28:25.156 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no
00:28:25.156 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns
00:28:25.156 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:28:25.156 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:28:25.156 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:28:25.156 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:28:25.156 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:28:25.156 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable
00:28:25.156 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:28:25.156 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:28:25.156 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=()
00:28:25.156 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs
00:28:25.156 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=()
00:28:25.156 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:28:25.156 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=()
00:28:25.156 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers
00:28:25.156 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=()
00:28:25.156 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs
00:28:25.156 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=()
00:28:25.156 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810
00:28:25.156 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=()
00:28:25.156 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722
00:28:25.156 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=()
00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx
00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:28:25.157 02:48:33
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:25.157 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 
0000:0a:00.1 (0x8086 - 0x159b)' 00:28:25.157 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:25.157 Found net devices under 0000:0a:00.0: cvl_0_0 
00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:25.157 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 
-- # ip link set cvl_0_1 up
00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:28:25.157 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:28:25.157 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.266 ms
00:28:25.157
00:28:25.157 --- 10.0.0.2 ping statistics ---
00:28:25.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:25.157 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms
00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:28:25.157 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:28:25.157 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms
00:28:25.157
00:28:25.157 --- 10.0.0.1 ping statistics ---
00:28:25.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:25.157 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms
00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0
00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc
00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable
00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:28:25.157 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=3046855
00:28:25.158 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc
00:28:25.158 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 3046855
00:28:25.158 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 3046855 ']'
00:28:25.158 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:25.158 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100
00:28:25.158 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:25.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:25.158 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable
00:28:25.158 02:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:28:25.158 [2024-11-17 02:48:33.342354] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization...
00:28:25.158 [2024-11-17 02:48:33.342503] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:28:25.158 [2024-11-17 02:48:33.496296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:28:25.416 [2024-11-17 02:48:33.642581] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:28:25.416 [2024-11-17 02:48:33.642670] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:28:25.416 [2024-11-17 02:48:33.642697] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:28:25.416 [2024-11-17 02:48:33.642722] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:28:25.416 [2024-11-17 02:48:33.642742] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:28:25.416 [2024-11-17 02:48:33.645635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:28:25.416 [2024-11-17 02:48:33.645699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:28:25.416 [2024-11-17 02:48:33.645766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:28:25.416 [2024-11-17 02:48:33.645772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:28:25.982 02:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:28:25.982 02:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0
00:28:25.982 02:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:28:25.982 02:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable
00:28:25.982 02:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:28:25.982 02:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:28:25.982 02:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0
00:28:25.982 02:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl
00:28:25.982 02:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name
00:28:25.982 02:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq --
common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.982 02:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:25.982 02:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.982 02:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:28:25.982 02:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:28:25.982 02:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.982 02:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:25.982 02:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.982 02:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:28:25.982 02:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.982 02:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:26.549 02:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.549 02:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:28:26.549 02:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.549 02:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:26.549 [2024-11-17 02:48:34.707175] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:26.549 02:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.549 
02:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:26.549 02:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.549 02:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:26.549 Malloc1 00:28:26.549 02:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.549 02:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:26.549 02:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.549 02:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:26.549 02:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.549 02:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:26.549 02:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.549 02:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:26.549 02:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.549 02:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:26.549 02:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.549 02:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:26.549 [2024-11-17 02:48:34.826751] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:28:26.549 02:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.549 02:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=3047023 00:28:26.549 02:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:28:26.549 02:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:28.452 02:48:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:28:28.452 02:48:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.452 02:48:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:28.452 02:48:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.452 02:48:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:28:28.452 "tick_rate": 2700000000, 00:28:28.452 "poll_groups": [ 00:28:28.452 { 00:28:28.452 "name": "nvmf_tgt_poll_group_000", 00:28:28.452 "admin_qpairs": 1, 00:28:28.452 "io_qpairs": 1, 00:28:28.452 "current_admin_qpairs": 1, 00:28:28.452 "current_io_qpairs": 1, 00:28:28.452 "pending_bdev_io": 0, 00:28:28.452 "completed_nvme_io": 16922, 00:28:28.452 "transports": [ 00:28:28.452 { 00:28:28.452 "trtype": "TCP" 00:28:28.452 } 00:28:28.452 ] 00:28:28.452 }, 00:28:28.452 { 00:28:28.452 "name": "nvmf_tgt_poll_group_001", 00:28:28.452 "admin_qpairs": 0, 00:28:28.452 "io_qpairs": 1, 00:28:28.452 "current_admin_qpairs": 0, 00:28:28.452 "current_io_qpairs": 1, 00:28:28.452 "pending_bdev_io": 0, 00:28:28.452 "completed_nvme_io": 17068, 00:28:28.452 "transports": [ 
00:28:28.452 { 00:28:28.452 "trtype": "TCP" 00:28:28.452 } 00:28:28.452 ] 00:28:28.452 }, 00:28:28.452 { 00:28:28.452 "name": "nvmf_tgt_poll_group_002", 00:28:28.452 "admin_qpairs": 0, 00:28:28.452 "io_qpairs": 1, 00:28:28.452 "current_admin_qpairs": 0, 00:28:28.452 "current_io_qpairs": 1, 00:28:28.452 "pending_bdev_io": 0, 00:28:28.452 "completed_nvme_io": 17327, 00:28:28.452 "transports": [ 00:28:28.452 { 00:28:28.452 "trtype": "TCP" 00:28:28.452 } 00:28:28.452 ] 00:28:28.452 }, 00:28:28.452 { 00:28:28.452 "name": "nvmf_tgt_poll_group_003", 00:28:28.452 "admin_qpairs": 0, 00:28:28.452 "io_qpairs": 1, 00:28:28.452 "current_admin_qpairs": 0, 00:28:28.452 "current_io_qpairs": 1, 00:28:28.452 "pending_bdev_io": 0, 00:28:28.452 "completed_nvme_io": 16894, 00:28:28.452 "transports": [ 00:28:28.452 { 00:28:28.452 "trtype": "TCP" 00:28:28.452 } 00:28:28.452 ] 00:28:28.452 } 00:28:28.452 ] 00:28:28.452 }' 00:28:28.452 02:48:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:28:28.452 02:48:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:28:28.452 02:48:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:28:28.452 02:48:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:28:28.452 02:48:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 3047023 00:28:36.562 Initializing NVMe Controllers 00:28:36.562 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:36.562 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:36.562 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:36.562 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:36.562 Associating TCP (addr:10.0.0.2 
subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7
00:28:36.562 Initialization complete. Launching workers.
00:28:36.562 ========================================================
00:28:36.562 Latency(us)
00:28:36.562 Device Information : IOPS MiB/s Average min max
00:28:36.562 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 9004.87 35.18 7109.38 3234.66 11639.80
00:28:36.562 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 9029.47 35.27 7087.50 3277.36 11870.78
00:28:36.562 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 9204.46 35.95 6954.54 3215.12 11191.82
00:28:36.562 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 9041.27 35.32 7085.46 3194.18 44437.21
00:28:36.562 ========================================================
00:28:36.562 Total : 36280.07 141.72 7058.69 3194.18 44437.21
00:28:36.562
00:28:36.820 02:48:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini
00:28:36.820 02:48:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup
00:28:36.820 02:48:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync
00:28:36.821 02:48:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:28:36.821 02:48:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e
00:28:36.821 02:48:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20}
00:28:36.821 02:48:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:28:36.821 rmmod nvme_tcp
00:28:36.821 rmmod nvme_fabrics
00:28:36.821 rmmod nvme_keyring
00:28:36.821 02:48:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:28:36.821 02:48:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e
00:28:36.821 02:48:45
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:28:36.821 02:48:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 3046855 ']' 00:28:36.821 02:48:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 3046855 00:28:36.821 02:48:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 3046855 ']' 00:28:36.821 02:48:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 3046855 00:28:36.821 02:48:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:28:36.821 02:48:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:36.821 02:48:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3046855 00:28:36.821 02:48:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:36.821 02:48:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:36.821 02:48:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3046855' 00:28:36.821 killing process with pid 3046855 00:28:36.821 02:48:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 3046855 00:28:36.821 02:48:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 3046855 00:28:38.196 02:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:38.196 02:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:38.196 02:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:38.196 02:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:28:38.196 
02:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:28:38.196 02:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:38.196 02:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:28:38.196 02:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:38.196 02:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:38.196 02:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:38.196 02:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:38.196 02:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:40.098 02:48:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:40.098 02:48:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:28:40.098 02:48:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:28:40.098 02:48:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:28:41.062 02:48:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:28:42.962 02:48:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:28:48.318 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:28:48.318 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:48.318 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:48.318 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@476 -- # prepare_net_devs 00:28:48.318 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:48.318 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:48.318 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:48.318 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:48.318 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:48.318 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:48.318 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:48.318 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:48.318 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:48.318 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:48.319 02:48:56 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:48.319 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:48.319 02:48:56 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:48.319 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:0a:00.0: cvl_0_0' 00:28:48.319 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:48.319 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 
10.0.0.2/24 dev cvl_0_0 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:48.319 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:48.319 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:48.320 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms 00:28:48.320 00:28:48.320 --- 10.0.0.2 ping statistics --- 00:28:48.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:48.320 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:28:48.320 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:48.320 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:48.320 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:28:48.320 00:28:48.320 --- 10.0.0.1 ping statistics --- 00:28:48.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:48.320 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:28:48.320 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:48.320 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:28:48.320 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:48.320 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:48.320 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:48.320 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:48.320 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:48.320 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:48.320 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:48.320 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:28:48.320 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:28:48.320 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:28:48.320 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:28:48.320 net.core.busy_poll = 1 00:28:48.320 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:28:48.320 net.core.busy_read = 1 00:28:48.320 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:28:48.320 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:28:48.320 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:28:48.320 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:28:48.320 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:28:48.320 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:48.320 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:48.320 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:48.320 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:48.320 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=3049776 00:28:48.320 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:48.320 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 
3049776 00:28:48.320 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 3049776 ']' 00:28:48.320 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:48.320 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:48.320 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:48.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:48.320 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:48.320 02:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:48.579 [2024-11-17 02:48:56.830873] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:28:48.579 [2024-11-17 02:48:56.831011] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:48.579 [2024-11-17 02:48:56.983078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:48.837 [2024-11-17 02:48:57.124601] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:48.837 [2024-11-17 02:48:57.124690] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:48.837 [2024-11-17 02:48:57.124715] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:48.837 [2024-11-17 02:48:57.124739] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:28:48.837 [2024-11-17 02:48:57.124758] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:48.837 [2024-11-17 02:48:57.127595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:48.837 [2024-11-17 02:48:57.127667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:48.837 [2024-11-17 02:48:57.127769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:48.837 [2024-11-17 02:48:57.127776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:49.401 02:48:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:49.401 02:48:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:28:49.401 02:48:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:49.401 02:48:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:49.401 02:48:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:49.401 02:48:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:49.401 02:48:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:28:49.401 02:48:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:28:49.401 02:48:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:28:49.401 02:48:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.401 02:48:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:49.659 02:48:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:28:49.659 02:48:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:28:49.659 02:48:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:28:49.659 02:48:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.659 02:48:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:49.659 02:48:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.659 02:48:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:28:49.659 02:48:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.659 02:48:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:49.917 02:48:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.918 02:48:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:28:49.918 02:48:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.918 02:48:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:49.918 [2024-11-17 02:48:58.250347] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:49.918 02:48:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.918 02:48:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:49.918 02:48:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.918 02:48:58 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:49.918 Malloc1 00:28:49.918 02:48:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.918 02:48:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:49.918 02:48:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.918 02:48:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:49.918 02:48:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.918 02:48:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:49.918 02:48:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.918 02:48:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:49.918 02:48:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.918 02:48:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:49.918 02:48:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.918 02:48:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:49.918 [2024-11-17 02:48:58.367494] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:49.918 02:48:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.918 02:48:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=3050049 
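The target-side setup replayed above follows a fixed JSON-RPC sequence: set posix socket options with placement IDs, finish framework init, create the TCP transport with a socket priority, then build a malloc bdev, subsystem, namespace, and listener. A minimal dry-run sketch of that sequence (the `scripts/rpc.py` path and the `run` echo wrapper are assumptions for illustration; the RPC names and arguments are taken verbatim from the log):

```shell
# Dry-run of the nvmf target RPC sequence from perf_adq.sh. Echoes each
# command instead of executing it, so it runs without SPDK installed;
# on a live target, replace 'echo' in run() with real execution.
RPC="scripts/rpc.py"    # assumed location of SPDK's RPC client
run() { echo "+ $*"; }

run "$RPC" sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix
run "$RPC" framework_start_init
run "$RPC" nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
run "$RPC" bdev_malloc_create 64 512 -b Malloc1
run "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
run "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
run "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

The `--sock-priority 1` on the transport matches the `hw_tc 1` used in the tc flower filter earlier, which is what steers the 4420 connections onto the ADQ traffic class.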
00:28:49.918 02:48:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:49.918 02:48:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:28:52.448 02:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:28:52.448 02:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.448 02:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:52.448 02:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.448 02:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:28:52.448 "tick_rate": 2700000000, 00:28:52.448 "poll_groups": [ 00:28:52.448 { 00:28:52.448 "name": "nvmf_tgt_poll_group_000", 00:28:52.448 "admin_qpairs": 1, 00:28:52.448 "io_qpairs": 2, 00:28:52.448 "current_admin_qpairs": 1, 00:28:52.448 "current_io_qpairs": 2, 00:28:52.448 "pending_bdev_io": 0, 00:28:52.448 "completed_nvme_io": 19369, 00:28:52.448 "transports": [ 00:28:52.448 { 00:28:52.448 "trtype": "TCP" 00:28:52.448 } 00:28:52.448 ] 00:28:52.448 }, 00:28:52.448 { 00:28:52.448 "name": "nvmf_tgt_poll_group_001", 00:28:52.448 "admin_qpairs": 0, 00:28:52.448 "io_qpairs": 2, 00:28:52.448 "current_admin_qpairs": 0, 00:28:52.448 "current_io_qpairs": 2, 00:28:52.448 "pending_bdev_io": 0, 00:28:52.448 "completed_nvme_io": 19426, 00:28:52.448 "transports": [ 00:28:52.448 { 00:28:52.448 "trtype": "TCP" 00:28:52.448 } 00:28:52.448 ] 00:28:52.448 }, 00:28:52.448 { 00:28:52.448 "name": "nvmf_tgt_poll_group_002", 00:28:52.448 "admin_qpairs": 0, 00:28:52.448 "io_qpairs": 0, 00:28:52.448 "current_admin_qpairs": 0, 
00:28:52.448 "current_io_qpairs": 0, 00:28:52.448 "pending_bdev_io": 0, 00:28:52.448 "completed_nvme_io": 0, 00:28:52.448 "transports": [ 00:28:52.448 { 00:28:52.448 "trtype": "TCP" 00:28:52.448 } 00:28:52.448 ] 00:28:52.448 }, 00:28:52.448 { 00:28:52.448 "name": "nvmf_tgt_poll_group_003", 00:28:52.448 "admin_qpairs": 0, 00:28:52.448 "io_qpairs": 0, 00:28:52.448 "current_admin_qpairs": 0, 00:28:52.448 "current_io_qpairs": 0, 00:28:52.448 "pending_bdev_io": 0, 00:28:52.448 "completed_nvme_io": 0, 00:28:52.448 "transports": [ 00:28:52.448 { 00:28:52.448 "trtype": "TCP" 00:28:52.448 } 00:28:52.448 ] 00:28:52.448 } 00:28:52.448 ] 00:28:52.448 }' 00:28:52.448 02:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:28:52.448 02:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:28:52.448 02:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:28:52.448 02:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:28:52.448 02:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 3050049 00:29:00.562 Initializing NVMe Controllers 00:29:00.562 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:00.562 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:29:00.562 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:29:00.562 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:29:00.562 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:29:00.562 Initialization complete. Launching workers. 
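The `nvmf_get_stats` output above is the pass/fail signal for ADQ: with a 0xF core mask and traffic pinned to two queues, exactly two of the four poll groups should show `current_io_qpairs == 0`. The test script counts them with `jq ... | wc -l`; the same check can be sketched with plain `grep` against a stats file of that shape (the `/tmp` path and the sample JSON are hypothetical, mirroring the layout shown in the log):

```shell
# Count poll groups with no active I/O qpairs in nvmf_get_stats output.
# Sample file mimics the JSON shape printed by the test above.
cat > /tmp/nvmf_stats.json <<'EOF'
{ "poll_groups": [
  { "name": "nvmf_tgt_poll_group_000", "current_io_qpairs": 2 },
  { "name": "nvmf_tgt_poll_group_001", "current_io_qpairs": 2 },
  { "name": "nvmf_tgt_poll_group_002", "current_io_qpairs": 0 },
  { "name": "nvmf_tgt_poll_group_003", "current_io_qpairs": 0 }
] }
EOF
# The real script uses jq; for this fixed one-field-per-line layout a
# substring count is equivalent.
idle=$(grep -c '"current_io_qpairs": 0' /tmp/nvmf_stats.json)
echo "idle poll groups: $idle"   # 2 idle of 4 means traffic landed on two cores
```

The log's `[[ 2 -lt 2 ]]` comparison then fails (as intended), so the test proceeds instead of flagging too few busy poll groups.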
00:29:00.562 ======================================================== 00:29:00.562 Latency(us) 00:29:00.562 Device Information : IOPS MiB/s Average min max 00:29:00.562 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5371.39 20.98 11955.98 2315.10 58107.74 00:29:00.562 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5196.39 20.30 12338.70 1625.42 57316.68 00:29:00.562 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5242.39 20.48 12212.30 2761.54 57822.51 00:29:00.562 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5326.19 20.81 12019.86 2379.42 58153.59 00:29:00.562 ======================================================== 00:29:00.562 Total : 21136.35 82.56 12129.74 1625.42 58153.59 00:29:00.562 00:29:00.562 02:49:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:29:00.562 02:49:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:00.562 02:49:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:29:00.562 02:49:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:00.562 02:49:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:29:00.562 02:49:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:00.562 02:49:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:00.562 rmmod nvme_tcp 00:29:00.562 rmmod nvme_fabrics 00:29:00.562 rmmod nvme_keyring 00:29:00.562 02:49:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:00.562 02:49:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:29:00.562 02:49:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:29:00.562 02:49:08 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 3049776 ']' 00:29:00.562 02:49:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 3049776 00:29:00.562 02:49:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 3049776 ']' 00:29:00.562 02:49:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 3049776 00:29:00.562 02:49:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:29:00.562 02:49:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:00.562 02:49:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3049776 00:29:00.562 02:49:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:00.562 02:49:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:00.562 02:49:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3049776' 00:29:00.562 killing process with pid 3049776 00:29:00.562 02:49:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 3049776 00:29:00.562 02:49:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 3049776 00:29:01.938 02:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:01.938 02:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:01.938 02:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:01.938 02:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:29:01.938 02:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:29:01.938 
02:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:29:01.938 02:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore
00:29:01.938 02:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:29:01.938 02:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns
00:29:01.938 02:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:01.938 02:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:29:01.938 02:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:03.843 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:29:03.843 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT
00:29:03.843
00:29:03.843 real 0m49.366s
00:29:03.843 user 2m54.922s
00:29:03.843 sys 0m9.430s
00:29:03.843 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable
00:29:03.843 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:29:03.843 ************************************
00:29:03.843 END TEST nvmf_perf_adq
00:29:03.843 ************************************
00:29:03.843 02:49:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp
00:29:03.843 02:49:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:29:03.843 02:49:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:29:03.843 02:49:12 nvmf_tcp.nvmf_target_extra --
common/autotest_common.sh@10 -- # set +x
00:29:03.843 ************************************
00:29:03.843 START TEST nvmf_shutdown
00:29:03.843 ************************************
00:29:03.843 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp
00:29:03.843 * Looking for test storage...
00:29:03.843 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:29:03.843 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:29:03.843 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version
00:29:03.843 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:29:03.843 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:29:03.843 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:29:03.843 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l
00:29:03.843 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l
00:29:03.843 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-:
00:29:03.843 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1
00:29:03.843 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-:
00:29:03.843 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2
00:29:03.843 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<'
00:29:03.843 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2
00:29:03.843 02:49:12
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:29:03.843 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:03.843 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:29:03.843 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:29:03.843 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:03.843 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:03.843 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:29:03.843 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:29:03.843 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:03.843 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:29:03.843 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:29:03.843 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:29:03.843 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:29:03.843 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:03.843 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:29:03.843 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:29:03.843 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:03.843 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:03.843 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:29:03.843 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:03.843 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:03.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:03.843 --rc genhtml_branch_coverage=1 00:29:03.843 --rc genhtml_function_coverage=1 00:29:03.843 --rc genhtml_legend=1 00:29:03.843 --rc geninfo_all_blocks=1 00:29:03.843 --rc geninfo_unexecuted_blocks=1 00:29:03.843 00:29:03.843 ' 00:29:03.843 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:03.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:03.843 --rc genhtml_branch_coverage=1 00:29:03.843 --rc genhtml_function_coverage=1 00:29:03.843 --rc genhtml_legend=1 00:29:03.843 --rc geninfo_all_blocks=1 00:29:03.843 --rc geninfo_unexecuted_blocks=1 00:29:03.843 00:29:03.843 ' 00:29:03.843 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:03.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:03.843 --rc genhtml_branch_coverage=1 00:29:03.843 --rc genhtml_function_coverage=1 00:29:03.843 --rc genhtml_legend=1 00:29:03.843 --rc geninfo_all_blocks=1 00:29:03.843 --rc geninfo_unexecuted_blocks=1 00:29:03.843 00:29:03.843 ' 00:29:03.843 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:03.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:03.843 --rc genhtml_branch_coverage=1 00:29:03.843 --rc genhtml_function_coverage=1 00:29:03.843 --rc genhtml_legend=1 00:29:03.843 --rc geninfo_all_blocks=1 00:29:03.843 --rc geninfo_unexecuted_blocks=1 00:29:03.843 00:29:03.843 ' 00:29:03.843 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:03.843 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:29:03.843 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:03.843 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:03.843 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:03.843 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:03.843 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:03.844 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:03.844 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:03.844 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:03.844 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:03.844 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:03.844 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:03.844 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:03.844 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:03.844 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:03.844 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:29:03.844 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:03.844 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:03.844 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:29:03.844 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:03.844 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:03.844 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:03.844 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:03.844 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:03.844 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:03.844 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:29:03.844 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:03.844 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:29:03.844 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:03.844 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:03.844 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:03.844 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:03.844 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:03.844 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:03.844 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:03.844 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:03.844 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:03.844 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:03.844 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:29:03.844 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:29:03.844 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:29:03.844 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:03.844 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:03.844 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:04.103 ************************************ 00:29:04.103 START TEST nvmf_shutdown_tc1 00:29:04.103 ************************************ 00:29:04.103 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:29:04.103 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:29:04.103 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:04.103 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:04.103 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:04.103 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:04.103 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:04.103 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:04.103 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:04.103 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:29:04.103 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:04.103 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:04.103 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:04.103 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:04.103 02:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:06.005 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:06.005 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:06.005 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:06.005 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:06.005 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:06.005 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:06.005 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:06.005 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:29:06.005 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:06.005 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:29:06.005 02:49:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:29:06.005 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:29:06.005 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:29:06.005 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:29:06.005 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:06.005 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:06.005 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:06.005 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:06.005 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:06.005 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:06.005 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:06.005 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:06.005 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:06.005 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:06.005 02:49:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:06.005 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:06.005 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:06.005 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:06.005 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:06.005 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:06.005 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:06.005 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:06.005 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:06.005 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:06.005 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:06.005 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:06.005 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:06.005 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:06.005 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:06.005 02:49:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:06.005 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:06.005 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:06.005 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:06.005 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:06.005 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:06.005 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:06.005 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:06.005 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:06.005 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:06.005 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:06.005 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:06.005 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:06.005 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:06.005 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:06.005 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:06.005 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:06.005 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:06.005 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:06.005 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:06.005 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:06.005 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:06.005 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:06.005 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:06.005 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:06.005 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:06.005 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:06.005 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:06.005 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:06.005 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:06.005 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- 
# echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:06.005 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:06.005 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:06.005 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:06.005 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:29:06.005 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:06.005 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:06.005 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:06.005 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:06.005 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:06.005 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:06.005 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:06.005 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:06.005 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:06.005 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:06.005 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:06.005 02:49:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:06.005 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:06.005 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:06.005 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:06.005 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:06.005 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:06.006 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:06.006 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:06.006 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:06.006 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:06.006 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:06.264 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:06.264 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:06.264 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:29:06.264 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:29:06.264 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:29:06.264 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.152 ms
00:29:06.264
00:29:06.264 --- 10.0.0.2 ping statistics ---
00:29:06.264 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:06.264 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms
00:29:06.264 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:29:06.264 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:29:06.264 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.081 ms
00:29:06.264
00:29:06.264 --- 10.0.0.1 ping statistics ---
00:29:06.264 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:06.264 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms
00:29:06.264 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:29:06.264 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0
00:29:06.264 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:29:06.264 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:29:06.264 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:29:06.264 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:29:06.264 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 --
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:06.264 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:06.264 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:06.264 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:06.264 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:06.264 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:06.264 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:06.264 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=3053340 00:29:06.264 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:06.264 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 3053340 00:29:06.264 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 3053340 ']' 00:29:06.264 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:06.264 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:06.264 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:29:06.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:06.264 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:06.264 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:06.264 [2024-11-17 02:49:14.597670] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:29:06.264 [2024-11-17 02:49:14.597811] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:06.524 [2024-11-17 02:49:14.753056] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:06.524 [2024-11-17 02:49:14.896057] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:06.524 [2024-11-17 02:49:14.896164] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:06.524 [2024-11-17 02:49:14.896191] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:06.524 [2024-11-17 02:49:14.896215] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:06.524 [2024-11-17 02:49:14.896236] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:06.524 [2024-11-17 02:49:14.899142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:06.524 [2024-11-17 02:49:14.899241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:06.524 [2024-11-17 02:49:14.899297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:06.524 [2024-11-17 02:49:14.899298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:07.457 02:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:07.457 02:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:29:07.457 02:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:07.457 02:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:07.457 02:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:07.457 02:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:07.457 02:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:07.457 02:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.457 02:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:07.457 [2024-11-17 02:49:15.622338] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:07.457 02:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.457 02:49:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:07.457 02:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:07.457 02:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:07.457 02:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:07.457 02:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:07.457 02:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:07.457 02:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:07.457 02:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:07.457 02:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:07.457 02:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:07.457 02:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:07.457 02:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:07.457 02:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:07.457 02:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:07.457 02:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:29:07.457 02:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:07.457 02:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:07.457 02:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:07.457 02:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:07.457 02:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:07.457 02:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:07.457 02:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:07.457 02:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:07.457 02:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:07.457 02:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:07.457 02:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:07.457 02:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.458 02:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:07.458 Malloc1 00:29:07.458 [2024-11-17 02:49:15.762108] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:07.458 Malloc2 00:29:07.716 Malloc3 00:29:07.716 Malloc4 00:29:07.716 Malloc5 00:29:07.973 Malloc6 00:29:07.973 Malloc7 00:29:08.232 Malloc8 00:29:08.232 Malloc9 
00:29:08.232 Malloc10 00:29:08.232 02:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.232 02:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:08.232 02:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:08.232 02:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:08.232 02:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=3053653 00:29:08.232 02:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 3053653 /var/tmp/bdevperf.sock 00:29:08.232 02:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 3053653 ']' 00:29:08.232 02:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:08.232 02:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:29:08.232 02:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:08.232 02:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:08.232 02:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:08.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
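Each `cat` in the `for i in "${num_subsystems[@]}"` loop above appends one subsystem's RPC batch to `rpcs.txt` (ten batches total), and the suppressed `rpc_cmd` then replays the whole file in one shot, which is what produces Malloc1 through Malloc10 and the 10.0.0.2:4420 listener. The heredoc payload is not echoed in this log, so the RPC names in this reconstruction are illustrative guesses; only the accumulate-then-replay shape is taken from the log:

```shell
#!/bin/sh
# Sketch of the rpcs.txt accumulation from target/shutdown.sh.
# The RPC lines are illustrative: the log shows only the per-iteration
# "cat" calls, not their heredoc payload.
gen_rpcs() {
    out=$1
    : > "$out"
    for i in $(seq 1 10); do
        # Hypothetical per-subsystem batch: create the subsystem, add a
        # Malloc namespace, expose the TCP listener.
        printf 'nvmf_create_subsystem nqn.2016-06.io.spdk:cnode%s -a\n' "$i" >> "$out"
        printf 'nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode%s Malloc%s\n' "$i" "$i" >> "$out"
        printf 'nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode%s -t tcp -a 10.0.0.2 -s 4420\n' "$i" >> "$out"
    done
}

gen_rpcs /tmp/rpcs.txt
wc -l < /tmp/rpcs.txt    # three lines per subsystem, 30 total
```

Batching all subsystems into one file keeps the setup to a single RPC round-trip instead of thirty separate client invocations.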
00:29:08.232 02:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:08.232 02:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:29:08.232 02:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:08.232 02:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:29:08.232 02:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:08.232 02:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:08.232 { 00:29:08.232 "params": { 00:29:08.232 "name": "Nvme$subsystem", 00:29:08.232 "trtype": "$TEST_TRANSPORT", 00:29:08.232 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:08.232 "adrfam": "ipv4", 00:29:08.232 "trsvcid": "$NVMF_PORT", 00:29:08.232 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:08.232 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:08.232 "hdgst": ${hdgst:-false}, 00:29:08.232 "ddgst": ${ddgst:-false} 00:29:08.232 }, 00:29:08.232 "method": "bdev_nvme_attach_controller" 00:29:08.232 } 00:29:08.232 EOF 00:29:08.232 )") 00:29:08.232 02:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:08.232 02:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:08.491 02:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:08.491 { 00:29:08.491 "params": { 00:29:08.491 "name": "Nvme$subsystem", 00:29:08.491 "trtype": "$TEST_TRANSPORT", 00:29:08.491 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:08.491 "adrfam": "ipv4", 00:29:08.491 "trsvcid": "$NVMF_PORT", 00:29:08.491 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:29:08.491 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:08.491 "hdgst": ${hdgst:-false}, 00:29:08.491 "ddgst": ${ddgst:-false} 00:29:08.491 }, 00:29:08.491 "method": "bdev_nvme_attach_controller" 00:29:08.491 } 00:29:08.491 EOF 00:29:08.491 )") 00:29:08.491 02:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:08.491 02:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:08.491 02:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:08.491 { 00:29:08.491 "params": { 00:29:08.491 "name": "Nvme$subsystem", 00:29:08.491 "trtype": "$TEST_TRANSPORT", 00:29:08.491 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:08.491 "adrfam": "ipv4", 00:29:08.491 "trsvcid": "$NVMF_PORT", 00:29:08.491 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:08.491 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:08.491 "hdgst": ${hdgst:-false}, 00:29:08.491 "ddgst": ${ddgst:-false} 00:29:08.491 }, 00:29:08.491 "method": "bdev_nvme_attach_controller" 00:29:08.491 } 00:29:08.491 EOF 00:29:08.491 )") 00:29:08.491 02:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:08.491 02:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:08.491 02:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:08.491 { 00:29:08.491 "params": { 00:29:08.491 "name": "Nvme$subsystem", 00:29:08.491 "trtype": "$TEST_TRANSPORT", 00:29:08.491 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:08.491 "adrfam": "ipv4", 00:29:08.491 "trsvcid": "$NVMF_PORT", 00:29:08.491 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:08.491 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:08.491 "hdgst": 
${hdgst:-false}, 00:29:08.491 "ddgst": ${ddgst:-false} 00:29:08.491 }, 00:29:08.491 "method": "bdev_nvme_attach_controller" 00:29:08.491 } 00:29:08.491 EOF 00:29:08.491 )") 00:29:08.491 02:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:08.491 02:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:08.491 02:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:08.491 { 00:29:08.491 "params": { 00:29:08.491 "name": "Nvme$subsystem", 00:29:08.491 "trtype": "$TEST_TRANSPORT", 00:29:08.491 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:08.491 "adrfam": "ipv4", 00:29:08.491 "trsvcid": "$NVMF_PORT", 00:29:08.491 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:08.491 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:08.491 "hdgst": ${hdgst:-false}, 00:29:08.491 "ddgst": ${ddgst:-false} 00:29:08.491 }, 00:29:08.491 "method": "bdev_nvme_attach_controller" 00:29:08.491 } 00:29:08.491 EOF 00:29:08.491 )") 00:29:08.491 02:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:08.491 02:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:08.491 02:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:08.491 { 00:29:08.491 "params": { 00:29:08.491 "name": "Nvme$subsystem", 00:29:08.491 "trtype": "$TEST_TRANSPORT", 00:29:08.491 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:08.491 "adrfam": "ipv4", 00:29:08.491 "trsvcid": "$NVMF_PORT", 00:29:08.491 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:08.491 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:08.491 "hdgst": ${hdgst:-false}, 00:29:08.491 "ddgst": ${ddgst:-false} 00:29:08.491 }, 00:29:08.491 "method": "bdev_nvme_attach_controller" 
00:29:08.491 } 00:29:08.491 EOF 00:29:08.491 )") 00:29:08.491 02:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:08.491 02:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:08.491 02:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:08.491 { 00:29:08.491 "params": { 00:29:08.491 "name": "Nvme$subsystem", 00:29:08.491 "trtype": "$TEST_TRANSPORT", 00:29:08.491 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:08.491 "adrfam": "ipv4", 00:29:08.491 "trsvcid": "$NVMF_PORT", 00:29:08.491 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:08.491 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:08.492 "hdgst": ${hdgst:-false}, 00:29:08.492 "ddgst": ${ddgst:-false} 00:29:08.492 }, 00:29:08.492 "method": "bdev_nvme_attach_controller" 00:29:08.492 } 00:29:08.492 EOF 00:29:08.492 )") 00:29:08.492 02:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:08.492 02:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:08.492 02:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:08.492 { 00:29:08.492 "params": { 00:29:08.492 "name": "Nvme$subsystem", 00:29:08.492 "trtype": "$TEST_TRANSPORT", 00:29:08.492 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:08.492 "adrfam": "ipv4", 00:29:08.492 "trsvcid": "$NVMF_PORT", 00:29:08.492 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:08.492 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:08.492 "hdgst": ${hdgst:-false}, 00:29:08.492 "ddgst": ${ddgst:-false} 00:29:08.492 }, 00:29:08.492 "method": "bdev_nvme_attach_controller" 00:29:08.492 } 00:29:08.492 EOF 00:29:08.492 )") 00:29:08.492 02:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@582 -- # cat 00:29:08.492 02:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:08.492 02:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:08.492 { 00:29:08.492 "params": { 00:29:08.492 "name": "Nvme$subsystem", 00:29:08.492 "trtype": "$TEST_TRANSPORT", 00:29:08.492 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:08.492 "adrfam": "ipv4", 00:29:08.492 "trsvcid": "$NVMF_PORT", 00:29:08.492 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:08.492 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:08.492 "hdgst": ${hdgst:-false}, 00:29:08.492 "ddgst": ${ddgst:-false} 00:29:08.492 }, 00:29:08.492 "method": "bdev_nvme_attach_controller" 00:29:08.492 } 00:29:08.492 EOF 00:29:08.492 )") 00:29:08.492 02:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:08.492 02:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:08.492 02:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:08.492 { 00:29:08.492 "params": { 00:29:08.492 "name": "Nvme$subsystem", 00:29:08.492 "trtype": "$TEST_TRANSPORT", 00:29:08.492 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:08.492 "adrfam": "ipv4", 00:29:08.492 "trsvcid": "$NVMF_PORT", 00:29:08.492 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:08.492 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:08.492 "hdgst": ${hdgst:-false}, 00:29:08.492 "ddgst": ${ddgst:-false} 00:29:08.492 }, 00:29:08.492 "method": "bdev_nvme_attach_controller" 00:29:08.492 } 00:29:08.492 EOF 00:29:08.492 )") 00:29:08.492 02:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:08.492 02:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@584 -- # jq . 00:29:08.492 02:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:29:08.492 02:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:08.492 "params": { 00:29:08.492 "name": "Nvme1", 00:29:08.492 "trtype": "tcp", 00:29:08.492 "traddr": "10.0.0.2", 00:29:08.492 "adrfam": "ipv4", 00:29:08.492 "trsvcid": "4420", 00:29:08.492 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:08.492 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:08.492 "hdgst": false, 00:29:08.492 "ddgst": false 00:29:08.492 }, 00:29:08.492 "method": "bdev_nvme_attach_controller" 00:29:08.492 },{ 00:29:08.492 "params": { 00:29:08.492 "name": "Nvme2", 00:29:08.492 "trtype": "tcp", 00:29:08.492 "traddr": "10.0.0.2", 00:29:08.492 "adrfam": "ipv4", 00:29:08.492 "trsvcid": "4420", 00:29:08.492 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:08.492 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:08.492 "hdgst": false, 00:29:08.492 "ddgst": false 00:29:08.492 }, 00:29:08.492 "method": "bdev_nvme_attach_controller" 00:29:08.492 },{ 00:29:08.492 "params": { 00:29:08.492 "name": "Nvme3", 00:29:08.492 "trtype": "tcp", 00:29:08.492 "traddr": "10.0.0.2", 00:29:08.492 "adrfam": "ipv4", 00:29:08.492 "trsvcid": "4420", 00:29:08.492 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:08.492 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:08.492 "hdgst": false, 00:29:08.492 "ddgst": false 00:29:08.492 }, 00:29:08.492 "method": "bdev_nvme_attach_controller" 00:29:08.492 },{ 00:29:08.492 "params": { 00:29:08.492 "name": "Nvme4", 00:29:08.492 "trtype": "tcp", 00:29:08.492 "traddr": "10.0.0.2", 00:29:08.492 "adrfam": "ipv4", 00:29:08.492 "trsvcid": "4420", 00:29:08.492 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:08.492 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:08.492 "hdgst": false, 00:29:08.492 "ddgst": false 00:29:08.492 }, 00:29:08.492 "method": "bdev_nvme_attach_controller" 00:29:08.492 },{ 
00:29:08.492 "params": { 00:29:08.492 "name": "Nvme5", 00:29:08.492 "trtype": "tcp", 00:29:08.492 "traddr": "10.0.0.2", 00:29:08.492 "adrfam": "ipv4", 00:29:08.492 "trsvcid": "4420", 00:29:08.492 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:08.492 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:08.492 "hdgst": false, 00:29:08.492 "ddgst": false 00:29:08.492 }, 00:29:08.492 "method": "bdev_nvme_attach_controller" 00:29:08.492 },{ 00:29:08.492 "params": { 00:29:08.492 "name": "Nvme6", 00:29:08.492 "trtype": "tcp", 00:29:08.492 "traddr": "10.0.0.2", 00:29:08.492 "adrfam": "ipv4", 00:29:08.492 "trsvcid": "4420", 00:29:08.492 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:08.492 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:08.492 "hdgst": false, 00:29:08.492 "ddgst": false 00:29:08.492 }, 00:29:08.492 "method": "bdev_nvme_attach_controller" 00:29:08.492 },{ 00:29:08.492 "params": { 00:29:08.492 "name": "Nvme7", 00:29:08.492 "trtype": "tcp", 00:29:08.492 "traddr": "10.0.0.2", 00:29:08.492 "adrfam": "ipv4", 00:29:08.492 "trsvcid": "4420", 00:29:08.492 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:08.492 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:08.492 "hdgst": false, 00:29:08.492 "ddgst": false 00:29:08.492 }, 00:29:08.492 "method": "bdev_nvme_attach_controller" 00:29:08.492 },{ 00:29:08.492 "params": { 00:29:08.492 "name": "Nvme8", 00:29:08.492 "trtype": "tcp", 00:29:08.492 "traddr": "10.0.0.2", 00:29:08.492 "adrfam": "ipv4", 00:29:08.492 "trsvcid": "4420", 00:29:08.492 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:08.492 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:08.492 "hdgst": false, 00:29:08.492 "ddgst": false 00:29:08.492 }, 00:29:08.492 "method": "bdev_nvme_attach_controller" 00:29:08.492 },{ 00:29:08.492 "params": { 00:29:08.492 "name": "Nvme9", 00:29:08.492 "trtype": "tcp", 00:29:08.492 "traddr": "10.0.0.2", 00:29:08.492 "adrfam": "ipv4", 00:29:08.492 "trsvcid": "4420", 00:29:08.492 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:08.492 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:29:08.492 "hdgst": false, 00:29:08.492 "ddgst": false 00:29:08.492 }, 00:29:08.492 "method": "bdev_nvme_attach_controller" 00:29:08.492 },{ 00:29:08.492 "params": { 00:29:08.492 "name": "Nvme10", 00:29:08.492 "trtype": "tcp", 00:29:08.492 "traddr": "10.0.0.2", 00:29:08.492 "adrfam": "ipv4", 00:29:08.492 "trsvcid": "4420", 00:29:08.492 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:08.492 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:08.492 "hdgst": false, 00:29:08.492 "ddgst": false 00:29:08.492 }, 00:29:08.492 "method": "bdev_nvme_attach_controller" 00:29:08.492 }' 00:29:08.492 [2024-11-17 02:49:16.770824] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:29:08.492 [2024-11-17 02:49:16.770953] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:29:08.492 [2024-11-17 02:49:16.918438] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:08.750 [2024-11-17 02:49:17.048471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:11.278 02:49:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:11.278 02:49:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:29:11.279 02:49:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:11.279 02:49:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.279 02:49:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:11.279 02:49:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
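The wall of repeated heredocs above is `gen_nvmf_target_json` at work: for each subsystem it pushes one `bdev_nvme_attach_controller` fragment (with `$subsystem` substituted) into the `config` array, `jq .` sanity-checks the result, and the fragments are joined with `IFS=,` and a single `printf '%s\n'`, which is what the `--json /dev/fd/63` descriptor carries into bdev_svc. A condensed sketch of that build-and-join, wrapped in a JSON array here purely so it is easy to validate (the real script emits the fragments inside a larger bdev config document):

```shell
#!/bin/sh
# Condensed gen_nvmf_target_json: one attach-controller fragment per
# subsystem, comma-joined into a single document. Address and port
# are the values from this run (10.0.0.2:4420).
gen_json() {
    config=""
    for i in "$@"; do
        frag=$(printf '{"params":{"name":"Nvme%s","trtype":"tcp","traddr":"10.0.0.2","adrfam":"ipv4","trsvcid":"4420","subnqn":"nqn.2016-06.io.spdk:cnode%s","hostnqn":"nqn.2016-06.io.spdk:host%s","hdgst":false,"ddgst":false},"method":"bdev_nvme_attach_controller"}' "$i" "$i" "$i")
        config="$config${config:+,}$frag"   # stands in for the IFS=, join in the real script
    done
    printf '[%s]\n' "$config"
}

gen_json 1 2 3
```

Generating the config from a template rather than hand-writing ten blocks is why the final `printf '%s\n'` output in the log shows Nvme1 through Nvme10 differing only in the substituted index.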
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.279 02:49:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 3053653 00:29:11.279 02:49:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:29:11.279 02:49:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:29:12.214 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 3053653 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:29:12.214 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 3053340 00:29:12.214 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:29:12.214 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:12.214 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:29:12.214 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:29:12.214 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:12.214 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:12.214 { 00:29:12.214 "params": { 00:29:12.214 "name": "Nvme$subsystem", 00:29:12.214 "trtype": "$TEST_TRANSPORT", 00:29:12.214 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:12.214 "adrfam": "ipv4", 00:29:12.214 "trsvcid": "$NVMF_PORT", 00:29:12.214 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:29:12.214 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:12.214 "hdgst": ${hdgst:-false}, 00:29:12.214 "ddgst": ${ddgst:-false} 00:29:12.214 }, 00:29:12.214 "method": "bdev_nvme_attach_controller" 00:29:12.214 } 00:29:12.214 EOF 00:29:12.214 )") 00:29:12.214 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:12.214 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:12.214 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:12.214 { 00:29:12.214 "params": { 00:29:12.214 "name": "Nvme$subsystem", 00:29:12.214 "trtype": "$TEST_TRANSPORT", 00:29:12.214 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:12.214 "adrfam": "ipv4", 00:29:12.214 "trsvcid": "$NVMF_PORT", 00:29:12.214 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:12.214 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:12.214 "hdgst": ${hdgst:-false}, 00:29:12.214 "ddgst": ${ddgst:-false} 00:29:12.214 }, 00:29:12.214 "method": "bdev_nvme_attach_controller" 00:29:12.214 } 00:29:12.214 EOF 00:29:12.214 )") 00:29:12.214 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:12.214 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:12.214 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:12.214 { 00:29:12.214 "params": { 00:29:12.214 "name": "Nvme$subsystem", 00:29:12.214 "trtype": "$TEST_TRANSPORT", 00:29:12.214 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:12.214 "adrfam": "ipv4", 00:29:12.214 "trsvcid": "$NVMF_PORT", 00:29:12.214 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:12.214 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:12.214 "hdgst": 
${hdgst:-false}, 00:29:12.214 "ddgst": ${ddgst:-false} 00:29:12.214 }, 00:29:12.214 "method": "bdev_nvme_attach_controller" 00:29:12.214 } 00:29:12.214 EOF 00:29:12.214 )") 00:29:12.214 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:12.214 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:12.214 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:12.214 { 00:29:12.214 "params": { 00:29:12.214 "name": "Nvme$subsystem", 00:29:12.214 "trtype": "$TEST_TRANSPORT", 00:29:12.214 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:12.214 "adrfam": "ipv4", 00:29:12.214 "trsvcid": "$NVMF_PORT", 00:29:12.214 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:12.214 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:12.214 "hdgst": ${hdgst:-false}, 00:29:12.214 "ddgst": ${ddgst:-false} 00:29:12.214 }, 00:29:12.214 "method": "bdev_nvme_attach_controller" 00:29:12.214 } 00:29:12.214 EOF 00:29:12.214 )") 00:29:12.214 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:12.214 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:12.214 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:12.214 { 00:29:12.214 "params": { 00:29:12.214 "name": "Nvme$subsystem", 00:29:12.214 "trtype": "$TEST_TRANSPORT", 00:29:12.214 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:12.214 "adrfam": "ipv4", 00:29:12.214 "trsvcid": "$NVMF_PORT", 00:29:12.214 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:12.214 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:12.214 "hdgst": ${hdgst:-false}, 00:29:12.214 "ddgst": ${ddgst:-false} 00:29:12.214 }, 00:29:12.214 "method": "bdev_nvme_attach_controller" 
00:29:12.214 } 00:29:12.214 EOF 00:29:12.214 )") 00:29:12.214 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:12.214 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:12.214 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:12.214 { 00:29:12.214 "params": { 00:29:12.214 "name": "Nvme$subsystem", 00:29:12.214 "trtype": "$TEST_TRANSPORT", 00:29:12.214 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:12.214 "adrfam": "ipv4", 00:29:12.214 "trsvcid": "$NVMF_PORT", 00:29:12.214 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:12.214 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:12.214 "hdgst": ${hdgst:-false}, 00:29:12.214 "ddgst": ${ddgst:-false} 00:29:12.214 }, 00:29:12.214 "method": "bdev_nvme_attach_controller" 00:29:12.214 } 00:29:12.214 EOF 00:29:12.214 )") 00:29:12.214 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:12.214 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:12.214 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:12.214 { 00:29:12.214 "params": { 00:29:12.214 "name": "Nvme$subsystem", 00:29:12.214 "trtype": "$TEST_TRANSPORT", 00:29:12.214 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:12.214 "adrfam": "ipv4", 00:29:12.214 "trsvcid": "$NVMF_PORT", 00:29:12.214 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:12.214 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:12.214 "hdgst": ${hdgst:-false}, 00:29:12.214 "ddgst": ${ddgst:-false} 00:29:12.214 }, 00:29:12.214 "method": "bdev_nvme_attach_controller" 00:29:12.214 } 00:29:12.214 EOF 00:29:12.214 )") 00:29:12.215 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@582 -- # cat 00:29:12.215 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:12.215 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:12.215 { 00:29:12.215 "params": { 00:29:12.215 "name": "Nvme$subsystem", 00:29:12.215 "trtype": "$TEST_TRANSPORT", 00:29:12.215 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:12.215 "adrfam": "ipv4", 00:29:12.215 "trsvcid": "$NVMF_PORT", 00:29:12.215 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:12.215 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:12.215 "hdgst": ${hdgst:-false}, 00:29:12.215 "ddgst": ${ddgst:-false} 00:29:12.215 }, 00:29:12.215 "method": "bdev_nvme_attach_controller" 00:29:12.215 } 00:29:12.215 EOF 00:29:12.215 )") 00:29:12.215 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:12.215 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:12.215 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:12.215 { 00:29:12.215 "params": { 00:29:12.215 "name": "Nvme$subsystem", 00:29:12.215 "trtype": "$TEST_TRANSPORT", 00:29:12.215 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:12.215 "adrfam": "ipv4", 00:29:12.215 "trsvcid": "$NVMF_PORT", 00:29:12.215 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:12.215 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:12.215 "hdgst": ${hdgst:-false}, 00:29:12.215 "ddgst": ${ddgst:-false} 00:29:12.215 }, 00:29:12.215 "method": "bdev_nvme_attach_controller" 00:29:12.215 } 00:29:12.215 EOF 00:29:12.215 )") 00:29:12.215 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:12.215 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:12.215 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:12.215 { 00:29:12.215 "params": { 00:29:12.215 "name": "Nvme$subsystem", 00:29:12.215 "trtype": "$TEST_TRANSPORT", 00:29:12.215 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:12.215 "adrfam": "ipv4", 00:29:12.215 "trsvcid": "$NVMF_PORT", 00:29:12.215 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:12.215 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:12.215 "hdgst": ${hdgst:-false}, 00:29:12.215 "ddgst": ${ddgst:-false} 00:29:12.215 }, 00:29:12.215 "method": "bdev_nvme_attach_controller" 00:29:12.215 } 00:29:12.215 EOF 00:29:12.215 )") 00:29:12.215 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:12.215 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:29:12.215 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:29:12.215 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:12.215 "params": { 00:29:12.215 "name": "Nvme1", 00:29:12.215 "trtype": "tcp", 00:29:12.215 "traddr": "10.0.0.2", 00:29:12.215 "adrfam": "ipv4", 00:29:12.215 "trsvcid": "4420", 00:29:12.215 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:12.215 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:12.215 "hdgst": false, 00:29:12.215 "ddgst": false 00:29:12.215 }, 00:29:12.215 "method": "bdev_nvme_attach_controller" 00:29:12.215 },{ 00:29:12.215 "params": { 00:29:12.215 "name": "Nvme2", 00:29:12.215 "trtype": "tcp", 00:29:12.215 "traddr": "10.0.0.2", 00:29:12.215 "adrfam": "ipv4", 00:29:12.215 "trsvcid": "4420", 00:29:12.215 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:12.215 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:12.215 "hdgst": false, 00:29:12.215 "ddgst": false 00:29:12.215 }, 
00:29:12.215 "method": "bdev_nvme_attach_controller" 00:29:12.215 },{ 00:29:12.215 "params": { 00:29:12.215 "name": "Nvme3", 00:29:12.215 "trtype": "tcp", 00:29:12.215 "traddr": "10.0.0.2", 00:29:12.215 "adrfam": "ipv4", 00:29:12.215 "trsvcid": "4420", 00:29:12.215 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:12.215 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:12.215 "hdgst": false, 00:29:12.215 "ddgst": false 00:29:12.215 }, 00:29:12.215 "method": "bdev_nvme_attach_controller" 00:29:12.215 },{ 00:29:12.215 "params": { 00:29:12.215 "name": "Nvme4", 00:29:12.215 "trtype": "tcp", 00:29:12.215 "traddr": "10.0.0.2", 00:29:12.215 "adrfam": "ipv4", 00:29:12.215 "trsvcid": "4420", 00:29:12.215 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:12.215 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:12.215 "hdgst": false, 00:29:12.215 "ddgst": false 00:29:12.215 }, 00:29:12.215 "method": "bdev_nvme_attach_controller" 00:29:12.215 },{ 00:29:12.215 "params": { 00:29:12.215 "name": "Nvme5", 00:29:12.215 "trtype": "tcp", 00:29:12.215 "traddr": "10.0.0.2", 00:29:12.215 "adrfam": "ipv4", 00:29:12.215 "trsvcid": "4420", 00:29:12.215 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:12.215 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:12.215 "hdgst": false, 00:29:12.215 "ddgst": false 00:29:12.215 }, 00:29:12.215 "method": "bdev_nvme_attach_controller" 00:29:12.215 },{ 00:29:12.215 "params": { 00:29:12.215 "name": "Nvme6", 00:29:12.215 "trtype": "tcp", 00:29:12.215 "traddr": "10.0.0.2", 00:29:12.215 "adrfam": "ipv4", 00:29:12.215 "trsvcid": "4420", 00:29:12.215 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:12.215 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:12.215 "hdgst": false, 00:29:12.215 "ddgst": false 00:29:12.215 }, 00:29:12.215 "method": "bdev_nvme_attach_controller" 00:29:12.215 },{ 00:29:12.215 "params": { 00:29:12.215 "name": "Nvme7", 00:29:12.215 "trtype": "tcp", 00:29:12.215 "traddr": "10.0.0.2", 00:29:12.215 "adrfam": "ipv4", 00:29:12.215 "trsvcid": "4420", 00:29:12.215 
"subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:12.215 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:12.215 "hdgst": false, 00:29:12.215 "ddgst": false 00:29:12.215 }, 00:29:12.215 "method": "bdev_nvme_attach_controller" 00:29:12.215 },{ 00:29:12.215 "params": { 00:29:12.215 "name": "Nvme8", 00:29:12.215 "trtype": "tcp", 00:29:12.215 "traddr": "10.0.0.2", 00:29:12.215 "adrfam": "ipv4", 00:29:12.215 "trsvcid": "4420", 00:29:12.215 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:12.215 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:12.215 "hdgst": false, 00:29:12.215 "ddgst": false 00:29:12.215 }, 00:29:12.215 "method": "bdev_nvme_attach_controller" 00:29:12.215 },{ 00:29:12.215 "params": { 00:29:12.215 "name": "Nvme9", 00:29:12.215 "trtype": "tcp", 00:29:12.215 "traddr": "10.0.0.2", 00:29:12.215 "adrfam": "ipv4", 00:29:12.215 "trsvcid": "4420", 00:29:12.215 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:12.215 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:12.215 "hdgst": false, 00:29:12.215 "ddgst": false 00:29:12.215 }, 00:29:12.215 "method": "bdev_nvme_attach_controller" 00:29:12.215 },{ 00:29:12.215 "params": { 00:29:12.215 "name": "Nvme10", 00:29:12.215 "trtype": "tcp", 00:29:12.215 "traddr": "10.0.0.2", 00:29:12.215 "adrfam": "ipv4", 00:29:12.215 "trsvcid": "4420", 00:29:12.215 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:12.215 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:12.215 "hdgst": false, 00:29:12.215 "ddgst": false 00:29:12.215 }, 00:29:12.215 "method": "bdev_nvme_attach_controller" 00:29:12.215 }' 00:29:12.215 [2024-11-17 02:49:20.659137] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:29:12.215 [2024-11-17 02:49:20.659268] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3054088 ] 00:29:12.474 [2024-11-17 02:49:20.798895] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:12.474 [2024-11-17 02:49:20.929113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:14.478 Running I/O for 1 seconds... 00:29:15.303 1443.00 IOPS, 90.19 MiB/s 00:29:15.303 Latency(us) 00:29:15.303 [2024-11-17T01:49:23.763Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:15.303 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:15.303 Verification LBA range: start 0x0 length 0x400 00:29:15.303 Nvme1n1 : 1.09 181.49 11.34 0.00 0.00 342591.87 6407.96 310689.19 00:29:15.303 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:15.303 Verification LBA range: start 0x0 length 0x400 00:29:15.303 Nvme2n1 : 1.21 210.98 13.19 0.00 0.00 295422.10 21748.24 293601.28 00:29:15.303 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:15.303 Verification LBA range: start 0x0 length 0x400 00:29:15.303 Nvme3n1 : 1.20 216.40 13.52 0.00 0.00 281610.25 6844.87 298261.62 00:29:15.303 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:15.303 Verification LBA range: start 0x0 length 0x400 00:29:15.303 Nvme4n1 : 1.20 215.00 13.44 0.00 0.00 279163.26 4344.79 299815.06 00:29:15.303 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:15.303 Verification LBA range: start 0x0 length 0x400 00:29:15.303 Nvme5n1 : 1.14 168.05 10.50 0.00 0.00 350830.49 22622.06 301368.51 00:29:15.303 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:15.303 Verification LBA range: start 0x0 length 
0x400 00:29:15.303 Nvme6n1 : 1.23 208.52 13.03 0.00 0.00 277489.02 10388.67 296708.17 00:29:15.303 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:15.303 Verification LBA range: start 0x0 length 0x400 00:29:15.303 Nvme7n1 : 1.22 209.52 13.10 0.00 0.00 272983.04 22913.33 323116.75 00:29:15.303 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:15.303 Verification LBA range: start 0x0 length 0x400 00:29:15.303 Nvme8n1 : 1.23 207.74 12.98 0.00 0.00 269312.19 17961.72 282727.16 00:29:15.303 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:15.303 Verification LBA range: start 0x0 length 0x400 00:29:15.303 Nvme9n1 : 1.19 161.55 10.10 0.00 0.00 340136.64 22816.24 335544.32 00:29:15.303 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:15.303 Verification LBA range: start 0x0 length 0x400 00:29:15.303 Nvme10n1 : 1.24 206.70 12.92 0.00 0.00 261782.66 5776.88 310689.19 00:29:15.303 [2024-11-17T01:49:23.763Z] =================================================================================================================== 00:29:15.303 [2024-11-17T01:49:23.763Z] Total : 1985.95 124.12 0.00 0.00 293356.06 4344.79 335544.32 00:29:16.237 02:49:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:29:16.237 02:49:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:16.237 02:49:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:16.237 02:49:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:16.237 02:49:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@46 -- # nvmftestfini 00:29:16.237 02:49:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:16.237 02:49:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:29:16.237 02:49:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:16.237 02:49:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:29:16.237 02:49:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:16.237 02:49:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:16.237 rmmod nvme_tcp 00:29:16.237 rmmod nvme_fabrics 00:29:16.496 rmmod nvme_keyring 00:29:16.496 02:49:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:16.496 02:49:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:29:16.496 02:49:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:29:16.496 02:49:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 3053340 ']' 00:29:16.496 02:49:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 3053340 00:29:16.496 02:49:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 3053340 ']' 00:29:16.496 02:49:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 3053340 00:29:16.496 02:49:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:29:16.496 02:49:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 
-- # '[' Linux = Linux ']' 00:29:16.496 02:49:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3053340 00:29:16.496 02:49:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:16.496 02:49:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:16.496 02:49:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3053340' 00:29:16.496 killing process with pid 3053340 00:29:16.496 02:49:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 3053340 00:29:16.496 02:49:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 3053340 00:29:19.777 02:49:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:19.777 02:49:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:19.777 02:49:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:19.777 02:49:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:29:19.777 02:49:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:29:19.777 02:49:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:19.777 02:49:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:29:19.777 02:49:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:19.777 02:49:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:19.777 02:49:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:19.777 02:49:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:19.777 02:49:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:21.153 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:21.153 00:29:21.153 real 0m17.259s 00:29:21.153 user 0m56.094s 00:29:21.153 sys 0m3.850s 00:29:21.153 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:21.153 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:21.153 ************************************ 00:29:21.153 END TEST nvmf_shutdown_tc1 00:29:21.153 ************************************ 00:29:21.153 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:29:21.153 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:21.153 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:21.153 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:21.412 ************************************ 00:29:21.412 START TEST nvmf_shutdown_tc2 00:29:21.412 ************************************ 00:29:21.412 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:29:21.412 02:49:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:29:21.412 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:21.412 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:21.412 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:21.412 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:21.412 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:21.412 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:21.412 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:21.412 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:21.412 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:21.412 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:21.412 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:21.412 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:21.412 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:21.412 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:21.412 02:49:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:21.412 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:21.412 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:21.412 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:21.412 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:21.412 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:21.412 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:29:21.412 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:21.412 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:29:21.412 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:29:21.412 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:29:21.412 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:29:21.412 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:29:21.412 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:21.412 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:21.412 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:29:21.412 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:21.412 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:21.412 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:21.412 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:21.412 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:21.412 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:21.413 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:21.413 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:21.413 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:21.413 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:21.413 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:21.413 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:21.413 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:21.413 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:21.413 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:21.413 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:21.413 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:21.413 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:21.413 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:21.413 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:21.413 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:21.413 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:21.413 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:21.413 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:21.413 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:21.413 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:21.413 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:21.413 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:21.413 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:21.413 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:29:21.413 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:21.413 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:21.413 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:21.413 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:21.413 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:21.413 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:21.413 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:21.413 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:21.413 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:21.413 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:21.413 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:21.413 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:21.413 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:21.413 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:21.413 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:21.413 02:49:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:21.413 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:21.413 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:21.413 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:21.413 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:21.413 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:21.413 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:21.413 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:21.413 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:21.413 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:21.413 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:21.413 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:29:21.413 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:21.413 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:21.413 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:21.413 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:21.413 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:21.413 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:21.413 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:21.413 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:21.413 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:21.413 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:21.413 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:21.413 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:21.413 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:21.413 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:21.413 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:21.413 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:21.413 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:21.413 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:29:21.413 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:21.413 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:21.413 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:21.413 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:21.413 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:21.413 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:21.413 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:21.413 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:21.413 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:21.413 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:29:21.413 00:29:21.413 --- 10.0.0.2 ping statistics --- 00:29:21.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:21.413 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:29:21.413 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:21.413 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:21.413 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:29:21.413 00:29:21.413 --- 10.0.0.1 ping statistics --- 00:29:21.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:21.413 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:29:21.413 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:21.413 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:29:21.413 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:21.413 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:21.413 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:21.413 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:21.413 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:21.413 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:21.413 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:21.413 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:21.413 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:21.413 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:21.413 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:21.413 
02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3055248 00:29:21.414 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:21.414 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3055248 00:29:21.414 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3055248 ']' 00:29:21.414 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:21.414 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:21.414 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:21.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:21.414 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:21.414 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:21.672 [2024-11-17 02:49:29.892149] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:29:21.672 [2024-11-17 02:49:29.892301] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:21.672 [2024-11-17 02:49:30.047598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:21.929 [2024-11-17 02:49:30.175807] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:21.929 [2024-11-17 02:49:30.175886] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:21.929 [2024-11-17 02:49:30.175909] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:21.930 [2024-11-17 02:49:30.175929] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:21.930 [2024-11-17 02:49:30.175945] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:21.930 [2024-11-17 02:49:30.178636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:21.930 [2024-11-17 02:49:30.178703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:21.930 [2024-11-17 02:49:30.178748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:21.930 [2024-11-17 02:49:30.178749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:22.496 02:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:22.496 02:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:29:22.496 02:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:22.496 02:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:22.496 02:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:22.496 02:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:22.496 02:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:22.496 02:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.496 02:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:22.496 [2024-11-17 02:49:30.887613] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:22.496 02:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.496 02:49:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:22.496 02:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:22.496 02:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:22.496 02:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:22.496 02:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:22.496 02:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:22.496 02:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:22.496 02:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:22.496 02:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:22.496 02:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:22.496 02:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:22.496 02:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:22.496 02:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:22.496 02:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:22.496 02:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:29:22.496 02:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:22.496 02:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:22.496 02:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:22.496 02:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:22.496 02:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:22.496 02:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:22.496 02:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:22.496 02:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:22.496 02:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:22.496 02:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:22.496 02:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:22.496 02:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.496 02:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:22.755 Malloc1 00:29:22.755 [2024-11-17 02:49:31.053430] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:22.755 Malloc2 00:29:23.012 Malloc3 00:29:23.012 Malloc4 00:29:23.012 Malloc5 00:29:23.270 Malloc6 00:29:23.270 Malloc7 00:29:23.528 Malloc8 00:29:23.528 Malloc9 
00:29:23.528 Malloc10 00:29:23.528 02:49:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.528 02:49:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:23.528 02:49:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:23.528 02:49:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:23.528 02:49:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=3055558 00:29:23.528 02:49:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 3055558 /var/tmp/bdevperf.sock 00:29:23.528 02:49:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3055558 ']' 00:29:23.528 02:49:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:23.528 02:49:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:23.528 02:49:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:23.528 02:49:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:23.529 02:49:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:29:23.529 02:49:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:29:23.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:23.529 02:49:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:29:23.529 02:49:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:23.529 02:49:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:23.529 02:49:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:23.529 02:49:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:23.529 { 00:29:23.529 "params": { 00:29:23.529 "name": "Nvme$subsystem", 00:29:23.529 "trtype": "$TEST_TRANSPORT", 00:29:23.529 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:23.529 "adrfam": "ipv4", 00:29:23.529 "trsvcid": "$NVMF_PORT", 00:29:23.529 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:23.529 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:23.529 "hdgst": ${hdgst:-false}, 00:29:23.529 "ddgst": ${ddgst:-false} 00:29:23.529 }, 00:29:23.529 "method": "bdev_nvme_attach_controller" 00:29:23.529 } 00:29:23.529 EOF 00:29:23.529 )") 00:29:23.529 02:49:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:23.529 02:49:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:23.529 02:49:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:23.529 { 00:29:23.529 "params": { 00:29:23.529 "name": "Nvme$subsystem", 00:29:23.529 "trtype": "$TEST_TRANSPORT", 00:29:23.529 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:23.529 "adrfam": "ipv4", 00:29:23.529 "trsvcid": "$NVMF_PORT", 00:29:23.529 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:29:23.529 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:23.529 "hdgst": ${hdgst:-false}, 00:29:23.529 "ddgst": ${ddgst:-false} 00:29:23.529 }, 00:29:23.529 "method": "bdev_nvme_attach_controller" 00:29:23.529 } 00:29:23.529 EOF 00:29:23.529 )") 00:29:23.529 02:49:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:23.529 02:49:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:23.529 02:49:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:23.529 { 00:29:23.529 "params": { 00:29:23.529 "name": "Nvme$subsystem", 00:29:23.529 "trtype": "$TEST_TRANSPORT", 00:29:23.529 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:23.529 "adrfam": "ipv4", 00:29:23.529 "trsvcid": "$NVMF_PORT", 00:29:23.529 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:23.529 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:23.529 "hdgst": ${hdgst:-false}, 00:29:23.529 "ddgst": ${ddgst:-false} 00:29:23.529 }, 00:29:23.529 "method": "bdev_nvme_attach_controller" 00:29:23.529 } 00:29:23.529 EOF 00:29:23.529 )") 00:29:23.787 02:49:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:23.787 02:49:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:23.787 02:49:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:23.787 { 00:29:23.787 "params": { 00:29:23.787 "name": "Nvme$subsystem", 00:29:23.787 "trtype": "$TEST_TRANSPORT", 00:29:23.787 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:23.787 "adrfam": "ipv4", 00:29:23.787 "trsvcid": "$NVMF_PORT", 00:29:23.787 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:23.787 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:23.787 "hdgst": 
${hdgst:-false}, 00:29:23.787 "ddgst": ${ddgst:-false} 00:29:23.787 }, 00:29:23.787 "method": "bdev_nvme_attach_controller" 00:29:23.787 } 00:29:23.787 EOF 00:29:23.787 )") 00:29:23.787 02:49:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:23.787 02:49:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:23.787 02:49:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:23.787 { 00:29:23.787 "params": { 00:29:23.787 "name": "Nvme$subsystem", 00:29:23.787 "trtype": "$TEST_TRANSPORT", 00:29:23.787 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:23.787 "adrfam": "ipv4", 00:29:23.787 "trsvcid": "$NVMF_PORT", 00:29:23.787 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:23.787 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:23.787 "hdgst": ${hdgst:-false}, 00:29:23.787 "ddgst": ${ddgst:-false} 00:29:23.787 }, 00:29:23.787 "method": "bdev_nvme_attach_controller" 00:29:23.787 } 00:29:23.787 EOF 00:29:23.787 )") 00:29:23.787 02:49:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:23.787 02:49:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:23.787 02:49:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:23.787 { 00:29:23.787 "params": { 00:29:23.787 "name": "Nvme$subsystem", 00:29:23.787 "trtype": "$TEST_TRANSPORT", 00:29:23.787 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:23.787 "adrfam": "ipv4", 00:29:23.787 "trsvcid": "$NVMF_PORT", 00:29:23.787 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:23.787 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:23.787 "hdgst": ${hdgst:-false}, 00:29:23.787 "ddgst": ${ddgst:-false} 00:29:23.787 }, 00:29:23.787 "method": "bdev_nvme_attach_controller" 
00:29:23.787 } 00:29:23.787 EOF 00:29:23.788 )") 00:29:23.788 02:49:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:23.788 02:49:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:23.788 02:49:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:23.788 { 00:29:23.788 "params": { 00:29:23.788 "name": "Nvme$subsystem", 00:29:23.788 "trtype": "$TEST_TRANSPORT", 00:29:23.788 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:23.788 "adrfam": "ipv4", 00:29:23.788 "trsvcid": "$NVMF_PORT", 00:29:23.788 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:23.788 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:23.788 "hdgst": ${hdgst:-false}, 00:29:23.788 "ddgst": ${ddgst:-false} 00:29:23.788 }, 00:29:23.788 "method": "bdev_nvme_attach_controller" 00:29:23.788 } 00:29:23.788 EOF 00:29:23.788 )") 00:29:23.788 02:49:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:23.788 02:49:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:23.788 02:49:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:23.788 { 00:29:23.788 "params": { 00:29:23.788 "name": "Nvme$subsystem", 00:29:23.788 "trtype": "$TEST_TRANSPORT", 00:29:23.788 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:23.788 "adrfam": "ipv4", 00:29:23.788 "trsvcid": "$NVMF_PORT", 00:29:23.788 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:23.788 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:23.788 "hdgst": ${hdgst:-false}, 00:29:23.788 "ddgst": ${ddgst:-false} 00:29:23.788 }, 00:29:23.788 "method": "bdev_nvme_attach_controller" 00:29:23.788 } 00:29:23.788 EOF 00:29:23.788 )") 00:29:23.788 02:49:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 
-- nvmf/common.sh@582 -- # cat 00:29:23.788 02:49:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:23.788 02:49:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:23.788 { 00:29:23.788 "params": { 00:29:23.788 "name": "Nvme$subsystem", 00:29:23.788 "trtype": "$TEST_TRANSPORT", 00:29:23.788 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:23.788 "adrfam": "ipv4", 00:29:23.788 "trsvcid": "$NVMF_PORT", 00:29:23.788 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:23.788 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:23.788 "hdgst": ${hdgst:-false}, 00:29:23.788 "ddgst": ${ddgst:-false} 00:29:23.788 }, 00:29:23.788 "method": "bdev_nvme_attach_controller" 00:29:23.788 } 00:29:23.788 EOF 00:29:23.788 )") 00:29:23.788 02:49:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:23.788 02:49:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:23.788 02:49:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:23.788 { 00:29:23.788 "params": { 00:29:23.788 "name": "Nvme$subsystem", 00:29:23.788 "trtype": "$TEST_TRANSPORT", 00:29:23.788 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:23.788 "adrfam": "ipv4", 00:29:23.788 "trsvcid": "$NVMF_PORT", 00:29:23.788 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:23.788 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:23.788 "hdgst": ${hdgst:-false}, 00:29:23.788 "ddgst": ${ddgst:-false} 00:29:23.788 }, 00:29:23.788 "method": "bdev_nvme_attach_controller" 00:29:23.788 } 00:29:23.788 EOF 00:29:23.788 )") 00:29:23.788 02:49:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:23.788 02:49:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@584 -- # jq . 00:29:23.788 02:49:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:29:23.788 02:49:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:23.788 "params": { 00:29:23.788 "name": "Nvme1", 00:29:23.788 "trtype": "tcp", 00:29:23.788 "traddr": "10.0.0.2", 00:29:23.788 "adrfam": "ipv4", 00:29:23.788 "trsvcid": "4420", 00:29:23.788 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:23.788 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:23.788 "hdgst": false, 00:29:23.788 "ddgst": false 00:29:23.788 }, 00:29:23.788 "method": "bdev_nvme_attach_controller" 00:29:23.788 },{ 00:29:23.788 "params": { 00:29:23.788 "name": "Nvme2", 00:29:23.788 "trtype": "tcp", 00:29:23.788 "traddr": "10.0.0.2", 00:29:23.788 "adrfam": "ipv4", 00:29:23.788 "trsvcid": "4420", 00:29:23.788 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:23.788 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:23.788 "hdgst": false, 00:29:23.788 "ddgst": false 00:29:23.788 }, 00:29:23.788 "method": "bdev_nvme_attach_controller" 00:29:23.788 },{ 00:29:23.788 "params": { 00:29:23.788 "name": "Nvme3", 00:29:23.788 "trtype": "tcp", 00:29:23.788 "traddr": "10.0.0.2", 00:29:23.788 "adrfam": "ipv4", 00:29:23.788 "trsvcid": "4420", 00:29:23.788 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:23.788 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:23.788 "hdgst": false, 00:29:23.788 "ddgst": false 00:29:23.788 }, 00:29:23.788 "method": "bdev_nvme_attach_controller" 00:29:23.788 },{ 00:29:23.788 "params": { 00:29:23.788 "name": "Nvme4", 00:29:23.788 "trtype": "tcp", 00:29:23.788 "traddr": "10.0.0.2", 00:29:23.788 "adrfam": "ipv4", 00:29:23.788 "trsvcid": "4420", 00:29:23.788 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:23.788 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:23.788 "hdgst": false, 00:29:23.788 "ddgst": false 00:29:23.788 }, 00:29:23.788 "method": "bdev_nvme_attach_controller" 00:29:23.788 },{ 
00:29:23.788 "params": { 00:29:23.788 "name": "Nvme5", 00:29:23.788 "trtype": "tcp", 00:29:23.788 "traddr": "10.0.0.2", 00:29:23.788 "adrfam": "ipv4", 00:29:23.788 "trsvcid": "4420", 00:29:23.788 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:23.788 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:23.788 "hdgst": false, 00:29:23.788 "ddgst": false 00:29:23.788 }, 00:29:23.788 "method": "bdev_nvme_attach_controller" 00:29:23.788 },{ 00:29:23.788 "params": { 00:29:23.788 "name": "Nvme6", 00:29:23.788 "trtype": "tcp", 00:29:23.788 "traddr": "10.0.0.2", 00:29:23.788 "adrfam": "ipv4", 00:29:23.788 "trsvcid": "4420", 00:29:23.788 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:23.788 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:23.788 "hdgst": false, 00:29:23.788 "ddgst": false 00:29:23.788 }, 00:29:23.788 "method": "bdev_nvme_attach_controller" 00:29:23.788 },{ 00:29:23.788 "params": { 00:29:23.788 "name": "Nvme7", 00:29:23.788 "trtype": "tcp", 00:29:23.788 "traddr": "10.0.0.2", 00:29:23.788 "adrfam": "ipv4", 00:29:23.788 "trsvcid": "4420", 00:29:23.788 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:23.788 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:23.788 "hdgst": false, 00:29:23.788 "ddgst": false 00:29:23.788 }, 00:29:23.788 "method": "bdev_nvme_attach_controller" 00:29:23.788 },{ 00:29:23.788 "params": { 00:29:23.788 "name": "Nvme8", 00:29:23.788 "trtype": "tcp", 00:29:23.788 "traddr": "10.0.0.2", 00:29:23.788 "adrfam": "ipv4", 00:29:23.788 "trsvcid": "4420", 00:29:23.788 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:23.788 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:23.788 "hdgst": false, 00:29:23.788 "ddgst": false 00:29:23.788 }, 00:29:23.788 "method": "bdev_nvme_attach_controller" 00:29:23.788 },{ 00:29:23.788 "params": { 00:29:23.788 "name": "Nvme9", 00:29:23.788 "trtype": "tcp", 00:29:23.788 "traddr": "10.0.0.2", 00:29:23.788 "adrfam": "ipv4", 00:29:23.788 "trsvcid": "4420", 00:29:23.788 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:23.788 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:29:23.788 "hdgst": false, 00:29:23.788 "ddgst": false 00:29:23.788 }, 00:29:23.788 "method": "bdev_nvme_attach_controller" 00:29:23.788 },{ 00:29:23.788 "params": { 00:29:23.788 "name": "Nvme10", 00:29:23.788 "trtype": "tcp", 00:29:23.788 "traddr": "10.0.0.2", 00:29:23.788 "adrfam": "ipv4", 00:29:23.788 "trsvcid": "4420", 00:29:23.788 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:23.788 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:23.788 "hdgst": false, 00:29:23.788 "ddgst": false 00:29:23.788 }, 00:29:23.788 "method": "bdev_nvme_attach_controller" 00:29:23.788 }' 00:29:23.788 [2024-11-17 02:49:32.064611] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:29:23.788 [2024-11-17 02:49:32.064737] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3055558 ] 00:29:23.788 [2024-11-17 02:49:32.205918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:24.047 [2024-11-17 02:49:32.334114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:26.574 Running I/O for 10 seconds... 
00:29:26.574 02:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:26.574 02:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:29:26.574 02:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:26.574 02:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.574 02:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:26.574 02:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.574 02:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:29:26.574 02:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:26.574 02:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:29:26.574 02:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:29:26.574 02:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:29:26.574 02:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:29:26.574 02:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:26.574 02:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:26.574 02:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:26.574 02:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.574 02:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:26.574 02:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.574 02:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:29:26.574 02:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:29:26.574 02:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:29:26.832 02:49:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:29:26.832 02:49:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:26.832 02:49:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:26.832 02:49:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:26.832 02:49:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.832 02:49:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:26.832 02:49:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.832 02:49:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=74 00:29:26.832 02:49:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@64 -- # '[' 74 -ge 100 ']' 00:29:26.832 02:49:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:29:27.090 02:49:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:29:27.090 02:49:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:27.090 02:49:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:27.090 02:49:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:27.090 02:49:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.090 02:49:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:27.090 02:49:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.090 02:49:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:29:27.090 02:49:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:29:27.090 02:49:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:29:27.090 02:49:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:29:27.090 02:49:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:29:27.090 02:49:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 3055558 00:29:27.090 02:49:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 3055558 
']' 00:29:27.090 02:49:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 3055558 00:29:27.090 02:49:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:29:27.090 02:49:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:27.090 02:49:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3055558 00:29:27.348 1485.00 IOPS, 92.81 MiB/s [2024-11-17T01:49:35.808Z] 02:49:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:27.348 02:49:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:27.348 02:49:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3055558' 00:29:27.348 killing process with pid 3055558 00:29:27.348 02:49:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 3055558 00:29:27.348 02:49:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 3055558 00:29:27.348 Received shutdown signal, test time was about 1.161442 seconds 00:29:27.348 00:29:27.348 Latency(us) 00:29:27.348 [2024-11-17T01:49:35.808Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:27.348 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:27.348 Verification LBA range: start 0x0 length 0x400 00:29:27.348 Nvme1n1 : 1.16 220.58 13.79 0.00 0.00 287056.97 21651.15 324670.20 00:29:27.348 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:27.348 Verification LBA range: start 0x0 length 0x400 00:29:27.348 Nvme2n1 : 1.10 174.80 
10.93 0.00 0.00 355630.65 23981.32 307582.29 00:29:27.348 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:27.349 Verification LBA range: start 0x0 length 0x400 00:29:27.349 Nvme3n1 : 1.14 227.63 14.23 0.00 0.00 267708.96 5606.97 299815.06 00:29:27.349 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:27.349 Verification LBA range: start 0x0 length 0x400 00:29:27.349 Nvme4n1 : 1.15 223.31 13.96 0.00 0.00 268764.16 23592.96 298261.62 00:29:27.349 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:27.349 Verification LBA range: start 0x0 length 0x400 00:29:27.349 Nvme5n1 : 1.11 172.87 10.80 0.00 0.00 339800.37 24758.04 306028.85 00:29:27.349 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:27.349 Verification LBA range: start 0x0 length 0x400 00:29:27.349 Nvme6n1 : 1.15 222.40 13.90 0.00 0.00 259847.96 39418.69 310689.19 00:29:27.349 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:27.349 Verification LBA range: start 0x0 length 0x400 00:29:27.349 Nvme7n1 : 1.09 176.53 11.03 0.00 0.00 319424.79 20291.89 293601.28 00:29:27.349 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:27.349 Verification LBA range: start 0x0 length 0x400 00:29:27.349 Nvme8n1 : 1.13 170.60 10.66 0.00 0.00 325420.63 25631.86 324670.20 00:29:27.349 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:27.349 Verification LBA range: start 0x0 length 0x400 00:29:27.349 Nvme9n1 : 1.14 169.14 10.57 0.00 0.00 322323.09 24660.95 344865.00 00:29:27.349 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:27.349 Verification LBA range: start 0x0 length 0x400 00:29:27.349 Nvme10n1 : 1.12 171.96 10.75 0.00 0.00 309344.84 21942.42 307582.29 00:29:27.349 [2024-11-17T01:49:35.809Z] 
=================================================================================================================== 00:29:27.349 [2024-11-17T01:49:35.809Z] Total : 1929.82 120.61 0.00 0.00 301389.42 5606.97 344865.00 00:29:28.282 02:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:29:29.215 02:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 3055248 00:29:29.215 02:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:29:29.215 02:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:29.215 02:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:29.215 02:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:29.215 02:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:29.215 02:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:29.215 02:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:29:29.215 02:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:29.215 02:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:29:29.215 02:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:29.215 02:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:29.215 
rmmod nvme_tcp 00:29:29.215 rmmod nvme_fabrics 00:29:29.215 rmmod nvme_keyring 00:29:29.473 02:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:29.473 02:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:29:29.473 02:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:29:29.473 02:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 3055248 ']' 00:29:29.473 02:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 3055248 00:29:29.473 02:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 3055248 ']' 00:29:29.473 02:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 3055248 00:29:29.473 02:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:29:29.473 02:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:29.473 02:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3055248 00:29:29.473 02:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:29.473 02:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:29.473 02:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3055248' 00:29:29.473 killing process with pid 3055248 00:29:29.473 02:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 
-- # kill 3055248 00:29:29.473 02:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 3055248 00:29:32.002 02:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:32.002 02:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:32.002 02:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:32.002 02:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:29:32.002 02:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:29:32.002 02:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:32.002 02:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:29:32.002 02:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:32.002 02:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:32.002 02:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:32.002 02:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:32.002 02:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:34.538 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:34.538 00:29:34.538 real 0m12.877s 00:29:34.538 user 0m43.968s 00:29:34.538 sys 0m2.198s 00:29:34.538 02:49:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:34.538 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:34.538 ************************************ 00:29:34.538 END TEST nvmf_shutdown_tc2 00:29:34.538 ************************************ 00:29:34.538 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:29:34.538 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:34.538 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:34.538 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:34.538 ************************************ 00:29:34.538 START TEST nvmf_shutdown_tc3 00:29:34.538 ************************************ 00:29:34.538 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:29:34.538 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:29:34.538 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:34.538 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:34.538 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:34.538 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:34.538 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:34.538 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:29:34.538 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:34.538 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:34.538 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:34.538 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:34.538 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:34.538 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:34.538 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:34.538 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:34.538 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:34.538 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:34.538 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:34.538 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:34.538 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:34.538 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:34.538 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 
-- # net_devs=() 00:29:34.538 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:34.538 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:29:34.538 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:29:34.538 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:29:34.538 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:29:34.538 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:29:34.538 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:34.538 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:34.538 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:34.538 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:34.538 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:34.538 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:34.538 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:34.538 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:34.538 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 
-- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:34.538 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:34.538 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:34.538 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:34.538 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:34.538 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:34.538 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:34.538 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:34.538 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:34.538 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:34.538 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:34.538 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:34.538 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:34.538 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:34.538 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:34.538 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:34.538 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:34.539 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:34.539 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:34.539 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:34.539 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:34.539 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:34.539 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:34.539 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:34.539 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:34.539 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:34.539 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:34.539 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:34.539 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:34.539 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:34.539 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:34.539 02:49:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:34.539 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:34.539 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:34.539 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:34.539 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:34.539 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:34.539 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:34.539 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:34.539 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:34.539 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:34.539 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:34.539 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:34.539 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:34.539 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:34.539 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:34.539 02:49:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:34.539 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:34.539 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:34.539 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:34.539 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:34.539 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:29:34.539 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:34.539 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:34.539 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:34.539 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:34.539 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:34.539 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:34.539 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:34.539 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:34.539 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:34.539 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:34.539 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:34.539 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:34.539 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:34.539 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:34.539 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:34.539 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:34.539 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:34.539 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:34.539 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:34.539 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:34.539 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:34.539 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:34.539 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:34.539 02:49:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:34.539 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:34.539 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:34.539 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:34.539 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.288 ms 00:29:34.539 00:29:34.539 --- 10.0.0.2 ping statistics --- 00:29:34.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:34.539 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:29:34.539 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:34.539 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:34.539 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:29:34.539 00:29:34.539 --- 10.0.0.1 ping statistics --- 00:29:34.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:34.539 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:29:34.539 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:34.539 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:29:34.539 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:34.539 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:34.539 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:34.539 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:34.539 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:34.539 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:34.539 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:34.539 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:34.539 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:34.539 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:34.539 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:34.539 
02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=3056915 00:29:34.539 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:34.539 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 3056915 00:29:34.539 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 3056915 ']' 00:29:34.539 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:34.539 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:34.539 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:34.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:34.539 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:34.539 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:34.539 [2024-11-17 02:49:42.830000] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:29:34.539 [2024-11-17 02:49:42.830182] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:34.539 [2024-11-17 02:49:42.988530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:34.798 [2024-11-17 02:49:43.132049] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:34.798 [2024-11-17 02:49:43.132174] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:34.798 [2024-11-17 02:49:43.132196] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:34.798 [2024-11-17 02:49:43.132215] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:34.798 [2024-11-17 02:49:43.132232] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:34.798 [2024-11-17 02:49:43.135121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:34.798 [2024-11-17 02:49:43.135245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:34.798 [2024-11-17 02:49:43.135285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:34.798 [2024-11-17 02:49:43.135290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:35.364 02:49:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:35.364 02:49:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:29:35.364 02:49:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:35.364 02:49:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:35.364 02:49:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:35.364 02:49:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:35.364 02:49:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:35.364 02:49:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.364 02:49:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:35.364 [2024-11-17 02:49:43.818911] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:35.622 02:49:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.622 02:49:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:35.622 02:49:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:35.622 02:49:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:35.622 02:49:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:35.622 02:49:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:35.622 02:49:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:35.622 02:49:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:35.622 02:49:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:35.622 02:49:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:35.622 02:49:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:35.622 02:49:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:35.622 02:49:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:35.622 02:49:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:35.622 02:49:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:35.622 02:49:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:29:35.622 02:49:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:35.623 02:49:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:35.623 02:49:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:35.623 02:49:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:35.623 02:49:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:35.623 02:49:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:35.623 02:49:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:35.623 02:49:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:35.623 02:49:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:35.623 02:49:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:35.623 02:49:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:35.623 02:49:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.623 02:49:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:35.623 Malloc1 00:29:35.623 [2024-11-17 02:49:43.961560] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:35.623 Malloc2 00:29:35.881 Malloc3 00:29:35.881 Malloc4 00:29:35.881 Malloc5 00:29:36.138 Malloc6 00:29:36.138 Malloc7 00:29:36.397 Malloc8 00:29:36.397 Malloc9 
00:29:36.397 Malloc10 00:29:36.655 02:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.655 02:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:36.655 02:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:36.655 02:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:36.655 02:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=3057201 00:29:36.655 02:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 3057201 /var/tmp/bdevperf.sock 00:29:36.655 02:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 3057201 ']' 00:29:36.655 02:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:36.655 02:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:36.655 02:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:36.655 02:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:36.655 02:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:29:36.655 02:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:29:36.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:36.655 02:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:29:36.655 02:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:36.655 02:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:36.655 02:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:36.655 02:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:36.655 { 00:29:36.655 "params": { 00:29:36.655 "name": "Nvme$subsystem", 00:29:36.655 "trtype": "$TEST_TRANSPORT", 00:29:36.656 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:36.656 "adrfam": "ipv4", 00:29:36.656 "trsvcid": "$NVMF_PORT", 00:29:36.656 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:36.656 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:36.656 "hdgst": ${hdgst:-false}, 00:29:36.656 "ddgst": ${ddgst:-false} 00:29:36.656 }, 00:29:36.656 "method": "bdev_nvme_attach_controller" 00:29:36.656 } 00:29:36.656 EOF 00:29:36.656 )") 00:29:36.656 02:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:36.656 02:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:36.656 02:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:36.656 { 00:29:36.656 "params": { 00:29:36.656 "name": "Nvme$subsystem", 00:29:36.656 "trtype": "$TEST_TRANSPORT", 00:29:36.656 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:36.656 "adrfam": "ipv4", 00:29:36.656 "trsvcid": "$NVMF_PORT", 00:29:36.656 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:29:36.656 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:36.656 "hdgst": ${hdgst:-false}, 00:29:36.656 "ddgst": ${ddgst:-false} 00:29:36.656 }, 00:29:36.656 "method": "bdev_nvme_attach_controller" 00:29:36.656 } 00:29:36.656 EOF 00:29:36.656 )") 00:29:36.656 02:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:36.656 02:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:36.656 02:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:36.656 { 00:29:36.656 "params": { 00:29:36.656 "name": "Nvme$subsystem", 00:29:36.656 "trtype": "$TEST_TRANSPORT", 00:29:36.656 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:36.656 "adrfam": "ipv4", 00:29:36.656 "trsvcid": "$NVMF_PORT", 00:29:36.656 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:36.656 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:36.656 "hdgst": ${hdgst:-false}, 00:29:36.656 "ddgst": ${ddgst:-false} 00:29:36.656 }, 00:29:36.656 "method": "bdev_nvme_attach_controller" 00:29:36.656 } 00:29:36.656 EOF 00:29:36.656 )") 00:29:36.656 02:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:36.656 02:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:36.656 02:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:36.656 { 00:29:36.656 "params": { 00:29:36.656 "name": "Nvme$subsystem", 00:29:36.656 "trtype": "$TEST_TRANSPORT", 00:29:36.656 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:36.656 "adrfam": "ipv4", 00:29:36.656 "trsvcid": "$NVMF_PORT", 00:29:36.656 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:36.656 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:36.656 "hdgst": 
${hdgst:-false}, 00:29:36.656 "ddgst": ${ddgst:-false} 00:29:36.656 }, 00:29:36.656 "method": "bdev_nvme_attach_controller" 00:29:36.656 } 00:29:36.656 EOF 00:29:36.656 )") 00:29:36.656 02:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:36.656 02:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:36.656 02:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:36.656 { 00:29:36.656 "params": { 00:29:36.656 "name": "Nvme$subsystem", 00:29:36.656 "trtype": "$TEST_TRANSPORT", 00:29:36.656 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:36.656 "adrfam": "ipv4", 00:29:36.656 "trsvcid": "$NVMF_PORT", 00:29:36.656 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:36.656 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:36.656 "hdgst": ${hdgst:-false}, 00:29:36.656 "ddgst": ${ddgst:-false} 00:29:36.656 }, 00:29:36.656 "method": "bdev_nvme_attach_controller" 00:29:36.656 } 00:29:36.656 EOF 00:29:36.656 )") 00:29:36.656 02:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:36.656 02:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:36.656 02:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:36.656 { 00:29:36.656 "params": { 00:29:36.656 "name": "Nvme$subsystem", 00:29:36.656 "trtype": "$TEST_TRANSPORT", 00:29:36.656 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:36.656 "adrfam": "ipv4", 00:29:36.656 "trsvcid": "$NVMF_PORT", 00:29:36.656 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:36.656 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:36.656 "hdgst": ${hdgst:-false}, 00:29:36.656 "ddgst": ${ddgst:-false} 00:29:36.656 }, 00:29:36.656 "method": "bdev_nvme_attach_controller" 
00:29:36.656 } 00:29:36.656 EOF 00:29:36.656 )") 00:29:36.656 02:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:36.656 02:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:36.656 02:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:36.656 { 00:29:36.656 "params": { 00:29:36.656 "name": "Nvme$subsystem", 00:29:36.656 "trtype": "$TEST_TRANSPORT", 00:29:36.656 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:36.656 "adrfam": "ipv4", 00:29:36.656 "trsvcid": "$NVMF_PORT", 00:29:36.656 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:36.656 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:36.656 "hdgst": ${hdgst:-false}, 00:29:36.656 "ddgst": ${ddgst:-false} 00:29:36.656 }, 00:29:36.656 "method": "bdev_nvme_attach_controller" 00:29:36.656 } 00:29:36.656 EOF 00:29:36.656 )") 00:29:36.656 02:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:36.656 02:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:36.656 02:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:36.656 { 00:29:36.656 "params": { 00:29:36.656 "name": "Nvme$subsystem", 00:29:36.656 "trtype": "$TEST_TRANSPORT", 00:29:36.656 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:36.656 "adrfam": "ipv4", 00:29:36.656 "trsvcid": "$NVMF_PORT", 00:29:36.656 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:36.656 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:36.656 "hdgst": ${hdgst:-false}, 00:29:36.656 "ddgst": ${ddgst:-false} 00:29:36.656 }, 00:29:36.656 "method": "bdev_nvme_attach_controller" 00:29:36.656 } 00:29:36.656 EOF 00:29:36.656 )") 00:29:36.656 02:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 
-- nvmf/common.sh@582 -- # cat 00:29:36.656 02:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:36.656 02:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:36.656 { 00:29:36.656 "params": { 00:29:36.656 "name": "Nvme$subsystem", 00:29:36.656 "trtype": "$TEST_TRANSPORT", 00:29:36.656 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:36.656 "adrfam": "ipv4", 00:29:36.656 "trsvcid": "$NVMF_PORT", 00:29:36.656 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:36.656 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:36.656 "hdgst": ${hdgst:-false}, 00:29:36.656 "ddgst": ${ddgst:-false} 00:29:36.656 }, 00:29:36.656 "method": "bdev_nvme_attach_controller" 00:29:36.656 } 00:29:36.656 EOF 00:29:36.656 )") 00:29:36.656 02:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:36.656 02:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:36.656 02:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:36.656 { 00:29:36.656 "params": { 00:29:36.656 "name": "Nvme$subsystem", 00:29:36.656 "trtype": "$TEST_TRANSPORT", 00:29:36.656 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:36.656 "adrfam": "ipv4", 00:29:36.656 "trsvcid": "$NVMF_PORT", 00:29:36.656 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:36.656 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:36.656 "hdgst": ${hdgst:-false}, 00:29:36.656 "ddgst": ${ddgst:-false} 00:29:36.656 }, 00:29:36.656 "method": "bdev_nvme_attach_controller" 00:29:36.656 } 00:29:36.656 EOF 00:29:36.656 )") 00:29:36.656 02:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:36.656 02:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@584 -- # jq . 00:29:36.656 02:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:29:36.656 02:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:36.656 "params": { 00:29:36.656 "name": "Nvme1", 00:29:36.656 "trtype": "tcp", 00:29:36.656 "traddr": "10.0.0.2", 00:29:36.656 "adrfam": "ipv4", 00:29:36.656 "trsvcid": "4420", 00:29:36.656 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:36.656 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:36.656 "hdgst": false, 00:29:36.656 "ddgst": false 00:29:36.656 }, 00:29:36.656 "method": "bdev_nvme_attach_controller" 00:29:36.656 },{ 00:29:36.656 "params": { 00:29:36.656 "name": "Nvme2", 00:29:36.656 "trtype": "tcp", 00:29:36.656 "traddr": "10.0.0.2", 00:29:36.656 "adrfam": "ipv4", 00:29:36.656 "trsvcid": "4420", 00:29:36.656 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:36.657 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:36.657 "hdgst": false, 00:29:36.657 "ddgst": false 00:29:36.657 }, 00:29:36.657 "method": "bdev_nvme_attach_controller" 00:29:36.657 },{ 00:29:36.657 "params": { 00:29:36.657 "name": "Nvme3", 00:29:36.657 "trtype": "tcp", 00:29:36.657 "traddr": "10.0.0.2", 00:29:36.657 "adrfam": "ipv4", 00:29:36.657 "trsvcid": "4420", 00:29:36.657 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:36.657 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:36.657 "hdgst": false, 00:29:36.657 "ddgst": false 00:29:36.657 }, 00:29:36.657 "method": "bdev_nvme_attach_controller" 00:29:36.657 },{ 00:29:36.657 "params": { 00:29:36.657 "name": "Nvme4", 00:29:36.657 "trtype": "tcp", 00:29:36.657 "traddr": "10.0.0.2", 00:29:36.657 "adrfam": "ipv4", 00:29:36.657 "trsvcid": "4420", 00:29:36.657 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:36.657 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:36.657 "hdgst": false, 00:29:36.657 "ddgst": false 00:29:36.657 }, 00:29:36.657 "method": "bdev_nvme_attach_controller" 00:29:36.657 },{ 
00:29:36.657 "params": { 00:29:36.657 "name": "Nvme5", 00:29:36.657 "trtype": "tcp", 00:29:36.657 "traddr": "10.0.0.2", 00:29:36.657 "adrfam": "ipv4", 00:29:36.657 "trsvcid": "4420", 00:29:36.657 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:36.657 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:36.657 "hdgst": false, 00:29:36.657 "ddgst": false 00:29:36.657 }, 00:29:36.657 "method": "bdev_nvme_attach_controller" 00:29:36.657 },{ 00:29:36.657 "params": { 00:29:36.657 "name": "Nvme6", 00:29:36.657 "trtype": "tcp", 00:29:36.657 "traddr": "10.0.0.2", 00:29:36.657 "adrfam": "ipv4", 00:29:36.657 "trsvcid": "4420", 00:29:36.657 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:36.657 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:36.657 "hdgst": false, 00:29:36.657 "ddgst": false 00:29:36.657 }, 00:29:36.657 "method": "bdev_nvme_attach_controller" 00:29:36.657 },{ 00:29:36.657 "params": { 00:29:36.657 "name": "Nvme7", 00:29:36.657 "trtype": "tcp", 00:29:36.657 "traddr": "10.0.0.2", 00:29:36.657 "adrfam": "ipv4", 00:29:36.657 "trsvcid": "4420", 00:29:36.657 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:36.657 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:36.657 "hdgst": false, 00:29:36.657 "ddgst": false 00:29:36.657 }, 00:29:36.657 "method": "bdev_nvme_attach_controller" 00:29:36.657 },{ 00:29:36.657 "params": { 00:29:36.657 "name": "Nvme8", 00:29:36.657 "trtype": "tcp", 00:29:36.657 "traddr": "10.0.0.2", 00:29:36.657 "adrfam": "ipv4", 00:29:36.657 "trsvcid": "4420", 00:29:36.657 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:36.657 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:36.657 "hdgst": false, 00:29:36.657 "ddgst": false 00:29:36.657 }, 00:29:36.657 "method": "bdev_nvme_attach_controller" 00:29:36.657 },{ 00:29:36.657 "params": { 00:29:36.657 "name": "Nvme9", 00:29:36.657 "trtype": "tcp", 00:29:36.657 "traddr": "10.0.0.2", 00:29:36.657 "adrfam": "ipv4", 00:29:36.657 "trsvcid": "4420", 00:29:36.657 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:36.657 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:29:36.657 "hdgst": false, 00:29:36.657 "ddgst": false 00:29:36.657 }, 00:29:36.657 "method": "bdev_nvme_attach_controller" 00:29:36.657 },{ 00:29:36.657 "params": { 00:29:36.657 "name": "Nvme10", 00:29:36.657 "trtype": "tcp", 00:29:36.657 "traddr": "10.0.0.2", 00:29:36.657 "adrfam": "ipv4", 00:29:36.657 "trsvcid": "4420", 00:29:36.657 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:36.657 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:36.657 "hdgst": false, 00:29:36.657 "ddgst": false 00:29:36.657 }, 00:29:36.657 "method": "bdev_nvme_attach_controller" 00:29:36.657 }' 00:29:36.657 [2024-11-17 02:49:44.992314] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:29:36.657 [2024-11-17 02:49:44.992468] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3057201 ] 00:29:36.915 [2024-11-17 02:49:45.141210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:36.915 [2024-11-17 02:49:45.269216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:38.815 Running I/O for 10 seconds... 
00:29:39.381 02:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:39.381 02:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:29:39.381 02:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:39.381 02:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:39.381 02:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:39.381 02:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:39.381 02:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:39.381 02:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:29:39.381 02:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:39.381 02:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:29:39.381 02:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:29:39.381 02:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:29:39.381 02:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:29:39.381 02:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:39.381 02:49:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:39.381 02:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:39.381 02:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:39.381 02:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:39.381 02:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:39.381 02:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:29:39.381 02:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:29:39.381 02:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:29:39.639 02:49:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:29:39.639 02:49:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:39.639 02:49:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:39.639 02:49:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:39.639 02:49:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:39.639 02:49:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:39.639 02:49:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]]
00:29:39.639 02:49:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=129
00:29:39.639 02:49:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 129 -ge 100 ']'
00:29:39.639 02:49:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0
00:29:39.639 02:49:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break
00:29:39.639 02:49:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0
00:29:39.639 02:49:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 3056915
00:29:39.639 02:49:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 3056915 ']'
00:29:39.639 02:49:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 3056915
00:29:39.639 02:49:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname
00:29:39.914 02:49:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:39.914 02:49:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3056915
00:29:39.914 02:49:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:29:39.914 02:49:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:29:39.914 02:49:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3056915'
killing process with pid 3056915
00:29:39.914 02:49:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 3056915
00:29:39.914 02:49:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 3056915
00:29:39.914 [2024-11-17 02:49:48.134462] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set
[previous message repeated with successive timestamps through 02:49:48.135759]
00:29:39.915 [2024-11-17 02:49:48.138293] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set
[previous message repeated with successive timestamps through 02:49:48.139458]
00:29:39.915 [2024-11-17 02:49:48.140648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:29:39.915 [2024-11-17 02:49:48.140714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:39.915 [2024-11-17 02:49:48.140744] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:29:39.915 [2024-11-17 02:49:48.140773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:39.915 [2024-11-17 02:49:48.140797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:29:39.915 [2024-11-17 02:49:48.140819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:39.915 [2024-11-17 02:49:48.140842] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:29:39.915 [2024-11-17 02:49:48.140863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:39.915 [2024-11-17 02:49:48.140883] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:29:39.915 [2024-11-17 02:49:48.142002] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set
[previous message repeated with successive timestamps through 02:49:48.143335]
00:29:39.916 [2024-11-17 02:49:48.149197] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set
[previous message repeated with successive timestamps through 02:49:48.150315]
tqpair=0x618000007c80 is same with the state(6) to be set 00:29:39.917 [2024-11-17 02:49:48.150334] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:39.917 [2024-11-17 02:49:48.150353] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:39.917 [2024-11-17 02:49:48.150395] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:39.917 [2024-11-17 02:49:48.150420] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:39.917 [2024-11-17 02:49:48.150439] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:39.917 [2024-11-17 02:49:48.150458] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:39.917 [2024-11-17 02:49:48.150477] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:39.917 [2024-11-17 02:49:48.150497] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:39.917 [2024-11-17 02:49:48.150533] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:39.917 [2024-11-17 02:49:48.150557] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:39.917 [2024-11-17 02:49:48.150578] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:39.917 [2024-11-17 02:49:48.150610] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:39.917 [2024-11-17 02:49:48.150633] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:39.917 [2024-11-17 02:49:48.150654] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:39.917 [2024-11-17 02:49:48.150692] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:39.917 [2024-11-17 02:49:48.150713] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:39.917 [2024-11-17 02:49:48.150732] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:39.917 [2024-11-17 02:49:48.150762] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:39.917 [2024-11-17 02:49:48.150806] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:39.917 [2024-11-17 02:49:48.150843] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:39.917 [2024-11-17 02:49:48.150875] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:39.917 [2024-11-17 02:49:48.150897] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:39.917 [2024-11-17 02:49:48.150941] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:39.917 
[2024-11-17 02:49:48.151198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:39.917 [2024-11-17 02:49:48.151245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.917 [2024-11-17 02:49:48.151274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:39.917 [2024-11-17 02:49:48.151296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.917 [2024-11-17 02:49:48.151318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:39.917 [2024-11-17 02:49:48.151340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.917 [2024-11-17 02:49:48.151362] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:39.917 [2024-11-17 02:49:48.151396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.917 [2024-11-17 02:49:48.151416] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7f00 is same with the state(6) to be set 00:29:39.917 [2024-11-17 02:49:48.151544] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:39.917 [2024-11-17 02:49:48.151576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.917 [2024-11-17 02:49:48.151600] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:39.917 [2024-11-17 02:49:48.151622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.917 [2024-11-17 02:49:48.151643] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:39.917 [2024-11-17 02:49:48.151664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.917 [2024-11-17 02:49:48.151686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:39.917 [2024-11-17 02:49:48.151706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.917 [2024-11-17 02:49:48.151726] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f3900 is same with the state(6) to be set 00:29:39.917 [2024-11-17 02:49:48.151794] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:29:39.917 [2024-11-17 02:49:48.151875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:39.917 [2024-11-17 02:49:48.151933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.917 [2024-11-17 02:49:48.151961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:39.917 [2024-11-17 02:49:48.151983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.917 [2024-11-17 
02:49:48.152005] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:39.917 [2024-11-17 02:49:48.152026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.917 [2024-11-17 02:49:48.152053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:39.917 [2024-11-17 02:49:48.152075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.917 [2024-11-17 02:49:48.152116] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2f00 is same with the state(6) to be set 00:29:39.917 [2024-11-17 02:49:48.152995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.917 [2024-11-17 02:49:48.153031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.917 [2024-11-17 02:49:48.153083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.917 [2024-11-17 02:49:48.153124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.917 [2024-11-17 02:49:48.153152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.917 [2024-11-17 02:49:48.153174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.917 [2024-11-17 02:49:48.153199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.917 [2024-11-17 02:49:48.153221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.917 [2024-11-17 02:49:48.153245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.917 [2024-11-17 02:49:48.153267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.917 [2024-11-17 02:49:48.153293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.917 [2024-11-17 02:49:48.153315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.917 [2024-11-17 02:49:48.153339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.917 [2024-11-17 02:49:48.153362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.917 [2024-11-17 02:49:48.153387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.917 [2024-11-17 02:49:48.153419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.918 [2024-11-17 02:49:48.153444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.918 [2024-11-17 02:49:48.153466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:29:39.918 [2024-11-17 02:49:48.153491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.918 [2024-11-17 02:49:48.153513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.918 [2024-11-17 02:49:48.153538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.918 [2024-11-17 02:49:48.153561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.918 [2024-11-17 02:49:48.153592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.918 [2024-11-17 02:49:48.153615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.918 [2024-11-17 02:49:48.153649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.918 [2024-11-17 02:49:48.153671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.918 [2024-11-17 02:49:48.153696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.918 [2024-11-17 02:49:48.153718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.918 [2024-11-17 02:49:48.153743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.918 [2024-11-17 02:49:48.153765] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.918 [2024-11-17 02:49:48.153790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.918 [2024-11-17 02:49:48.153812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.918 [2024-11-17 02:49:48.153837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.918 [2024-11-17 02:49:48.153859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.918 [2024-11-17 02:49:48.153884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.918 [2024-11-17 02:49:48.153907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.918 [2024-11-17 02:49:48.153932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.918 [2024-11-17 02:49:48.153954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.918 [2024-11-17 02:49:48.153978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.918 [2024-11-17 02:49:48.154001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.918 [2024-11-17 02:49:48.154025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.918 [2024-11-17 02:49:48.154048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.918 [2024-11-17 02:49:48.154074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.918 [2024-11-17 02:49:48.154100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.918 [2024-11-17 02:49:48.154139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.918 [2024-11-17 02:49:48.154161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.918 [2024-11-17 02:49:48.154186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.918 [2024-11-17 02:49:48.154213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.918 [2024-11-17 02:49:48.154239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.918 [2024-11-17 02:49:48.154261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.918 [2024-11-17 02:49:48.154286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.918 [2024-11-17 02:49:48.154308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:39.918 [2024-11-17 02:49:48.154333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.918 [2024-11-17 02:49:48.154355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.918 [2024-11-17 02:49:48.154359] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:39.918 [2024-11-17 02:49:48.154390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.918 [2024-11-17 02:49:48.154413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.918 [2024-11-17 02:49:48.154412] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:39.918 [2024-11-17 02:49:48.154436] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:39.918 [2024-11-17 02:49:48.154438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.918 [2024-11-17 02:49:48.154455] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:39.918 [2024-11-17 02:49:48.154460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.918 [2024-11-17 02:49:48.154473] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:39.918 [2024-11-17 02:49:48.154485] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.918 [2024-11-17 02:49:48.154491] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:39.918 [2024-11-17 02:49:48.154507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.918 [2024-11-17 02:49:48.154510] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:39.918 [2024-11-17 02:49:48.154530] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:39.918 [2024-11-17 02:49:48.154532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.918 [2024-11-17 02:49:48.154548] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:39.918 [2024-11-17 02:49:48.154554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.918 [2024-11-17 02:49:48.154567] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:39.918 [2024-11-17 02:49:48.154579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.918 [2024-11-17 02:49:48.154592] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:39.918 [2024-11-17 02:49:48.154601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:29:39.918 [2024-11-17 02:49:48.154613] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:39.918 [2024-11-17 02:49:48.154626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.918 [2024-11-17 02:49:48.154632] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:39.918 [2024-11-17 02:49:48.154648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.918 [2024-11-17 02:49:48.154650] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:39.918 [2024-11-17 02:49:48.154670] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:39.919 [2024-11-17 02:49:48.154673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.919 [2024-11-17 02:49:48.154688] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:39.919 [2024-11-17 02:49:48.154695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.919 [2024-11-17 02:49:48.154707] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:39.919 [2024-11-17 02:49:48.154720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.919 [2024-11-17 02:49:48.154726] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:39.919 [2024-11-17 02:49:48.154742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.919 [2024-11-17 02:49:48.154745] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:39.919 [2024-11-17 02:49:48.154764] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:39.919 [2024-11-17 02:49:48.154767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.919 [2024-11-17 02:49:48.154783] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:39.919 [2024-11-17 02:49:48.154788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.919 [2024-11-17 02:49:48.154802] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:39.919 [2024-11-17 02:49:48.154813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.919 [2024-11-17 02:49:48.154822] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:39.919 [2024-11-17 02:49:48.154835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.919 [2024-11-17 02:49:48.154842] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the 
state(6) to be set 00:29:39.919 [2024-11-17 02:49:48.154860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.919 [2024-11-17 02:49:48.154865] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:39.919 [2024-11-17 02:49:48.154883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.919 [2024-11-17 02:49:48.154885] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:39.919 [2024-11-17 02:49:48.154904] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:39.919 [2024-11-17 02:49:48.154908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.919 [2024-11-17 02:49:48.154924] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:39.919 [2024-11-17 02:49:48.154929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.919 [2024-11-17 02:49:48.154942] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:39.919 [2024-11-17 02:49:48.154954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.919 [2024-11-17 02:49:48.154961] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:39.919 [2024-11-17 02:49:48.154976] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.919 [2024-11-17 02:49:48.154979] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:39.919 [2024-11-17 02:49:48.154998] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:39.919 [2024-11-17 02:49:48.155001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.919 [2024-11-17 02:49:48.155017] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:39.919 [2024-11-17 02:49:48.155023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.919 [2024-11-17 02:49:48.155037] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:39.919 [2024-11-17 02:49:48.155048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.919 [2024-11-17 02:49:48.155057] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:39.919 [2024-11-17 02:49:48.155070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.919 [2024-11-17 02:49:48.155076] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:39.919 [2024-11-17 02:49:48.155113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:39.919 [2024-11-17 02:49:48.155118] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:39.919 [2024-11-17 02:49:48.155138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.919 [2024-11-17 02:49:48.155139] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:39.919 [2024-11-17 02:49:48.155169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.919 [2024-11-17 02:49:48.155165] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:39.919 [2024-11-17 02:49:48.155192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.919 [2024-11-17 02:49:48.155207] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:39.919 [2024-11-17 02:49:48.155217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.919 [2024-11-17 02:49:48.155227] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:39.919 [2024-11-17 02:49:48.155238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.919 [2024-11-17 02:49:48.155246] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:39.919 [2024-11-17 02:49:48.155265] tcp.c:1773:nvmf_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:39.919 [2024-11-17 02:49:48.155264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.919 [2024-11-17 02:49:48.155286] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:39.919 [2024-11-17 02:49:48.155288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.919 [2024-11-17 02:49:48.155308] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:39.919 [2024-11-17 02:49:48.155314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.919 [2024-11-17 02:49:48.155327] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:39.919 [2024-11-17 02:49:48.155336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.919 [2024-11-17 02:49:48.155346] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:39.919 [2024-11-17 02:49:48.155365] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:39.919 [2024-11-17 02:49:48.155362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.919 [2024-11-17 02:49:48.155393] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:39.919 
[2024-11-17 02:49:48.155402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.919 [2024-11-17 02:49:48.155413] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:39.919 [2024-11-17 02:49:48.155428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.919 [2024-11-17 02:49:48.155432] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:39.919 [2024-11-17 02:49:48.155450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.919 [2024-11-17 02:49:48.155451] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:39.919 [2024-11-17 02:49:48.155476] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:39.919 [2024-11-17 02:49:48.155480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.919 [2024-11-17 02:49:48.155495] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:39.919 [2024-11-17 02:49:48.155503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.919 [2024-11-17 02:49:48.155515] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:39.919 [2024-11-17 02:49:48.155528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 
nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.919 [2024-11-17 02:49:48.155533] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:39.919 [2024-11-17 02:49:48.155551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.919 [2024-11-17 02:49:48.155552] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:39.920 [2024-11-17 02:49:48.155573] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:39.920 [2024-11-17 02:49:48.155578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.920 [2024-11-17 02:49:48.155591] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:39.920 [2024-11-17 02:49:48.155599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.920 [2024-11-17 02:49:48.155610] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:39.920 [2024-11-17 02:49:48.155624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.920 [2024-11-17 02:49:48.155628] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:39.920 [2024-11-17 02:49:48.155646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.920 [2024-11-17 02:49:48.155647] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:39.920 [2024-11-17 02:49:48.155668] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:39.920 [2024-11-17 02:49:48.155673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.920 [2024-11-17 02:49:48.155695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.920 [2024-11-17 02:49:48.155736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.920 [2024-11-17 02:49:48.155759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.920 [2024-11-17 02:49:48.155783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.920 [2024-11-17 02:49:48.155804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.920 [2024-11-17 02:49:48.155832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.920 [2024-11-17 02:49:48.155853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.920 [2024-11-17 02:49:48.155877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.920 [2024-11-17 02:49:48.155897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.920 [2024-11-17 02:49:48.155921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.920 [2024-11-17 02:49:48.155942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.920 [2024-11-17 02:49:48.155965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.920 [2024-11-17 02:49:48.155986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.920 [2024-11-17 02:49:48.156024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.920 [2024-11-17 02:49:48.156046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.920 [2024-11-17 02:49:48.156070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.920 [2024-11-17 02:49:48.156126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.920 [2024-11-17 02:49:48.156152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.920 [2024-11-17 02:49:48.156174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.920 [2024-11-17 02:49:48.156197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:39.920 [2024-11-17 02:49:48.156219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.920 [2024-11-17 02:49:48.156241] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fb600 is same with the state(6) to be set 00:29:39.920 [2024-11-17 02:49:48.156662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.920 [2024-11-17 02:49:48.156691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.920 [2024-11-17 02:49:48.156721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.920 [2024-11-17 02:49:48.156743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.920 [2024-11-17 02:49:48.156769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.920 [2024-11-17 02:49:48.156790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.920 [2024-11-17 02:49:48.156814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.920 [2024-11-17 02:49:48.156836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.920 [2024-11-17 02:49:48.156865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.920 [2024-11-17 02:49:48.156888] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.920 [2024-11-17 02:49:48.156912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.920 [2024-11-17 02:49:48.156934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.920 [2024-11-17 02:49:48.156958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.920 [2024-11-17 02:49:48.156979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.920 [2024-11-17 02:49:48.157003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.920 [2024-11-17 02:49:48.157024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.920 [2024-11-17 02:49:48.157048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.920 [2024-11-17 02:49:48.157069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.920 [2024-11-17 02:49:48.157131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.920 [2024-11-17 02:49:48.157156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.920 [2024-11-17 02:49:48.157181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.920 [2024-11-17 02:49:48.157203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.920 [2024-11-17 02:49:48.157228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.920 [2024-11-17 02:49:48.157251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.920 [2024-11-17 02:49:48.157263] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:39.920 [2024-11-17 02:49:48.157275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.920 [2024-11-17 02:49:48.157302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.920 [2024-11-17 02:49:48.157306] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:39.920 [2024-11-17 02:49:48.157326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.920 [2024-11-17 02:49:48.157330] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:39.920 [2024-11-17 02:49:48.157349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.920 [2024-11-17 02:49:48.157350] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:39.920 
[2024-11-17 02:49:48.157382] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:39.920 [2024-11-17 02:49:48.157385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.920 [2024-11-17 02:49:48.157407] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:39.920 [2024-11-17 02:49:48.157411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.920 [2024-11-17 02:49:48.157455] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:39.920 [2024-11-17 02:49:48.157462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.920 [2024-11-17 02:49:48.157474] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:39.920 [2024-11-17 02:49:48.157484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.920 [2024-11-17 02:49:48.157492] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:39.920 [2024-11-17 02:49:48.157508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.920 [2024-11-17 02:49:48.157511] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:39.920 [2024-11-17 02:49:48.157532] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000008480 is same with the state(6) to be set 00:29:39.920 [2024-11-17 02:49:48.157532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.920 [2024-11-17 02:49:48.157553] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:39.921 [2024-11-17 02:49:48.157559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.921 [2024-11-17 02:49:48.157571] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:39.921 [2024-11-17 02:49:48.157581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.921 [2024-11-17 02:49:48.157590] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:39.921 [2024-11-17 02:49:48.157605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.921 [2024-11-17 02:49:48.157609] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:39.921 [2024-11-17 02:49:48.157626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.921 [2024-11-17 02:49:48.157627] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:39.921 [2024-11-17 02:49:48.157648] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:39.921 [2024-11-17 02:49:48.157652] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.921 [2024-11-17 02:49:48.157666] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:39.921 [2024-11-17 02:49:48.157674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.921 [2024-11-17 02:49:48.157685] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:39.921 [2024-11-17 02:49:48.157698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.921 [2024-11-17 02:49:48.157711] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:39.921 [2024-11-17 02:49:48.157719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.921 [2024-11-17 02:49:48.157731] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:39.921 [2024-11-17 02:49:48.157743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.921 [2024-11-17 02:49:48.157749] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:39.921 [2024-11-17 02:49:48.157764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.921 [2024-11-17 02:49:48.157768] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is 
same with the state(6) to be set 00:29:39.921 [2024-11-17 02:49:48.157787] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:39.921 [2024-11-17 02:49:48.157788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.921 [2024-11-17 02:49:48.157806] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:39.921 [2024-11-17 02:49:48.157817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.921 [2024-11-17 02:49:48.157825] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:39.921 [2024-11-17 02:49:48.157843] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:39.921 [2024-11-17 02:49:48.157842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.921 [2024-11-17 02:49:48.157864] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:39.921 [2024-11-17 02:49:48.157866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.921 [2024-11-17 02:49:48.157884] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:39.921 [2024-11-17 02:49:48.157890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.921 [2024-11-17 02:49:48.157903] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:39.921 [2024-11-17 02:49:48.157912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.921 [2024-11-17 02:49:48.157921] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:39.921 [2024-11-17 02:49:48.157935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.921 [2024-11-17 02:49:48.157940] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:39.921 [2024-11-17 02:49:48.157956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.921 [2024-11-17 02:49:48.157959] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:39.921 [2024-11-17 02:49:48.157980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.921 [2024-11-17 02:49:48.157982] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:39.921 [2024-11-17 02:49:48.158004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.921 [2024-11-17 02:49:48.158004] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:39.921 [2024-11-17 02:49:48.158025] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the 
state(6) to be set 00:29:39.921 [2024-11-17 02:49:48.158029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.921 [2024-11-17 02:49:48.158044] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:39.921 [2024-11-17 02:49:48.158051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.921 [2024-11-17 02:49:48.158063] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:39.921 [2024-11-17 02:49:48.158085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.921 [2024-11-17 02:49:48.158090] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:39.921 [2024-11-17 02:49:48.158130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.921 [2024-11-17 02:49:48.158138] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:39.921 [2024-11-17 02:49:48.158159] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:39.921 [2024-11-17 02:49:48.158158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.921 [2024-11-17 02:49:48.158180] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:39.921 [2024-11-17 02:49:48.158182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.921 [2024-11-17 02:49:48.158201] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:39.921 [2024-11-17 02:49:48.158208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.921 [2024-11-17 02:49:48.158220] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:39.921 [2024-11-17 02:49:48.158230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.921 [2024-11-17 02:49:48.158240] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:39.921 [2024-11-17 02:49:48.158255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.921 [2024-11-17 02:49:48.158260] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:39.921 [2024-11-17 02:49:48.158277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.921 [2024-11-17 02:49:48.158279] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:39.921 [2024-11-17 02:49:48.158303] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:39.921 [2024-11-17 02:49:48.158306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:39.921 [2024-11-17 02:49:48.158322] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:39.921 [2024-11-17 02:49:48.158329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.921 [2024-11-17 02:49:48.158341] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:39.921 [2024-11-17 02:49:48.158354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.921 [2024-11-17 02:49:48.158360] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:39.921 [2024-11-17 02:49:48.158387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.921 [2024-11-17 02:49:48.158389] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:39.921 [2024-11-17 02:49:48.158424] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:39.921 [2024-11-17 02:49:48.158428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.921 [2024-11-17 02:49:48.158443] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:39.921 [2024-11-17 02:49:48.158450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.921 [2024-11-17 02:49:48.158462] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state 
of tqpair=0x618000008480 is same with the state(6) to be set 00:29:39.922 [2024-11-17 02:49:48.158474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.922 [2024-11-17 02:49:48.158481] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:39.922 [2024-11-17 02:49:48.158495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.922 [2024-11-17 02:49:48.158499] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:39.922 [2024-11-17 02:49:48.158517] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:39.922 [2024-11-17 02:49:48.158518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.922 [2024-11-17 02:49:48.158535] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:39.922 [2024-11-17 02:49:48.158540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.922 [2024-11-17 02:49:48.158553] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:39.922 [2024-11-17 02:49:48.158564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.922 [2024-11-17 02:49:48.158572] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:39.922 [2024-11-17 
02:49:48.158585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.922 [2024-11-17 02:49:48.158595] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:39.922 [2024-11-17 02:49:48.158609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.922 [2024-11-17 02:49:48.158614] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:39.922 [2024-11-17 02:49:48.158635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.922 [2024-11-17 02:49:48.158660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.922 [2024-11-17 02:49:48.158681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.922 [2024-11-17 02:49:48.158742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.922 [2024-11-17 02:49:48.158765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.922 [2024-11-17 02:49:48.158789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.922 [2024-11-17 02:49:48.158811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.922 [2024-11-17 02:49:48.158836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.922 [2024-11-17 02:49:48.158858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.922 [2024-11-17 02:49:48.158881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.922 [2024-11-17 02:49:48.158903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.922 [2024-11-17 02:49:48.158927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.922 [2024-11-17 02:49:48.158949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.922 [2024-11-17 02:49:48.158974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.922 [2024-11-17 02:49:48.159011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.922 [2024-11-17 02:49:48.159036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.922 [2024-11-17 02:49:48.159057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.922 [2024-11-17 02:49:48.159092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.922 [2024-11-17 02:49:48.159137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:39.922 [2024-11-17 02:49:48.159163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.922 [2024-11-17 02:49:48.159184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.922 [2024-11-17 02:49:48.159213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.922 [2024-11-17 02:49:48.159236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.922 [2024-11-17 02:49:48.159261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.922 [2024-11-17 02:49:48.159282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.922 [2024-11-17 02:49:48.159306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.922 [2024-11-17 02:49:48.159327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.922 [2024-11-17 02:49:48.159352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.922 [2024-11-17 02:49:48.159374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.922 [2024-11-17 02:49:48.159410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.922 [2024-11-17 
02:49:48.159431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.922 [2024-11-17 02:49:48.159460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.922 [2024-11-17 02:49:48.159497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.922 [2024-11-17 02:49:48.159522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.922 [2024-11-17 02:49:48.159543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.922 [2024-11-17 02:49:48.159566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.922 [2024-11-17 02:49:48.159587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.922 [2024-11-17 02:49:48.159610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.922 [2024-11-17 02:49:48.159632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.922 [2024-11-17 02:49:48.159655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.922 [2024-11-17 02:49:48.159677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.922 [2024-11-17 02:49:48.159700] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.922 [2024-11-17 02:49:48.159734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.922 [2024-11-17 02:49:48.159760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.922 [2024-11-17 02:49:48.159782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.922 [2024-11-17 02:49:48.159806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.922 [2024-11-17 02:49:48.159832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.922 [2024-11-17 02:49:48.159857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.922 [2024-11-17 02:49:48.159878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.922 [2024-11-17 02:49:48.159902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.922 [2024-11-17 02:49:48.159923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.922 [2024-11-17 02:49:48.159944] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f9f80 is same with the state(6) to be set 00:29:39.922 [2024-11-17 02:49:48.160320] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:39.922 [2024-11-17 
02:49:48.161106] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:39.922 [2024-11-17 02:49:48.161145] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:39.922 [2024-11-17 02:49:48.161168] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:39.922 [2024-11-17 02:49:48.161187] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:39.922 [2024-11-17 02:49:48.161265] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:39.922 [2024-11-17 02:49:48.161285] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:39.922 [2024-11-17 02:49:48.161304] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:39.922 [2024-11-17 02:49:48.161324] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:39.922 [2024-11-17 02:49:48.161343] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:39.922 [2024-11-17 02:49:48.161361] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:39.922 [2024-11-17 02:49:48.161391] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:39.922 [2024-11-17 02:49:48.161409] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be 
set 00:29:39.923 [2024-11-17 02:49:48.161427] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:39.923 [2024-11-17 02:49:48.161446] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:39.923 [2024-11-17 02:49:48.161465] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:39.923 [2024-11-17 02:49:48.161484] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:39.923 [2024-11-17 02:49:48.161503] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:39.923 [2024-11-17 02:49:48.161521] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:39.923 [2024-11-17 02:49:48.161540] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:39.923 [2024-11-17 02:49:48.161558] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:39.923 [2024-11-17 02:49:48.161582] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:39.923 [2024-11-17 02:49:48.161602] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:39.923 [2024-11-17 02:49:48.161621] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:39.923 [2024-11-17 02:49:48.161641] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is 
same with the state(6) to be set 00:29:39.923 [2024-11-17 02:49:48.161660] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:39.923 [2024-11-17 02:49:48.161679] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:39.923 [2024-11-17 02:49:48.161698] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:39.923 [2024-11-17 02:49:48.161716] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:39.923 [2024-11-17 02:49:48.161735] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:39.923 [2024-11-17 02:49:48.161754] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:39.923 [2024-11-17 02:49:48.161772] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:39.923 [2024-11-17 02:49:48.161791] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:39.923 [2024-11-17 02:49:48.161809] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:39.923 [2024-11-17 02:49:48.161829] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:39.923 [2024-11-17 02:49:48.161848] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:39.923 [2024-11-17 02:49:48.161866] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state 
of tqpair=0x618000008880 is same with the state(6) to be set 00:29:39.923 [2024-11-17 02:49:48.161885] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:39.923 [2024-11-17 02:49:48.161903] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:39.923 [2024-11-17 02:49:48.161922] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:39.923 [2024-11-17 02:49:48.161941] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:39.923 [2024-11-17 02:49:48.161959] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:39.923 [2024-11-17 02:49:48.161977] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:39.923 [2024-11-17 02:49:48.161995] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:39.923 [2024-11-17 02:49:48.162013] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:39.923 [2024-11-17 02:49:48.162031] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:39.923 [2024-11-17 02:49:48.162050] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:39.923 [2024-11-17 02:49:48.162073] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:39.923 [2024-11-17 02:49:48.162107] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:39.923 [2024-11-17 02:49:48.162128] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:39.923 [2024-11-17 02:49:48.162149] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:39.923 [2024-11-17 02:49:48.162168] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:39.923 [2024-11-17 02:49:48.162186] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:39.923 [2024-11-17 02:49:48.162205] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:39.923 [2024-11-17 02:49:48.162224] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:39.923 [2024-11-17 02:49:48.162242] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:39.923 [2024-11-17 02:49:48.162260] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:39.923 [2024-11-17 02:49:48.162264] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:29:39.923 [2024-11-17 02:49:48.162279] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:39.923 [2024-11-17 02:49:48.162297] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:39.923 [2024-11-17 
02:49:48.162316] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:39.923 [2024-11-17 02:49:48.162334] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:39.923 [2024-11-17 02:49:48.162353] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:39.923 [2024-11-17 02:49:48.162349] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7f00 (9): Bad file descriptor 00:29:39.923 [2024-11-17 02:49:48.162371] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:39.923 [2024-11-17 02:49:48.162395] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:39.923 [2024-11-17 02:49:48.162452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:39.923 [2024-11-17 02:49:48.162481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.923 [2024-11-17 02:49:48.162528] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:39.923 [2024-11-17 02:49:48.162552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.923 [2024-11-17 02:49:48.162573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:39.923 [2024-11-17 02:49:48.162594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.923 [2024-11-17 02:49:48.162615] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:39.923 [2024-11-17 02:49:48.162641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.923 [2024-11-17 02:49:48.162660] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4d00 is same with the state(6) to be set 00:29:39.923 [2024-11-17 02:49:48.162734] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f3900 (9): Bad file descriptor 00:29:39.923 [2024-11-17 02:49:48.162809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:39.923 [2024-11-17 02:49:48.162836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.923 [2024-11-17 02:49:48.162859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:39.923 [2024-11-17 02:49:48.162881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.923 [2024-11-17 02:49:48.162902] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:39.923 [2024-11-17 02:49:48.162922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.923 [2024-11-17 02:49:48.162943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:39.924 [2024-11-17 02:49:48.162963] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.924 [2024-11-17 02:49:48.162982] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4300 is same with the state(6) to be set 00:29:39.924 [2024-11-17 02:49:48.163025] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 00:29:39.924 [2024-11-17 02:49:48.163060] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2f00 (9): Bad file descriptor 00:29:39.924 [2024-11-17 02:49:48.165443] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:39.924 [2024-11-17 02:49:48.165483] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:39.924 [2024-11-17 02:49:48.165515] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:39.924 [2024-11-17 02:49:48.165539] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:39.924 [2024-11-17 02:49:48.165558] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:39.924 [2024-11-17 02:49:48.165578] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:39.924 [2024-11-17 02:49:48.165600] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:39.924 [2024-11-17 02:49:48.165636] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 
00:29:39.924 [2024-11-17 02:49:48.165669] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:39.924 [2024-11-17 02:49:48.165690] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:39.924 [2024-11-17 02:49:48.165719] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:39.924 [2024-11-17 02:49:48.165747] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:39.924 [2024-11-17 02:49:48.165777] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:39.924 [2024-11-17 02:49:48.165808] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:39.924 [2024-11-17 02:49:48.165833] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:39.924 [2024-11-17 02:49:48.165852] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:39.924 [2024-11-17 02:49:48.165871] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:39.924 [2024-11-17 02:49:48.165891] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:39.924 [2024-11-17 02:49:48.165924] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:39.924 [2024-11-17 02:49:48.165947] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same 
with the state(6) to be set 00:29:39.924 [2024-11-17 02:49:48.165966] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:39.924 [2024-11-17 02:49:48.165995] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:39.924 [2024-11-17 02:49:48.166024] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:39.924 [2024-11-17 02:49:48.166056] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:39.924 [2024-11-17 02:49:48.166078] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:39.924 [2024-11-17 02:49:48.166122] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:39.924 [2024-11-17 02:49:48.166145] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:39.924 [2024-11-17 02:49:48.166165] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:39.924 [2024-11-17 02:49:48.166189] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:39.924 [2024-11-17 02:49:48.166221] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:39.924 [2024-11-17 02:49:48.166253] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:39.924 [2024-11-17 02:49:48.166265] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:29:39.924 [2024-11-17 02:49:48.166274] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:39.924 [2024-11-17 02:49:48.166308] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:39.924 [2024-11-17 02:49:48.166333] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:39.924 [2024-11-17 02:49:48.166353] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:39.924 [2024-11-17 02:49:48.166377] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:39.924 [2024-11-17 02:49:48.166409] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:39.924 [2024-11-17 02:49:48.166442] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:39.924 [2024-11-17 02:49:48.166471] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:39.924 [2024-11-17 02:49:48.166492] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:39.924 [2024-11-17 02:49:48.166518] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:39.924 [2024-11-17 02:49:48.166553] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:39.924 [2024-11-17 02:49:48.166576] tcp.c:1773:nvmf_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:39.924 [2024-11-17 02:49:48.166602] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:39.924 [2024-11-17 02:49:48.166631] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:39.924 [2024-11-17 02:49:48.166668] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:39.924 [2024-11-17 02:49:48.166709] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:39.924 [2024-11-17 02:49:48.166745] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:39.924 [2024-11-17 02:49:48.166778] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:39.924 [2024-11-17 02:49:48.166816] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:39.924 [2024-11-17 02:49:48.166845] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:39.924 [2024-11-17 02:49:48.166877] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:39.924 [2024-11-17 02:49:48.166899] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:39.924 [2024-11-17 02:49:48.166926] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:39.924 [2024-11-17 02:49:48.166956] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:39.924 [2024-11-17 02:49:48.166978] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:39.924 [2024-11-17 02:49:48.167001] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:39.924 [2024-11-17 02:49:48.167030] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:39.924 [2024-11-17 02:49:48.167062] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:39.924 [2024-11-17 02:49:48.167106] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:39.924 [2024-11-17 02:49:48.167134] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:39.924 [2024-11-17 02:49:48.167165] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:39.924 [2024-11-17 02:49:48.167186] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:39.924 [2024-11-17 02:49:48.167858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.924 [2024-11-17 02:49:48.167921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7f00 with addr=10.0.0.2, port=4420 00:29:39.924 [2024-11-17 02:49:48.167949] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7f00 is same with the state(6) to be set 00:29:39.924 [2024-11-17 02:49:48.168091] 
posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.924 [2024-11-17 02:49:48.168141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:29:39.924 [2024-11-17 02:49:48.168165] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:29:39.924 [2024-11-17 02:49:48.168786] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:39.924 [2024-11-17 02:49:48.169881] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7f00 (9): Bad file descriptor 00:29:39.924 [2024-11-17 02:49:48.169937] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:29:39.924 [2024-11-17 02:49:48.169973] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:39.924 [2024-11-17 02:49:48.170010] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:39.924 [2024-11-17 02:49:48.170031] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:39.924 [2024-11-17 02:49:48.170050] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:39.924 [2024-11-17 02:49:48.170068] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:39.925 [2024-11-17 02:49:48.170106] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:39.925 [2024-11-17 02:49:48.170129] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000009080 is same with the state(6) to be set 00:29:39.925 [2024-11-17 02:49:48.170148] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:39.925 [2024-11-17 02:49:48.170167] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:39.925 [2024-11-17 02:49:48.170186] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:39.925 [2024-11-17 02:49:48.170189] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:39.925 [2024-11-17 02:49:48.170205] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:39.925 [2024-11-17 02:49:48.170224] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:39.925 [2024-11-17 02:49:48.170243] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:39.925 [2024-11-17 02:49:48.170262] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:39.925 [2024-11-17 02:49:48.170281] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:39.925 [2024-11-17 02:49:48.170277] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:39.925 [2024-11-17 02:49:48.170305] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:39.925 [2024-11-17 02:49:48.170324] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:39.925
[2024-11-17 02:49:48.170343] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:39.925 [2024-11-17 02:49:48.170368] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:39.925 [2024-11-17 02:49:48.170395] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:39.925 [2024-11-17 02:49:48.170413] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:39.925 [2024-11-17 02:49:48.170432] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:39.925 [2024-11-17 02:49:48.170456] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:39.925 [2024-11-17 02:49:48.170490] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:39.925 [2024-11-17 02:49:48.170524] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:39.925 [2024-11-17 02:49:48.170549] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:39.925 [2024-11-17 02:49:48.170568] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:39.925 [2024-11-17 02:49:48.170600] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:39.925 [2024-11-17 02:49:48.170631] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the 
state(6) to be set 00:29:39.925 [2024-11-17 02:49:48.170662] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:39.925 [2024-11-17 02:49:48.170684] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:39.925 [2024-11-17 02:49:48.170704] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:39.925 [2024-11-17 02:49:48.170723] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:39.925 [2024-11-17 02:49:48.170741] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:39.925 [2024-11-17 02:49:48.170760] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:39.925 [2024-11-17 02:49:48.170780] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:39.925 [2024-11-17 02:49:48.170798] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:39.925 [2024-11-17 02:49:48.170817] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:39.925 [2024-11-17 02:49:48.170836] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:39.925 [2024-11-17 02:49:48.170854] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:39.925 [2024-11-17 02:49:48.170874] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000009080 is same with the state(6) to be set 00:29:39.925 [2024-11-17 02:49:48.170893] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:39.925 [2024-11-17 02:49:48.170911] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:39.925 [2024-11-17 02:49:48.170930] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:39.925 [2024-11-17 02:49:48.170923] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:29:39.925 [2024-11-17 02:49:48.170957] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:39.925 [2024-11-17 02:49:48.170960] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:29:39.925 [2024-11-17 02:49:48.170977] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:39.925 [2024-11-17 02:49:48.170987] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:29:39.925 [2024-11-17 02:49:48.170997] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:39.925 [2024-11-17 02:49:48.171013] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:29:39.925 [2024-11-17 02:49:48.171016] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:39.925
[2024-11-17 02:49:48.171036] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:39.925 [2024-11-17 02:49:48.171038] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:29:39.925 [2024-11-17 02:49:48.171055] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:39.925 [2024-11-17 02:49:48.171057] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:29:39.925 [2024-11-17 02:49:48.171076] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:39.925 [2024-11-17 02:49:48.171088] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:29:39.925 [2024-11-17 02:49:48.171119] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:29:39.925 [2024-11-17 02:49:48.171119] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:39.925
[2024-11-17 02:49:48.171142] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:39.925 [2024-11-17 02:49:48.171161] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:39.925 [2024-11-17 02:49:48.171180] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:39.925 [2024-11-17 02:49:48.171199] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:39.925 [2024-11-17 02:49:48.171218] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:39.925 [2024-11-17 02:49:48.171236] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:39.925 [2024-11-17 02:49:48.171255] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:39.925 [2024-11-17 02:49:48.171273] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:39.925 [2024-11-17 02:49:48.171283] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:39.925 [2024-11-17 02:49:48.171291] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:39.925 [2024-11-17 02:49:48.171312] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:39.925 [2024-11-17 02:49:48.171335] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the
state(6) to be set 00:29:39.925 [2024-11-17 02:49:48.172159] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:39.925 [2024-11-17 02:49:48.172356] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4d00 (9): Bad file descriptor 00:29:39.925 [2024-11-17 02:49:48.172455] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:39.925 [2024-11-17 02:49:48.172489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.925 [2024-11-17 02:49:48.172513] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:39.925 [2024-11-17 02:49:48.172535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.925 [2024-11-17 02:49:48.172556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:39.925 [2024-11-17 02:49:48.172577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.925 [2024-11-17 02:49:48.172598] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:39.925 [2024-11-17 02:49:48.172619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.925 [2024-11-17 02:49:48.172639] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f5700 is same with the state(6) to be set 00:29:39.925 [2024-11-17 02:49:48.172712] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 
cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:39.925 [2024-11-17 02:49:48.172741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.926 [2024-11-17 02:49:48.172764] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:39.926 [2024-11-17 02:49:48.172785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.926 [2024-11-17 02:49:48.172807] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:39.926 [2024-11-17 02:49:48.172827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.926 [2024-11-17 02:49:48.172848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:39.926 [2024-11-17 02:49:48.172869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.926 [2024-11-17 02:49:48.172889] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f6100 is same with the state(6) to be set 00:29:39.926 [2024-11-17 02:49:48.172946] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4300 (9): Bad file descriptor 00:29:39.926 [2024-11-17 02:49:48.173031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:39.926 [2024-11-17 02:49:48.173062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.926 [2024-11-17 02:49:48.173104] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:39.926 [2024-11-17 02:49:48.173129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.926 [2024-11-17 02:49:48.173157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:39.926 [2024-11-17 02:49:48.173179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.926 [2024-11-17 02:49:48.173201] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:39.926 [2024-11-17 02:49:48.173222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.926 [2024-11-17 02:49:48.173242] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f6b00 is same with the state(6) to be set 00:29:39.926 [2024-11-17 02:49:48.173564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.926 [2024-11-17 02:49:48.173598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.926 [2024-11-17 02:49:48.173642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.926 [2024-11-17 02:49:48.173665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.926 [2024-11-17 02:49:48.173696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.926 [2024-11-17 02:49:48.173719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.926 [2024-11-17 02:49:48.173743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.926 [2024-11-17 02:49:48.173765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.926 [2024-11-17 02:49:48.173790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.926 [2024-11-17 02:49:48.173812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.926 [2024-11-17 02:49:48.173837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.926 [2024-11-17 02:49:48.173860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.926 [2024-11-17 02:49:48.173885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.926 [2024-11-17 02:49:48.173908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.926 [2024-11-17 02:49:48.173932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.926 [2024-11-17 02:49:48.173955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.926 
[2024-11-17 02:49:48.173980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.926 [2024-11-17 02:49:48.174002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.926 [2024-11-17 02:49:48.174027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.926 [2024-11-17 02:49:48.174049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.926 [2024-11-17 02:49:48.174102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.926 [2024-11-17 02:49:48.174128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.926 [2024-11-17 02:49:48.174154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.926 [2024-11-17 02:49:48.174176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.926 [2024-11-17 02:49:48.174201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.926 [2024-11-17 02:49:48.174224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.926 [2024-11-17 02:49:48.174248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.926 [2024-11-17 02:49:48.174271] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.926 [2024-11-17 02:49:48.174296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.926 [2024-11-17 02:49:48.174319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.926 [2024-11-17 02:49:48.174344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.926 [2024-11-17 02:49:48.174366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.926 [2024-11-17 02:49:48.174411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.926 [2024-11-17 02:49:48.174433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.926 [2024-11-17 02:49:48.174457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.926 [2024-11-17 02:49:48.174479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.926 [2024-11-17 02:49:48.174505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.926 [2024-11-17 02:49:48.174528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.926 [2024-11-17 02:49:48.174551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 
nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.926 [2024-11-17 02:49:48.174573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.926 [2024-11-17 02:49:48.174598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.926 [2024-11-17 02:49:48.174620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.926 [2024-11-17 02:49:48.174643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.926 [2024-11-17 02:49:48.174681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.926 [2024-11-17 02:49:48.174707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.926 [2024-11-17 02:49:48.174734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.926 [2024-11-17 02:49:48.174760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.926 [2024-11-17 02:49:48.174782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.926 [2024-11-17 02:49:48.174807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.926 [2024-11-17 02:49:48.174828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:39.926 [2024-11-17 02:49:48.174853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.926 [2024-11-17 02:49:48.174875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.926 [2024-11-17 02:49:48.174898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.926 [2024-11-17 02:49:48.174920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.926 [2024-11-17 02:49:48.174944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.926 [2024-11-17 02:49:48.174966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.926 [2024-11-17 02:49:48.174989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.926 [2024-11-17 02:49:48.175011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.926 [2024-11-17 02:49:48.175036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.926 [2024-11-17 02:49:48.175058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.926 [2024-11-17 02:49:48.175112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.926 [2024-11-17 02:49:48.175138] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.927 [2024-11-17 02:49:48.175163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.927 [2024-11-17 02:49:48.175185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.927 [2024-11-17 02:49:48.175210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.927 [2024-11-17 02:49:48.175231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.927 [2024-11-17 02:49:48.175255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.927 [2024-11-17 02:49:48.175277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.927 [2024-11-17 02:49:48.175302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.927 [2024-11-17 02:49:48.175323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.927 [2024-11-17 02:49:48.175353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.927 [2024-11-17 02:49:48.175375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.927 [2024-11-17 02:49:48.175400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.927 [2024-11-17 02:49:48.175421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.927 [2024-11-17 02:49:48.175462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.927 [2024-11-17 02:49:48.175483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.927 [2024-11-17 02:49:48.175508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.927 [2024-11-17 02:49:48.175528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.927 [2024-11-17 02:49:48.175552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.927 [2024-11-17 02:49:48.175573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.927 [2024-11-17 02:49:48.175597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.927 [2024-11-17 02:49:48.175618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.927 [2024-11-17 02:49:48.175642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.927 [2024-11-17 02:49:48.175663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:39.927 [2024-11-17 02:49:48.175686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.927 [2024-11-17 02:49:48.175708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.927 [2024-11-17 02:49:48.175732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.927 [2024-11-17 02:49:48.175753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.927 [2024-11-17 02:49:48.175776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.927 [2024-11-17 02:49:48.175797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.927 [2024-11-17 02:49:48.175821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.927 [2024-11-17 02:49:48.175843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.927 [2024-11-17 02:49:48.175866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.927 [2024-11-17 02:49:48.175887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.927 [2024-11-17 02:49:48.175911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.927 [2024-11-17 
02:49:48.175936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.927 [2024-11-17 02:49:48.175961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.927 [2024-11-17 02:49:48.175982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.927 [2024-11-17 02:49:48.176006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.927 [2024-11-17 02:49:48.176027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.927 [2024-11-17 02:49:48.176051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.927 [2024-11-17 02:49:48.176072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.927 [2024-11-17 02:49:48.176122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.927 [2024-11-17 02:49:48.176145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.927 [2024-11-17 02:49:48.176169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.927 [2024-11-17 02:49:48.176192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.927 [2024-11-17 02:49:48.176216] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.927 [2024-11-17 02:49:48.176238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.927 [2024-11-17 02:49:48.176262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.927 [2024-11-17 02:49:48.176283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.927 [2024-11-17 02:49:48.176308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.927 [2024-11-17 02:49:48.176330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.927 [2024-11-17 02:49:48.176354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.927 [2024-11-17 02:49:48.176376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.927 [2024-11-17 02:49:48.176400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.927 [2024-11-17 02:49:48.176422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.927 [2024-11-17 02:49:48.176447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.927 [2024-11-17 02:49:48.176468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.927 [2024-11-17 02:49:48.176493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.927 [2024-11-17 02:49:48.176514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.927 [2024-11-17 02:49:48.176543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.927 [2024-11-17 02:49:48.176566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.927 [2024-11-17 02:49:48.176606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.927 [2024-11-17 02:49:48.176628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.927 [2024-11-17 02:49:48.176652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.927 [2024-11-17 02:49:48.176674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.927 [2024-11-17 02:49:48.176697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.927 [2024-11-17 02:49:48.176718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.927 [2024-11-17 02:49:48.176738] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fa200 is same with the state(6) to be set 00:29:39.927 [2024-11-17 
02:49:48.178310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.927 [2024-11-17 02:49:48.178344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.928 [2024-11-17 02:49:48.178376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.928 [2024-11-17 02:49:48.178399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.928 [2024-11-17 02:49:48.178424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.928 [2024-11-17 02:49:48.178447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.928 [2024-11-17 02:49:48.178472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.928 [2024-11-17 02:49:48.178495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.928 [2024-11-17 02:49:48.178520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.928 [2024-11-17 02:49:48.178543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.928 [2024-11-17 02:49:48.178569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.928 [2024-11-17 02:49:48.178592] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.928 [2024-11-17 02:49:48.178634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.928 [2024-11-17 02:49:48.178656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.928 [2024-11-17 02:49:48.178681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.928 [2024-11-17 02:49:48.178703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.928 [2024-11-17 02:49:48.178727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.928 [2024-11-17 02:49:48.178754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.928 [2024-11-17 02:49:48.178780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.928 [2024-11-17 02:49:48.178801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.928 [2024-11-17 02:49:48.178826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.928 [2024-11-17 02:49:48.178848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.928 [2024-11-17 02:49:48.178873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.928 [2024-11-17 02:49:48.178895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.928 [2024-11-17 02:49:48.178919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.928 [2024-11-17 02:49:48.178940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.928 [2024-11-17 02:49:48.178965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.928 [2024-11-17 02:49:48.178986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.928 [2024-11-17 02:49:48.179011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.928 [2024-11-17 02:49:48.179032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.928 [2024-11-17 02:49:48.179057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.928 [2024-11-17 02:49:48.179078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.928 [2024-11-17 02:49:48.179125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.928 [2024-11-17 02:49:48.179150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:39.928 [2024-11-17 02:49:48.179178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.928 [2024-11-17 02:49:48.179200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.928 [2024-11-17 02:49:48.179225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.928 [2024-11-17 02:49:48.179248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.928 [2024-11-17 02:49:48.179272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.928 [2024-11-17 02:49:48.179295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.928 [2024-11-17 02:49:48.179320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.928 [2024-11-17 02:49:48.179342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.928 [2024-11-17 02:49:48.179387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.928 [2024-11-17 02:49:48.179411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.928 [2024-11-17 02:49:48.179436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.928 [2024-11-17 02:49:48.179459] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.928 [2024-11-17 02:49:48.179484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.928 [2024-11-17 02:49:48.179506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.928 [2024-11-17 02:49:48.179530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.928 [2024-11-17 02:49:48.179552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.928 [2024-11-17 02:49:48.179577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.928 [2024-11-17 02:49:48.179614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.928 [2024-11-17 02:49:48.179639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.928 [2024-11-17 02:49:48.179661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.928 [2024-11-17 02:49:48.179685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.928 [2024-11-17 02:49:48.179706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.928 [2024-11-17 02:49:48.179730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.928 [2024-11-17 02:49:48.179752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.928 [2024-11-17 02:49:48.179776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.928 [2024-11-17 02:49:48.179798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.928 [2024-11-17 02:49:48.179822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.928 [2024-11-17 02:49:48.179844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.928 [2024-11-17 02:49:48.179867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.928 [2024-11-17 02:49:48.179888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.928 [2024-11-17 02:49:48.179913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.928 [2024-11-17 02:49:48.179934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.928 [2024-11-17 02:49:48.179958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.928 [2024-11-17 02:49:48.179983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:39.928 [2024-11-17 02:49:48.180008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.928 [2024-11-17 02:49:48.180031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.928 [2024-11-17 02:49:48.180055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.928 [2024-11-17 02:49:48.180092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.928 [2024-11-17 02:49:48.180141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.928 [2024-11-17 02:49:48.180164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.928 [2024-11-17 02:49:48.180201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.928 [2024-11-17 02:49:48.180223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.928 [2024-11-17 02:49:48.180248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.928 [2024-11-17 02:49:48.180270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.928 [2024-11-17 02:49:48.180294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.928 [2024-11-17 
02:49:48.180316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.928 [2024-11-17 02:49:48.180341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.929 [2024-11-17 02:49:48.180363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.929 [2024-11-17 02:49:48.180388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.929 [2024-11-17 02:49:48.180410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.929 [2024-11-17 02:49:48.180435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.929 [2024-11-17 02:49:48.180457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.929 [2024-11-17 02:49:48.180481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.929 [2024-11-17 02:49:48.180514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.929 [2024-11-17 02:49:48.180539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.929 [2024-11-17 02:49:48.180560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.929 [2024-11-17 02:49:48.180585] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.929 [2024-11-17 02:49:48.180608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.929 [2024-11-17 02:49:48.180637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.929 [2024-11-17 02:49:48.180659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.929 [2024-11-17 02:49:48.180684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.929 [2024-11-17 02:49:48.180706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.929 [2024-11-17 02:49:48.180731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.929 [2024-11-17 02:49:48.180752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.929 [2024-11-17 02:49:48.180777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.929 [2024-11-17 02:49:48.180798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.929 [2024-11-17 02:49:48.180823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.929 [2024-11-17 02:49:48.180845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:39.929 [2024-11-17 02:49:48.180870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.929 [2024-11-17 02:49:48.180892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated READ / ABORTED - SQ DELETION (00/08) completion pairs for cid:52-63, lba:23040-24448 ...]
00:29:39.929 [2024-11-17 02:49:48.181485] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fa480 is same with the state(6) to be set
00:29:39.929 [2024-11-17 02:49:48.183179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.929 [2024-11-17 02:49:48.183215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:39.929 [2024-11-17 02:49:48.183272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.929 [2024-11-17 02:49:48.183299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:39.929 [2024-11-17 02:49:48.183328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.929 [2024-11-17 02:49:48.183350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:39.929 [2024-11-17 02:49:48.183376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6
nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.929 [2024-11-17 02:49:48.183414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated READ / ABORTED - SQ DELETION (00/08) completion pairs for cid:7-62, lba:17280-24320 ...]
00:29:39.931 [2024-11-17 02:49:48.186193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED -
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:39.931 [2024-11-17 02:49:48.186216] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fb380 is same with the state(6) to be set
00:29:39.931 [2024-11-17 02:49:48.186531] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:29:39.931 [2024-11-17 02:49:48.186574] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:29:39.931 [2024-11-17 02:49:48.186730] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f5700 (9): Bad file descriptor
00:29:39.931 [2024-11-17 02:49:48.186785] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f6100 (9): Bad file descriptor
00:29:39.931 [2024-11-17 02:49:48.186865] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f6b00 (9): Bad file descriptor
00:29:39.931 [2024-11-17 02:49:48.186966] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:29:39.931 [2024-11-17 02:49:48.186997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:39.931 [2024-11-17 02:49:48.187022] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:29:39.931 [2024-11-17 02:49:48.187043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:39.931 [2024-11-17 02:49:48.187065] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:29:39.931 [2024-11-17 02:49:48.187086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:39.931 [2024-11-17 02:49:48.187120] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:29:39.931 [2024-11-17 02:49:48.187142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:39.931 [2024-11-17 02:49:48.187162] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(6) to be set
00:29:39.931 [2024-11-17 02:49:48.188646] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:29:39.931 [2024-11-17 02:49:48.188698] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor
00:29:39.931 [2024-11-17 02:49:48.188936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.931 [2024-11-17 02:49:48.188978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:29:39.931 [2024-11-17 02:49:48.189008] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2f00 is same with the state(6) to be set
00:29:39.931 [2024-11-17 02:49:48.189127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.931 [2024-11-17 02:49:48.189164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f3900 with addr=10.0.0.2, port=4420
00:29:39.931 [2024-11-17 02:49:48.189188] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f3900 is same with the state(6) to be set
00:29:39.931 [2024-11-17 02:49:48.190315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK
TRANSPORT 0x0
00:29:39.931 [2024-11-17 02:49:48.190350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated READ / ABORTED - SQ DELETION (00/08) completion pairs for cid:5-37, lba:17024-21120 ...]
00:29:39.932 [2024-11-17 02:49:48.192007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38
nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.932 [2024-11-17 02:49:48.192029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.932 [2024-11-17 02:49:48.192054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.932 [2024-11-17 02:49:48.192076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.932 [2024-11-17 02:49:48.192113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.932 [2024-11-17 02:49:48.192138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.932 [2024-11-17 02:49:48.192164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.932 [2024-11-17 02:49:48.192186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.932 [2024-11-17 02:49:48.192210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.932 [2024-11-17 02:49:48.192232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.932 [2024-11-17 02:49:48.192256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.932 [2024-11-17 02:49:48.192278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:39.932 [2024-11-17 02:49:48.192303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.932 [2024-11-17 02:49:48.192326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.932 [2024-11-17 02:49:48.192351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.932 [2024-11-17 02:49:48.192373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.932 [2024-11-17 02:49:48.192397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.932 [2024-11-17 02:49:48.192419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.932 [2024-11-17 02:49:48.192443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.932 [2024-11-17 02:49:48.192465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.932 [2024-11-17 02:49:48.192489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.932 [2024-11-17 02:49:48.192512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.932 [2024-11-17 02:49:48.192536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.932 [2024-11-17 02:49:48.192559] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.932 [2024-11-17 02:49:48.192583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.932 [2024-11-17 02:49:48.192605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.932 [2024-11-17 02:49:48.192630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.932 [2024-11-17 02:49:48.192652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.932 [2024-11-17 02:49:48.192677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.932 [2024-11-17 02:49:48.192703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.932 [2024-11-17 02:49:48.192728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.932 [2024-11-17 02:49:48.192751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.932 [2024-11-17 02:49:48.192776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.932 [2024-11-17 02:49:48.192798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.932 [2024-11-17 02:49:48.192823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.932 [2024-11-17 02:49:48.192845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.932 [2024-11-17 02:49:48.192869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.932 [2024-11-17 02:49:48.192892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.932 [2024-11-17 02:49:48.192917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.932 [2024-11-17 02:49:48.192939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.932 [2024-11-17 02:49:48.192963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.932 [2024-11-17 02:49:48.192985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.932 [2024-11-17 02:49:48.193009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.932 [2024-11-17 02:49:48.193031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.932 [2024-11-17 02:49:48.193056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.932 [2024-11-17 02:49:48.193078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:39.932 [2024-11-17 02:49:48.193108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.933 [2024-11-17 02:49:48.193132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.933 [2024-11-17 02:49:48.193157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.933 [2024-11-17 02:49:48.193180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.933 [2024-11-17 02:49:48.193205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.933 [2024-11-17 02:49:48.193227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.933 [2024-11-17 02:49:48.193252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.933 [2024-11-17 02:49:48.193274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.933 [2024-11-17 02:49:48.193305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.933 [2024-11-17 02:49:48.193328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.933 [2024-11-17 02:49:48.193352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.933 [2024-11-17 
02:49:48.193374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.933 [2024-11-17 02:49:48.193399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.933 [2024-11-17 02:49:48.193421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.933 [2024-11-17 02:49:48.193444] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fa700 is same with the state(6) to be set 00:29:39.933 [2024-11-17 02:49:48.195080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.933 [2024-11-17 02:49:48.195141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.933 [2024-11-17 02:49:48.195177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.933 [2024-11-17 02:49:48.195200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.933 [2024-11-17 02:49:48.195225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.933 [2024-11-17 02:49:48.195247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.933 [2024-11-17 02:49:48.195272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.933 [2024-11-17 02:49:48.195295] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.933 [2024-11-17 02:49:48.195319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.933 [2024-11-17 02:49:48.195342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.933 [2024-11-17 02:49:48.195366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.933 [2024-11-17 02:49:48.195388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.933 [2024-11-17 02:49:48.195413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.933 [2024-11-17 02:49:48.195435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.933 [2024-11-17 02:49:48.195459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.933 [2024-11-17 02:49:48.195481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.933 [2024-11-17 02:49:48.195505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.933 [2024-11-17 02:49:48.195528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.933 [2024-11-17 02:49:48.195557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:39.933 [2024-11-17 02:49:48.195580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.933 [2024-11-17 02:49:48.195604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.933 [2024-11-17 02:49:48.195626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.933 [2024-11-17 02:49:48.195650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.933 [2024-11-17 02:49:48.195673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.933 [2024-11-17 02:49:48.195698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.933 [2024-11-17 02:49:48.195737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.933 [2024-11-17 02:49:48.195764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.933 [2024-11-17 02:49:48.195786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.933 [2024-11-17 02:49:48.195811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.933 [2024-11-17 02:49:48.195833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.933 [2024-11-17 
02:49:48.195873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.933 [2024-11-17 02:49:48.195894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.933 [2024-11-17 02:49:48.195919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.933 [2024-11-17 02:49:48.195941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.933 [2024-11-17 02:49:48.195965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.933 [2024-11-17 02:49:48.195986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.933 [2024-11-17 02:49:48.196009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.933 [2024-11-17 02:49:48.196030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.933 [2024-11-17 02:49:48.196055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.933 [2024-11-17 02:49:48.196076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.933 [2024-11-17 02:49:48.196123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.933 [2024-11-17 02:49:48.196149] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.933 [2024-11-17 02:49:48.196174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.933 [2024-11-17 02:49:48.196200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.933 [2024-11-17 02:49:48.196226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.933 [2024-11-17 02:49:48.196248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.933 [2024-11-17 02:49:48.196272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.933 [2024-11-17 02:49:48.196293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.933 [2024-11-17 02:49:48.196318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.933 [2024-11-17 02:49:48.196340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.933 [2024-11-17 02:49:48.196365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.933 [2024-11-17 02:49:48.196387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.933 [2024-11-17 02:49:48.196426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 
nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.933 [2024-11-17 02:49:48.196449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.933 [2024-11-17 02:49:48.196473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.933 [2024-11-17 02:49:48.196495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.933 [2024-11-17 02:49:48.196519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.933 [2024-11-17 02:49:48.196540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.933 [2024-11-17 02:49:48.196564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.933 [2024-11-17 02:49:48.196585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.933 [2024-11-17 02:49:48.196609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.933 [2024-11-17 02:49:48.196630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.933 [2024-11-17 02:49:48.196653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.933 [2024-11-17 02:49:48.196674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:39.933 [2024-11-17 02:49:48.196698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.934 [2024-11-17 02:49:48.196719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.934 [2024-11-17 02:49:48.196743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.934 [2024-11-17 02:49:48.196765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.934 [2024-11-17 02:49:48.196792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.934 [2024-11-17 02:49:48.196814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.934 [2024-11-17 02:49:48.196838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.934 [2024-11-17 02:49:48.196859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.934 [2024-11-17 02:49:48.196882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.934 [2024-11-17 02:49:48.196904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.934 [2024-11-17 02:49:48.196927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.934 [2024-11-17 02:49:48.196949] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.934 [2024-11-17 02:49:48.196973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.934 [2024-11-17 02:49:48.196994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.934 [2024-11-17 02:49:48.197018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.934 [2024-11-17 02:49:48.197039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.934 [2024-11-17 02:49:48.197062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.934 [2024-11-17 02:49:48.197084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.934 [2024-11-17 02:49:48.197130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.934 [2024-11-17 02:49:48.197154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.934 [2024-11-17 02:49:48.197180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.934 [2024-11-17 02:49:48.197202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.934 [2024-11-17 02:49:48.197227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.934 [2024-11-17 02:49:48.197249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.934 [2024-11-17 02:49:48.197273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.934 [2024-11-17 02:49:48.197296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.934 [2024-11-17 02:49:48.197321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.934 [2024-11-17 02:49:48.197343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.934 [2024-11-17 02:49:48.197367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.934 [2024-11-17 02:49:48.197393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.934 [2024-11-17 02:49:48.197436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.934 [2024-11-17 02:49:48.197457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.934 [2024-11-17 02:49:48.197481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.934 [2024-11-17 02:49:48.197502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:39.934 [2024-11-17 02:49:48.197526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.934 [2024-11-17 02:49:48.197548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.934 [2024-11-17 02:49:48.197571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.934 [2024-11-17 02:49:48.197593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.934 [2024-11-17 02:49:48.197617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.934 [2024-11-17 02:49:48.197639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.934 [2024-11-17 02:49:48.197662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.934 [2024-11-17 02:49:48.197684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.934 [2024-11-17 02:49:48.197708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.934 [2024-11-17 02:49:48.197729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.934 [2024-11-17 02:49:48.197752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.934 [2024-11-17 
02:49:48.197774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.934 [2024-11-17 02:49:48.197797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.934 [2024-11-17 02:49:48.197819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.934 [2024-11-17 02:49:48.197842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.934 [2024-11-17 02:49:48.197864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.934 [2024-11-17 02:49:48.197887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.934 [2024-11-17 02:49:48.197909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.934 [2024-11-17 02:49:48.197933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.934 [2024-11-17 02:49:48.197954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.934 [2024-11-17 02:49:48.197978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.934 [2024-11-17 02:49:48.198003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.934 [2024-11-17 02:49:48.198028] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.934 [2024-11-17 02:49:48.198050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.934 [2024-11-17 02:49:48.198073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.934 [2024-11-17 02:49:48.198119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.934 [2024-11-17 02:49:48.198145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.934 [2024-11-17 02:49:48.198168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.934 [2024-11-17 02:49:48.198193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.934 [2024-11-17 02:49:48.198215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.934 [2024-11-17 02:49:48.198237] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fa980 is same with the state(6) to be set 00:29:39.934 [2024-11-17 02:49:48.199855] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:39.934 [2024-11-17 02:49:48.199962] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:29:39.934 [2024-11-17 02:49:48.200005] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:29:39.934 [2024-11-17 02:49:48.200033] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:29:39.934 [2024-11-17 02:49:48.200076] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:29:39.934 [2024-11-17 02:49:48.200246] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2f00 (9): Bad file descriptor 00:29:39.934 [2024-11-17 02:49:48.200288] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f3900 (9): Bad file descriptor 00:29:39.934 [2024-11-17 02:49:48.200427] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:29:39.934 [2024-11-17 02:49:48.200465] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 00:29:39.934 [2024-11-17 02:49:48.201648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.934 [2024-11-17 02:49:48.201700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:29:39.934 [2024-11-17 02:49:48.201726] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(6) to be set 00:29:39.934 [2024-11-17 02:49:48.201846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.934 [2024-11-17 02:49:48.201882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:29:39.934 [2024-11-17 02:49:48.201905] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:29:39.934 [2024-11-17 02:49:48.202034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.934 [2024-11-17 02:49:48.202068] 
nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7f00 with addr=10.0.0.2, port=4420 00:29:39.935 [2024-11-17 02:49:48.202105] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7f00 is same with the state(6) to be set 00:29:39.935 [2024-11-17 02:49:48.202239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.935 [2024-11-17 02:49:48.202273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f4300 with addr=10.0.0.2, port=4420 00:29:39.935 [2024-11-17 02:49:48.202306] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4300 is same with the state(6) to be set 00:29:39.935 [2024-11-17 02:49:48.202413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.935 [2024-11-17 02:49:48.202447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f4d00 with addr=10.0.0.2, port=4420 00:29:39.935 [2024-11-17 02:49:48.202470] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4d00 is same with the state(6) to be set 00:29:39.935 [2024-11-17 02:49:48.202506] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:29:39.935 [2024-11-17 02:49:48.202527] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:29:39.935 [2024-11-17 02:49:48.202550] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:29:39.935 [2024-11-17 02:49:48.202574] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 
00:29:39.935 [2024-11-17 02:49:48.202607] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:29:39.935 [2024-11-17 02:49:48.202627] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:29:39.935 [2024-11-17 02:49:48.202645] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:29:39.935 [2024-11-17 02:49:48.202663] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:29:39.935 [2024-11-17 02:49:48.203791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.935 [2024-11-17 02:49:48.203824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.935 [2024-11-17 02:49:48.203866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.935 [2024-11-17 02:49:48.203892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.935 [2024-11-17 02:49:48.203919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.935 [2024-11-17 02:49:48.203942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.935 [2024-11-17 02:49:48.203967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.935 [2024-11-17 02:49:48.203989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.935 [2024-11-17 02:49:48.204014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.935 [2024-11-17 02:49:48.204036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.935 [2024-11-17 02:49:48.204061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.935 [2024-11-17 02:49:48.204083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.935 [2024-11-17 02:49:48.204140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.935 [2024-11-17 02:49:48.204166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.935 [2024-11-17 02:49:48.204191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.935 [2024-11-17 02:49:48.204214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.935 [2024-11-17 02:49:48.204239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.935 [2024-11-17 02:49:48.204262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.935 [2024-11-17 02:49:48.204287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.935 [2024-11-17 
02:49:48.204309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.935 [2024-11-17 02:49:48.204334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.935 [2024-11-17 02:49:48.204356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.935 [2024-11-17 02:49:48.204380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.935 [2024-11-17 02:49:48.204403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.935 [2024-11-17 02:49:48.204444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.935 [2024-11-17 02:49:48.204465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.935 [2024-11-17 02:49:48.204490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.935 [2024-11-17 02:49:48.204513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.935 [2024-11-17 02:49:48.204536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.935 [2024-11-17 02:49:48.204558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.935 [2024-11-17 02:49:48.204581] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.935 [2024-11-17 02:49:48.204602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.935 [2024-11-17 02:49:48.204626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.935 [2024-11-17 02:49:48.204647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.935 [2024-11-17 02:49:48.204671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.935 [2024-11-17 02:49:48.204692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.935 [2024-11-17 02:49:48.204715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.935 [2024-11-17 02:49:48.204741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.935 [2024-11-17 02:49:48.204766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.935 [2024-11-17 02:49:48.204787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.935 [2024-11-17 02:49:48.204811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.935 [2024-11-17 02:49:48.204833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.935 [2024-11-17 02:49:48.204857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.935 [2024-11-17 02:49:48.204878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.935 [2024-11-17 02:49:48.204901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.935 [2024-11-17 02:49:48.204922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.935 [2024-11-17 02:49:48.204946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.935 [2024-11-17 02:49:48.204967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.935 [2024-11-17 02:49:48.204990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.935 [2024-11-17 02:49:48.205011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.935 [2024-11-17 02:49:48.205034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.935 [2024-11-17 02:49:48.205055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.935 [2024-11-17 02:49:48.205079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.935 
[2024-11-17 02:49:48.205122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.935 [2024-11-17 02:49:48.205150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.935 [2024-11-17 02:49:48.205172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.935 [2024-11-17 02:49:48.205197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.935 [2024-11-17 02:49:48.205219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.935 [2024-11-17 02:49:48.205244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.935 [2024-11-17 02:49:48.205266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.935 [2024-11-17 02:49:48.205290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.935 [2024-11-17 02:49:48.205320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.935 [2024-11-17 02:49:48.205350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.935 [2024-11-17 02:49:48.205372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.936 [2024-11-17 02:49:48.205396] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.936 [2024-11-17 02:49:48.205432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.936 [2024-11-17 02:49:48.205457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.936 [2024-11-17 02:49:48.205478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.936 [2024-11-17 02:49:48.205502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.936 [2024-11-17 02:49:48.205523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.936 [2024-11-17 02:49:48.205546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.936 [2024-11-17 02:49:48.205567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.936 [2024-11-17 02:49:48.205591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.936 [2024-11-17 02:49:48.205617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.936 [2024-11-17 02:49:48.205640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.936 [2024-11-17 02:49:48.205662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.936 [2024-11-17 02:49:48.205685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.936 [2024-11-17 02:49:48.205706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.936 [2024-11-17 02:49:48.205730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.936 [2024-11-17 02:49:48.205751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.936 [2024-11-17 02:49:48.205775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.936 [2024-11-17 02:49:48.205796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.936 [2024-11-17 02:49:48.205819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.936 [2024-11-17 02:49:48.205841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.936 [2024-11-17 02:49:48.205864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.936 [2024-11-17 02:49:48.205885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.936 [2024-11-17 02:49:48.205908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:39.936 [2024-11-17 02:49:48.205933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.936 [2024-11-17 02:49:48.205958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.936 [2024-11-17 02:49:48.205979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.936 [2024-11-17 02:49:48.206002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.936 [2024-11-17 02:49:48.206024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.936 [2024-11-17 02:49:48.206047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.936 [2024-11-17 02:49:48.206068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.936 [2024-11-17 02:49:48.206116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.936 [2024-11-17 02:49:48.206139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.936 [2024-11-17 02:49:48.206163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.936 [2024-11-17 02:49:48.206185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.936 [2024-11-17 02:49:48.206208] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.936 [2024-11-17 02:49:48.206231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.936 [2024-11-17 02:49:48.206256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.936 [2024-11-17 02:49:48.206278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.936 [2024-11-17 02:49:48.206311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.936 [2024-11-17 02:49:48.206333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.936 [2024-11-17 02:49:48.206357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.936 [2024-11-17 02:49:48.206379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.936 [2024-11-17 02:49:48.206419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.936 [2024-11-17 02:49:48.206441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.936 [2024-11-17 02:49:48.206465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.936 [2024-11-17 02:49:48.206486] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.936 [2024-11-17 02:49:48.206510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.936 [2024-11-17 02:49:48.206531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.936 [2024-11-17 02:49:48.206559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.936 [2024-11-17 02:49:48.206580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.936 [2024-11-17 02:49:48.206611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.936 [2024-11-17 02:49:48.206632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.936 [2024-11-17 02:49:48.206656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.936 [2024-11-17 02:49:48.206677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.936 [2024-11-17 02:49:48.206701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.936 [2024-11-17 02:49:48.206738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.936 [2024-11-17 02:49:48.206777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.936 [2024-11-17 02:49:48.206800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION entry pairs repeat for cid:61-63 (lba:24192-24448, len:128) ...]
00:29:39.936 [2024-11-17 02:49:48.206959] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fac00 is same with the state(6) to be set
00:29:39.936 [2024-11-17 02:49:48.208507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.936 [2024-11-17 02:49:48.208539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION entry pairs repeat for cid:1-63 (lba:16512-24448 in steps of 128, len:128) ...]
00:29:39.938 [2024-11-17 02:49:48.211633] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fae80 is same with the state(6) to be set
00:29:39.938 [2024-11-17 02:49:48.213151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.938 [2024-11-17 02:49:48.213182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION entry pairs repeat for cid:1-49 (lba:16512-22656 in steps of 128, len:128) ...]
00:29:39.939 [2024-11-17
02:49:48.215554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.939 [2024-11-17 02:49:48.215575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.939 [2024-11-17 02:49:48.215598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.939 [2024-11-17 02:49:48.215619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.939 [2024-11-17 02:49:48.215642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.939 [2024-11-17 02:49:48.215667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.939 [2024-11-17 02:49:48.215692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.939 [2024-11-17 02:49:48.215713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.939 [2024-11-17 02:49:48.215736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.939 [2024-11-17 02:49:48.215757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.939 [2024-11-17 02:49:48.215780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.939 [2024-11-17 02:49:48.215801] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.940 [2024-11-17 02:49:48.215834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.940 [2024-11-17 02:49:48.215855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.940 [2024-11-17 02:49:48.215878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.940 [2024-11-17 02:49:48.215900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.940 [2024-11-17 02:49:48.215923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.940 [2024-11-17 02:49:48.215944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.940 [2024-11-17 02:49:48.215982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.940 [2024-11-17 02:49:48.216004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.940 [2024-11-17 02:49:48.216027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.940 [2024-11-17 02:49:48.216048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.940 [2024-11-17 02:49:48.216087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 
nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.940 [2024-11-17 02:49:48.216119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.940 [2024-11-17 02:49:48.216146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.940 [2024-11-17 02:49:48.216168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.940 [2024-11-17 02:49:48.216191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.940 [2024-11-17 02:49:48.216213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.940 [2024-11-17 02:49:48.216234] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fb100 is same with the state(6) to be set 00:29:39.940 [2024-11-17 02:49:48.221035] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:29:39.940 [2024-11-17 02:49:48.221089] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:29:39.940 task offset: 8704 on job bdev=Nvme10n1 fails 00:29:39.940 00:29:39.940 Latency(us) 00:29:39.940 [2024-11-17T01:49:48.400Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:39.940 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:39.940 Job: Nvme1n1 ended in about 0.90 seconds with error 00:29:39.940 Verification LBA range: start 0x0 length 0x400 00:29:39.940 Nvme1n1 : 0.90 141.54 8.85 70.77 0.00 297839.38 24855.13 293601.28 00:29:39.940 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:29:39.940 Job: Nvme2n1 ended in about 0.92 seconds with error 00:29:39.940 Verification LBA range: start 0x0 length 0x400 00:29:39.940 Nvme2n1 : 0.92 139.44 8.72 69.72 0.00 295723.61 38059.43 276513.37 00:29:39.940 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:39.940 Job: Nvme3n1 ended in about 0.92 seconds with error 00:29:39.940 Verification LBA range: start 0x0 length 0x400 00:29:39.940 Nvme3n1 : 0.92 138.73 8.67 69.37 0.00 290643.06 21554.06 301368.51 00:29:39.940 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:39.940 Job: Nvme4n1 ended in about 0.93 seconds with error 00:29:39.940 Verification LBA range: start 0x0 length 0x400 00:29:39.940 Nvme4n1 : 0.93 141.24 8.83 68.48 0.00 282104.99 22816.24 293601.28 00:29:39.940 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:39.940 Job: Nvme5n1 ended in about 0.94 seconds with error 00:29:39.940 Verification LBA range: start 0x0 length 0x400 00:29:39.940 Nvme5n1 : 0.94 136.26 8.52 68.13 0.00 282968.87 21554.06 302921.96 00:29:39.940 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:39.940 Job: Nvme6n1 ended in about 0.95 seconds with error 00:29:39.940 Verification LBA range: start 0x0 length 0x400 00:29:39.940 Nvme6n1 : 0.95 135.01 8.44 67.51 0.00 279380.32 22816.24 309135.74 00:29:39.940 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:39.940 Job: Nvme7n1 ended in about 0.95 seconds with error 00:29:39.940 Verification LBA range: start 0x0 length 0x400 00:29:39.940 Nvme7n1 : 0.95 134.36 8.40 67.18 0.00 274413.80 32622.36 309135.74 00:29:39.940 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:39.940 Job: Nvme8n1 ended in about 0.96 seconds with error 00:29:39.940 Verification LBA range: start 0x0 length 0x400 00:29:39.940 Nvme8n1 : 0.96 133.71 8.36 66.86 0.00 269381.53 24466.77 285834.05 00:29:39.940 Job: Nvme9n1 (Core Mask 0x1, 
workload: verify, depth: 64, IO size: 65536) 00:29:39.940 Job: Nvme9n1 ended in about 0.93 seconds with error 00:29:39.940 Verification LBA range: start 0x0 length 0x400 00:29:39.940 Nvme9n1 : 0.93 142.16 8.89 64.62 0.00 252808.66 15340.28 310689.19 00:29:39.940 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:39.940 Job: Nvme10n1 ended in about 0.90 seconds with error 00:29:39.940 Verification LBA range: start 0x0 length 0x400 00:29:39.940 Nvme10n1 : 0.90 75.37 4.71 70.94 0.00 346476.55 21068.61 332437.43 00:29:39.940 [2024-11-17T01:49:48.400Z] =================================================================================================================== 00:29:39.940 [2024-11-17T01:49:48.400Z] Total : 1317.84 82.36 683.57 0.00 285254.32 15340.28 332437.43 00:29:39.940 [2024-11-17 02:49:48.310440] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:39.940 [2024-11-17 02:49:48.310566] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:29:39.940 [2024-11-17 02:49:48.310725] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:29:39.940 [2024-11-17 02:49:48.310769] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:29:39.940 [2024-11-17 02:49:48.310798] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7f00 (9): Bad file descriptor 00:29:39.940 [2024-11-17 02:49:48.310836] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4300 (9): Bad file descriptor 00:29:39.940 [2024-11-17 02:49:48.310865] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4d00 (9): Bad file descriptor 00:29:39.940 [2024-11-17 02:49:48.310993] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 00:29:39.940 [2024-11-17 02:49:48.311030] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:29:39.940 [2024-11-17 02:49:48.311058] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:29:39.940 [2024-11-17 02:49:48.311085] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 00:29:39.940 [2024-11-17 02:49:48.311137] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress. 00:29:39.940 [2024-11-17 02:49:48.311925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.940 [2024-11-17 02:49:48.311976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f5700 with addr=10.0.0.2, port=4420 00:29:39.940 [2024-11-17 02:49:48.312006] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f5700 is same with the state(6) to be set 00:29:39.940 [2024-11-17 02:49:48.312195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.940 [2024-11-17 02:49:48.312231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f6100 with addr=10.0.0.2, port=4420 00:29:39.940 [2024-11-17 02:49:48.312254] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f6100 is same with the state(6) to be set 00:29:39.940 [2024-11-17 02:49:48.312394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.940 [2024-11-17 02:49:48.312430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f6b00 with 
addr=10.0.0.2, port=4420 00:29:39.940 [2024-11-17 02:49:48.312454] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f6b00 is same with the state(6) to be set 00:29:39.940 [2024-11-17 02:49:48.312477] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:29:39.940 [2024-11-17 02:49:48.312498] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:29:39.940 [2024-11-17 02:49:48.312523] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:29:39.940 [2024-11-17 02:49:48.312546] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:29:39.940 [2024-11-17 02:49:48.312570] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:29:39.940 [2024-11-17 02:49:48.312589] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:29:39.940 [2024-11-17 02:49:48.312608] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:29:39.940 [2024-11-17 02:49:48.312627] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:29:39.940 [2024-11-17 02:49:48.312648] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:29:39.940 [2024-11-17 02:49:48.312681] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:29:39.940 [2024-11-17 02:49:48.312700] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 
00:29:39.940 [2024-11-17 02:49:48.312723] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:29:39.940 [2024-11-17 02:49:48.312744] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:29:39.940 [2024-11-17 02:49:48.312761] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:29:39.941 [2024-11-17 02:49:48.312779] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:29:39.941 [2024-11-17 02:49:48.312799] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:29:39.941 [2024-11-17 02:49:48.312819] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:29:39.941 [2024-11-17 02:49:48.312836] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:29:39.941 [2024-11-17 02:49:48.312854] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:29:39.941 [2024-11-17 02:49:48.312872] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:29:39.941 [2024-11-17 02:49:48.312913] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:29:39.941 [2024-11-17 02:49:48.312945] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 
00:29:39.941 [2024-11-17 02:49:48.314664] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:29:39.941 [2024-11-17 02:49:48.314705] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:29:39.941 [2024-11-17 02:49:48.314844] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f5700 (9): Bad file descriptor 00:29:39.941 [2024-11-17 02:49:48.314880] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f6100 (9): Bad file descriptor 00:29:39.941 [2024-11-17 02:49:48.314908] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f6b00 (9): Bad file descriptor 00:29:39.941 [2024-11-17 02:49:48.315344] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:29:39.941 [2024-11-17 02:49:48.315385] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:29:39.941 [2024-11-17 02:49:48.315413] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:29:39.941 [2024-11-17 02:49:48.315438] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:29:39.941 [2024-11-17 02:49:48.315479] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:29:39.941 [2024-11-17 02:49:48.315711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.941 [2024-11-17 02:49:48.315749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f3900 with addr=10.0.0.2, port=4420 00:29:39.941 [2024-11-17 02:49:48.315773] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f3900 is same with the state(6) to be set 00:29:39.941 
[2024-11-17 02:49:48.315885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.941 [2024-11-17 02:49:48.315919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:29:39.941 [2024-11-17 02:49:48.315942] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2f00 is same with the state(6) to be set 00:29:39.941 [2024-11-17 02:49:48.315965] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:29:39.941 [2024-11-17 02:49:48.315984] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:29:39.941 [2024-11-17 02:49:48.316011] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:29:39.941 [2024-11-17 02:49:48.316033] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:29:39.941 [2024-11-17 02:49:48.316055] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:29:39.941 [2024-11-17 02:49:48.316073] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:29:39.941 [2024-11-17 02:49:48.316092] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:29:39.941 [2024-11-17 02:49:48.316123] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 
00:29:39.941 [2024-11-17 02:49:48.316145] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:29:39.941 [2024-11-17 02:49:48.316164] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:29:39.941 [2024-11-17 02:49:48.316183] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:29:39.941 [2024-11-17 02:49:48.316202] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:29:39.941 [2024-11-17 02:49:48.316415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.941 [2024-11-17 02:49:48.316451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f4d00 with addr=10.0.0.2, port=4420 00:29:39.941 [2024-11-17 02:49:48.316474] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4d00 is same with the state(6) to be set 00:29:39.941 [2024-11-17 02:49:48.316595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.941 [2024-11-17 02:49:48.316629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f4300 with addr=10.0.0.2, port=4420 00:29:39.941 [2024-11-17 02:49:48.316651] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4300 is same with the state(6) to be set 00:29:39.941 [2024-11-17 02:49:48.316777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.941 [2024-11-17 02:49:48.316810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7f00 with addr=10.0.0.2, port=4420 00:29:39.941 [2024-11-17 02:49:48.316833] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7f00 is same with the state(6) to be set 
00:29:39.941 [2024-11-17 02:49:48.316968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.941 [2024-11-17 02:49:48.317003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:29:39.941 [2024-11-17 02:49:48.317027] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:29:39.941 [2024-11-17 02:49:48.317147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.941 [2024-11-17 02:49:48.317182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:29:39.941 [2024-11-17 02:49:48.317206] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(6) to be set 00:29:39.941 [2024-11-17 02:49:48.317234] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f3900 (9): Bad file descriptor 00:29:39.941 [2024-11-17 02:49:48.317263] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2f00 (9): Bad file descriptor 00:29:39.941 [2024-11-17 02:49:48.317351] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4d00 (9): Bad file descriptor 00:29:39.941 [2024-11-17 02:49:48.317387] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4300 (9): Bad file descriptor 00:29:39.941 [2024-11-17 02:49:48.317439] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7f00 (9): Bad file descriptor 00:29:39.941 [2024-11-17 02:49:48.317468] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:29:39.941 [2024-11-17 02:49:48.317495] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:29:39.941 [2024-11-17 02:49:48.317518] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:29:39.941 [2024-11-17 02:49:48.317538] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:29:39.941 [2024-11-17 02:49:48.317558] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:29:39.941 [2024-11-17 02:49:48.317577] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:29:39.941 [2024-11-17 02:49:48.317598] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:29:39.941 [2024-11-17 02:49:48.317617] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:29:39.941 [2024-11-17 02:49:48.317636] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:29:39.941 [2024-11-17 02:49:48.317654] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:29:39.941 [2024-11-17 02:49:48.317716] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:29:39.941 [2024-11-17 02:49:48.317742] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:29:39.941 [2024-11-17 02:49:48.317763] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:29:39.941 [2024-11-17 02:49:48.317782] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 
00:29:39.941 [2024-11-17 02:49:48.317804] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:29:39.941 [2024-11-17 02:49:48.317823] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:29:39.941 [2024-11-17 02:49:48.317842] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:29:39.941 [2024-11-17 02:49:48.317861] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:29:39.941 [2024-11-17 02:49:48.317881] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:29:39.941 [2024-11-17 02:49:48.317901] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:29:39.941 [2024-11-17 02:49:48.317920] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:29:39.942 [2024-11-17 02:49:48.317938] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:29:39.942 [2024-11-17 02:49:48.317957] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:29:39.942 [2024-11-17 02:49:48.317975] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:29:39.942 [2024-11-17 02:49:48.318008] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:29:39.942 [2024-11-17 02:49:48.318028] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:29:39.942 [2024-11-17 02:49:48.318050] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:29:39.942 [2024-11-17 02:49:48.318072] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:29:39.942 [2024-11-17 02:49:48.318119] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:29:39.942 [2024-11-17 02:49:48.318140] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:29:43.223 02:49:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:29:43.791 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 3057201 00:29:43.791 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:29:43.791 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3057201 00:29:43.791 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:29:43.791 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:43.791 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:29:43.791 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:43.791 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 3057201 00:29:43.791 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:29:43.791 02:49:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:43.791 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:29:43.791 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:29:43.791 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:29:43.791 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:43.791 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:29:43.791 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:43.791 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:43.791 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:43.791 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:43.791 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:43.791 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:29:43.792 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:43.792 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:29:43.792 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # 
for i in {1..20} 00:29:43.792 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:43.792 rmmod nvme_tcp 00:29:43.792 rmmod nvme_fabrics 00:29:43.792 rmmod nvme_keyring 00:29:43.792 02:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:43.792 02:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:29:43.792 02:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:29:43.792 02:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 3056915 ']' 00:29:43.792 02:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 3056915 00:29:43.792 02:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 3056915 ']' 00:29:43.792 02:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 3056915 00:29:43.792 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3056915) - No such process 00:29:43.792 02:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 3056915 is not found' 00:29:43.792 Process with pid 3056915 is not found 00:29:43.792 02:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:43.792 02:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:43.792 02:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:43.792 02:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:29:43.792 
02:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:29:43.792 02:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:43.792 02:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:29:43.792 02:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:43.792 02:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:43.792 02:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:43.792 02:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:43.792 02:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:45.697 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:45.697 00:29:45.697 real 0m11.509s 00:29:45.697 user 0m34.041s 00:29:45.697 sys 0m1.992s 00:29:45.697 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:45.697 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:45.697 ************************************ 00:29:45.697 END TEST nvmf_shutdown_tc3 00:29:45.697 ************************************ 00:29:45.698 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:29:45.698 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:29:45.698 02:49:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:29:45.698 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:45.698 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:45.698 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:45.698 ************************************ 00:29:45.698 START TEST nvmf_shutdown_tc4 00:29:45.698 ************************************ 00:29:45.698 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:29:45.698 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:29:45.698 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:45.698 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:45.698 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:45.698 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:45.698 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:45.698 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:45.698 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:45.698 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:45.698 02:49:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:45.698 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:45.698 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:45.698 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:45.698 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:45.698 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:45.698 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:45.698 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:45.698 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:45.698 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:45.698 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:45.698 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:45.698 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:29:45.698 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:45.698 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:29:45.698 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 
-- # local -ga e810 00:29:45.698 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:29:45.698 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:29:45.698 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:29:45.698 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:45.698 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:45.698 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:45.698 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:45.698 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:45.698 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:45.698 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:45.698 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:45.698 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:45.698 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:45.698 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:45.698 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:45.698 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:45.698 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:45.698 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:45.698 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:45.698 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:45.698 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:45.698 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:45.698 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:45.698 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:45.698 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:45.698 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:45.698 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:45.698 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:45.698 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:45.698 
02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:45.698 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:45.698 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:45.698 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:45.698 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:45.698 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:45.698 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:45.698 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:45.698 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:45.698 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:45.698 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:45.698 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:45.698 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:45.698 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:45.698 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:45.698 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:45.698 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:45.698 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:45.698 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:45.698 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:45.698 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:45.698 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:45.698 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:45.698 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:45.699 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:45.699 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:45.699 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:45.699 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:45.699 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:45.699 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:45.699 Found net devices under 0000:0a:00.1: cvl_0_1 
00:29:45.699 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:45.699 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:45.699 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:29:45.699 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:45.699 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:45.699 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:45.699 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:45.699 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:45.699 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:45.699 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:45.699 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:45.699 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:45.699 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:45.699 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:45.699 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:45.699 02:49:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:45.699 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:45.699 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:45.699 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:45.699 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:45.699 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:45.957 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:45.957 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:45.957 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:45.957 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:45.957 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:45.957 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:45.957 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT' 00:29:45.957 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:45.957 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:45.957 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:29:45.957 00:29:45.957 --- 10.0.0.2 ping statistics --- 00:29:45.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:45.957 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:29:45.957 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:45.957 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:45.957 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:29:45.957 00:29:45.957 --- 10.0.0.1 ping statistics --- 00:29:45.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:45.957 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:29:45.957 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:45.957 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:29:45.957 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:45.957 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:45.957 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:45.957 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:45.957 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:45.957 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:45.957 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:45.957 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:45.957 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:45.957 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:45.957 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:45.957 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=3058488 00:29:45.957 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:45.957 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 3058488 00:29:45.957 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 3058488 ']' 00:29:45.957 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:45.957 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:45.957 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:29:45.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:45.957 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:45.957 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:45.957 [2024-11-17 02:49:54.361367] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:29:45.957 [2024-11-17 02:49:54.361527] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:46.216 [2024-11-17 02:49:54.512936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:46.216 [2024-11-17 02:49:54.654468] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:46.216 [2024-11-17 02:49:54.654558] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:46.216 [2024-11-17 02:49:54.654584] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:46.216 [2024-11-17 02:49:54.654608] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:46.216 [2024-11-17 02:49:54.654629] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:46.216 [2024-11-17 02:49:54.657573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:46.216 [2024-11-17 02:49:54.657675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:46.216 [2024-11-17 02:49:54.657722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:46.216 [2024-11-17 02:49:54.657729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:47.149 02:49:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:47.149 02:49:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:29:47.149 02:49:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:47.149 02:49:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:47.149 02:49:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:47.149 02:49:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:47.149 02:49:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:47.149 02:49:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.149 02:49:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:47.149 [2024-11-17 02:49:55.385644] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:47.149 02:49:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.149 02:49:55 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:47.149 02:49:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:47.149 02:49:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:47.149 02:49:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:47.149 02:49:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:47.149 02:49:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:47.149 02:49:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:47.149 02:49:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:47.149 02:49:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:47.149 02:49:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:47.149 02:49:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:47.149 02:49:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:47.149 02:49:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:47.149 02:49:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:47.150 02:49:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:29:47.150 02:49:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:47.150 02:49:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:47.150 02:49:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:47.150 02:49:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:47.150 02:49:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:47.150 02:49:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:47.150 02:49:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:47.150 02:49:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:47.150 02:49:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:47.150 02:49:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:47.150 02:49:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:47.150 02:49:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.150 02:49:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:47.150 Malloc1 00:29:47.150 [2024-11-17 02:49:55.527427] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:47.150 Malloc2 00:29:47.408 Malloc3 00:29:47.408 Malloc4 00:29:47.665 Malloc5 00:29:47.665 Malloc6 00:29:47.665 Malloc7 00:29:47.923 Malloc8 00:29:47.923 Malloc9 
00:29:48.181 Malloc10
00:29:48.181 02:49:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:48.181 02:49:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems
00:29:48.181 02:49:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable
00:29:48.181 02:49:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:29:48.181 02:49:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=3058776
00:29:48.181 02:49:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4
00:29:48.181 02:49:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5
00:29:48.181 [2024-11-17 02:49:56.583614] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
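[editorial sketch] The trace above backgrounds the I/O generator, records its pid in perfpid, and sleeps before shutting the target down; the later trap line reaps that pid on abort. A minimal sketch of that launch-and-cleanup pattern, with `sleep` standing in for the spdk_nvme_perf binary (its real flags are in the log above) and a shortened wait, might look like:

```shell
#!/usr/bin/env bash
# Start the workload in the background, capture its pid, and install a
# trap so the workload is killed even if the test aborts early.
# `sleep 20` is only a placeholder for the spdk_nvme_perf command shown
# in the trace; the test itself sleeps 5s here to let the initiator connect.
sleep 20 &
perfpid=$!
trap 'kill -9 "$perfpid" 2>/dev/null || true; exit 1' SIGINT SIGTERM
sleep 1
```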
00:29:53.452 02:50:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:29:53.452 02:50:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 3058488
00:29:53.452 02:50:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 3058488 ']'
00:29:53.452 02:50:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 3058488
00:29:53.452 02:50:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname
00:29:53.452 02:50:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:53.452 02:50:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3058488
00:29:53.452 02:50:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:29:53.452 02:50:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:29:53.452 02:50:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3058488'
killing process with pid 3058488
00:29:53.452 02:50:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 3058488
00:29:53.452 02:50:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 3058488
00:29:53.452 Write completed with error (sct=0, sc=8)
00:29:53.452 Write completed with error (sct=0, sc=8)
00:29:53.452 Write completed with error (sct=0, sc=8)
00:29:53.452 Write completed with
error (sct=0, sc=8)
00:29:53.452 starting I/O failed: -6
[long run of interleaved "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" records trimmed]
00:29:53.453 [2024-11-17 02:50:01.508689] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
[run of "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records trimmed]
00:29:53.453 [2024-11-17 02:50:01.510858] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
[run of "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records trimmed]
00:29:53.454 [2024-11-17 02:50:01.513682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
[run of "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records trimmed]
00:29:53.454 [2024-11-17 02:50:01.523486] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:53.454 NVMe io qpair process completion error
[run of "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records trimmed]
00:29:53.454 [2024-11-17 02:50:01.529453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
[run of "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records trimmed]
00:29:53.455 [2024-11-17 02:50:01.531658] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
[run of "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records trimmed]
00:29:53.456 [2024-11-17 02:50:01.534303] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
[run of "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records trimmed]
00:29:53.456 [2024-11-17 02:50:01.543875] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:53.456 NVMe io qpair process completion error
00:29:53.456 Write completed with error (sct=0, sc=8)
00:29:53.456 Write completed with error (sct=0, sc=8)
00:29:53.456 Write completed with error (sct=0, sc=8)
00:29:53.456 starting
I/O failed: -6 00:29:53.456 Write completed with error (sct=0, sc=8) 00:29:53.456 Write completed with error (sct=0, sc=8) 00:29:53.456 Write completed with error (sct=0, sc=8) 00:29:53.456 Write completed with error (sct=0, sc=8) 00:29:53.456 starting I/O failed: -6 00:29:53.456 Write completed with error (sct=0, sc=8) 00:29:53.456 Write completed with error (sct=0, sc=8) 00:29:53.456 Write completed with error (sct=0, sc=8) 00:29:53.456 Write completed with error (sct=0, sc=8) 00:29:53.456 starting I/O failed: -6 00:29:53.456 Write completed with error (sct=0, sc=8) 00:29:53.456 Write completed with error (sct=0, sc=8) 00:29:53.456 Write completed with error (sct=0, sc=8) 00:29:53.456 Write completed with error (sct=0, sc=8) 00:29:53.456 starting I/O failed: -6 00:29:53.456 Write completed with error (sct=0, sc=8) 00:29:53.456 Write completed with error (sct=0, sc=8) 00:29:53.456 Write completed with error (sct=0, sc=8) 00:29:53.456 Write completed with error (sct=0, sc=8) 00:29:53.456 starting I/O failed: -6 00:29:53.456 Write completed with error (sct=0, sc=8) 00:29:53.456 Write completed with error (sct=0, sc=8) 00:29:53.456 Write completed with error (sct=0, sc=8) 00:29:53.456 Write completed with error (sct=0, sc=8) 00:29:53.456 starting I/O failed: -6 00:29:53.456 Write completed with error (sct=0, sc=8) 00:29:53.456 Write completed with error (sct=0, sc=8) 00:29:53.456 Write completed with error (sct=0, sc=8) 00:29:53.456 Write completed with error (sct=0, sc=8) 00:29:53.456 starting I/O failed: -6 00:29:53.456 Write completed with error (sct=0, sc=8) 00:29:53.456 Write completed with error (sct=0, sc=8) 00:29:53.456 Write completed with error (sct=0, sc=8) 00:29:53.456 Write completed with error (sct=0, sc=8) 00:29:53.456 starting I/O failed: -6 00:29:53.456 Write completed with error (sct=0, sc=8) 00:29:53.456 Write completed with error (sct=0, sc=8) 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 Write completed with error (sct=0, 
sc=8) 00:29:53.457 starting I/O failed: -6 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 starting I/O failed: -6 00:29:53.457 [2024-11-17 02:50:01.546036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 starting I/O failed: -6 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 starting I/O failed: -6 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 starting I/O failed: -6 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 starting I/O failed: -6 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 starting I/O failed: -6 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 starting I/O failed: -6 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 starting I/O failed: -6 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 starting I/O failed: -6 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 starting I/O failed: -6 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 starting I/O failed: -6 00:29:53.457 Write completed with error (sct=0, 
sc=8) 00:29:53.457 starting I/O failed: -6 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 starting I/O failed: -6 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 starting I/O failed: -6 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 starting I/O failed: -6 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 starting I/O failed: -6 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 starting I/O failed: -6 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 starting I/O failed: -6 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 starting I/O failed: -6 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 starting I/O failed: -6 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 starting I/O failed: -6 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 starting I/O failed: -6 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 starting I/O failed: -6 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 starting I/O failed: -6 00:29:53.457 [2024-11-17 02:50:01.548348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:53.457 
Write completed with error (sct=0, sc=8) 00:29:53.457 starting I/O failed: -6 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 starting I/O failed: -6 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 starting I/O failed: -6 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 starting I/O failed: -6 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 starting I/O failed: -6 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 starting I/O failed: -6 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 starting I/O failed: -6 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 starting I/O failed: -6 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 starting I/O failed: -6 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 starting I/O failed: -6 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 starting I/O failed: -6 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 starting I/O failed: -6 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 starting I/O failed: -6 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 starting I/O failed: -6 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 starting I/O failed: -6 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 starting I/O failed: -6 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 starting I/O failed: -6 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 starting I/O failed: -6 00:29:53.457 Write completed with error (sct=0, 
sc=8) 00:29:53.457 starting I/O failed: -6 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 starting I/O failed: -6 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 starting I/O failed: -6 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 starting I/O failed: -6 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 starting I/O failed: -6 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 starting I/O failed: -6 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 starting I/O failed: -6 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 starting I/O failed: -6 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 starting I/O failed: -6 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 starting I/O failed: -6 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 starting I/O failed: -6 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 starting I/O failed: -6 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 starting I/O failed: -6 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 starting I/O failed: -6 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 starting I/O failed: -6 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 starting I/O failed: -6 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 Write completed with error (sct=0, sc=8) 00:29:53.457 starting I/O failed: -6 00:29:53.457 [2024-11-17 02:50:01.550964] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device 
or address) on qpair id 3 00:29:53.457 starting I/O failed: -6 00:29:53.458 Write completed with error (sct=0, sc=8) 00:29:53.458 starting I/O failed: -6 00:29:53.458 Write completed with error (sct=0, sc=8) 00:29:53.458 starting I/O failed: -6 00:29:53.458 Write completed with error (sct=0, sc=8) 00:29:53.458 starting I/O failed: -6 00:29:53.458 Write completed with error (sct=0, sc=8) 00:29:53.458 starting I/O failed: -6 00:29:53.458 Write completed with error (sct=0, sc=8) 00:29:53.458 starting I/O failed: -6 00:29:53.458 Write completed with error (sct=0, sc=8) 00:29:53.458 starting I/O failed: -6 00:29:53.458 Write completed with error (sct=0, sc=8) 00:29:53.458 starting I/O failed: -6 00:29:53.458 Write completed with error (sct=0, sc=8) 00:29:53.458 starting I/O failed: -6 00:29:53.458 Write completed with error (sct=0, sc=8) 00:29:53.458 starting I/O failed: -6 00:29:53.458 Write completed with error (sct=0, sc=8) 00:29:53.458 starting I/O failed: -6 00:29:53.458 Write completed with error (sct=0, sc=8) 00:29:53.458 starting I/O failed: -6 00:29:53.458 Write completed with error (sct=0, sc=8) 00:29:53.458 starting I/O failed: -6 00:29:53.458 Write completed with error (sct=0, sc=8) 00:29:53.458 starting I/O failed: -6 00:29:53.458 Write completed with error (sct=0, sc=8) 00:29:53.458 starting I/O failed: -6 00:29:53.458 Write completed with error (sct=0, sc=8) 00:29:53.458 starting I/O failed: -6 00:29:53.458 Write completed with error (sct=0, sc=8) 00:29:53.458 starting I/O failed: -6 00:29:53.458 Write completed with error (sct=0, sc=8) 00:29:53.458 starting I/O failed: -6 00:29:53.458 Write completed with error (sct=0, sc=8) 00:29:53.458 starting I/O failed: -6 00:29:53.458 Write completed with error (sct=0, sc=8) 00:29:53.458 starting I/O failed: -6 00:29:53.458 Write completed with error (sct=0, sc=8) 00:29:53.458 starting I/O failed: -6 00:29:53.458 Write completed with error (sct=0, sc=8) 00:29:53.458 starting I/O failed: -6 00:29:53.458 Write 
completed with error (sct=0, sc=8) 00:29:53.458 starting I/O failed: -6 00:29:53.458 Write completed with error (sct=0, sc=8) 00:29:53.458 starting I/O failed: -6 00:29:53.458 Write completed with error (sct=0, sc=8) 00:29:53.458 starting I/O failed: -6 00:29:53.458 Write completed with error (sct=0, sc=8) 00:29:53.458 starting I/O failed: -6 00:29:53.458 Write completed with error (sct=0, sc=8) 00:29:53.458 starting I/O failed: -6 00:29:53.458 Write completed with error (sct=0, sc=8) 00:29:53.458 starting I/O failed: -6 00:29:53.458 Write completed with error (sct=0, sc=8) 00:29:53.458 starting I/O failed: -6 00:29:53.458 Write completed with error (sct=0, sc=8) 00:29:53.458 starting I/O failed: -6 00:29:53.458 Write completed with error (sct=0, sc=8) 00:29:53.458 starting I/O failed: -6 00:29:53.458 Write completed with error (sct=0, sc=8) 00:29:53.458 starting I/O failed: -6 00:29:53.458 Write completed with error (sct=0, sc=8) 00:29:53.458 starting I/O failed: -6 00:29:53.458 Write completed with error (sct=0, sc=8) 00:29:53.458 starting I/O failed: -6 00:29:53.458 Write completed with error (sct=0, sc=8) 00:29:53.458 starting I/O failed: -6 00:29:53.458 Write completed with error (sct=0, sc=8) 00:29:53.458 starting I/O failed: -6 00:29:53.458 Write completed with error (sct=0, sc=8) 00:29:53.458 starting I/O failed: -6 00:29:53.458 Write completed with error (sct=0, sc=8) 00:29:53.458 starting I/O failed: -6 00:29:53.458 Write completed with error (sct=0, sc=8) 00:29:53.458 starting I/O failed: -6 00:29:53.458 Write completed with error (sct=0, sc=8) 00:29:53.458 starting I/O failed: -6 00:29:53.458 Write completed with error (sct=0, sc=8) 00:29:53.458 starting I/O failed: -6 00:29:53.458 Write completed with error (sct=0, sc=8) 00:29:53.458 starting I/O failed: -6 00:29:53.458 Write completed with error (sct=0, sc=8) 00:29:53.458 starting I/O failed: -6 00:29:53.458 Write completed with error (sct=0, sc=8) 00:29:53.458 starting I/O failed: -6 00:29:53.458 
Write completed with error (sct=0, sc=8) 00:29:53.458 starting I/O failed: -6 00:29:53.458 Write completed with error (sct=0, sc=8) 00:29:53.458 starting I/O failed: -6 00:29:53.458 Write completed with error (sct=0, sc=8) 00:29:53.458 starting I/O failed: -6 00:29:53.458 Write completed with error (sct=0, sc=8) 00:29:53.458 starting I/O failed: -6 00:29:53.458 Write completed with error (sct=0, sc=8) 00:29:53.458 starting I/O failed: -6 00:29:53.458 Write completed with error (sct=0, sc=8) 00:29:53.458 starting I/O failed: -6 00:29:53.458 Write completed with error (sct=0, sc=8) 00:29:53.458 starting I/O failed: -6 00:29:53.458 Write completed with error (sct=0, sc=8) 00:29:53.458 starting I/O failed: -6 00:29:53.458 Write completed with error (sct=0, sc=8) 00:29:53.458 starting I/O failed: -6 00:29:53.458 Write completed with error (sct=0, sc=8) 00:29:53.458 starting I/O failed: -6 00:29:53.458 Write completed with error (sct=0, sc=8) 00:29:53.458 starting I/O failed: -6 00:29:53.458 Write completed with error (sct=0, sc=8) 00:29:53.458 starting I/O failed: -6 00:29:53.458 Write completed with error (sct=0, sc=8) 00:29:53.458 starting I/O failed: -6 00:29:53.458 Write completed with error (sct=0, sc=8) 00:29:53.458 starting I/O failed: -6 00:29:53.458 Write completed with error (sct=0, sc=8) 00:29:53.458 starting I/O failed: -6 00:29:53.458 Write completed with error (sct=0, sc=8) 00:29:53.458 starting I/O failed: -6 00:29:53.458 [2024-11-17 02:50:01.564485] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:53.458 NVMe io qpair process completion error 00:29:53.458 Write completed with error (sct=0, sc=8) 00:29:53.458 starting I/O failed: -6 00:29:53.458 Write completed with error (sct=0, sc=8) 00:29:53.458 Write completed with error (sct=0, sc=8) 00:29:53.458 Write completed with error (sct=0, sc=8) 00:29:53.458 Write completed with error (sct=0, 
sc=8) 00:29:53.458 starting I/O failed: -6 00:29:53.458 Write completed with error (sct=0, sc=8) 00:29:53.458 Write completed with error (sct=0, sc=8) 00:29:53.458 Write completed with error (sct=0, sc=8) 00:29:53.458 Write completed with error (sct=0, sc=8) 00:29:53.458 starting I/O failed: -6 00:29:53.458 Write completed with error (sct=0, sc=8) 00:29:53.458 Write completed with error (sct=0, sc=8) 00:29:53.458 Write completed with error (sct=0, sc=8) 00:29:53.458 Write completed with error (sct=0, sc=8) 00:29:53.458 starting I/O failed: -6 00:29:53.458 Write completed with error (sct=0, sc=8) 00:29:53.458 Write completed with error (sct=0, sc=8) 00:29:53.458 Write completed with error (sct=0, sc=8) 00:29:53.458 Write completed with error (sct=0, sc=8) 00:29:53.458 starting I/O failed: -6 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 starting I/O failed: -6 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 starting I/O failed: -6 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 starting I/O failed: -6 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 starting I/O failed: -6 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 Write 
completed with error (sct=0, sc=8) 00:29:53.459 starting I/O failed: -6 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 [2024-11-17 02:50:01.566911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 starting I/O failed: -6 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 starting I/O failed: -6 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 starting I/O failed: -6 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 starting I/O failed: -6 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 starting I/O failed: -6 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 starting I/O failed: -6 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 starting I/O failed: -6 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 starting I/O failed: -6 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 starting I/O failed: -6 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 starting I/O failed: -6 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 starting I/O failed: -6 00:29:53.459 Write 
completed with error (sct=0, sc=8) 00:29:53.459 starting I/O failed: -6 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 starting I/O failed: -6 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 starting I/O failed: -6 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 starting I/O failed: -6 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 starting I/O failed: -6 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 starting I/O failed: -6 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 starting I/O failed: -6 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 starting I/O failed: -6 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 starting I/O failed: -6 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 starting I/O failed: -6 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 starting I/O failed: -6 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 [2024-11-17 02:50:01.569064] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 starting I/O failed: -6 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 starting 
I/O failed: -6 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 starting I/O failed: -6 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 starting I/O failed: -6 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 starting I/O failed: -6 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 starting I/O failed: -6 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 starting I/O failed: -6 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 starting I/O failed: -6 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 starting I/O failed: -6 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 starting I/O failed: -6 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 starting I/O failed: -6 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 starting I/O failed: -6 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 starting I/O failed: -6 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 starting I/O failed: -6 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 starting I/O failed: -6 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 starting I/O failed: -6 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 starting I/O failed: -6 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 starting I/O failed: -6 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 Write completed with error (sct=0, sc=8) 00:29:53.459 starting I/O failed: -6 00:29:53.460 Write completed with error (sct=0, sc=8) 00:29:53.460 starting I/O failed: -6 00:29:53.460 Write 
completed with error (sct=0, sc=8) 00:29:53.460 starting I/O failed: -6 00:29:53.460 Write completed with error (sct=0, sc=8) 00:29:53.460 Write completed with error (sct=0, sc=8) 00:29:53.460 starting I/O failed: -6 00:29:53.460 Write completed with error (sct=0, sc=8) 00:29:53.460 starting I/O failed: -6 00:29:53.460 Write completed with error (sct=0, sc=8) 00:29:53.460 starting I/O failed: -6 00:29:53.460 Write completed with error (sct=0, sc=8) 00:29:53.460 Write completed with error (sct=0, sc=8) 00:29:53.460 starting I/O failed: -6 00:29:53.460 Write completed with error (sct=0, sc=8) 00:29:53.460 starting I/O failed: -6 00:29:53.460 Write completed with error (sct=0, sc=8) 00:29:53.460 starting I/O failed: -6 00:29:53.460 Write completed with error (sct=0, sc=8) 00:29:53.460 Write completed with error (sct=0, sc=8) 00:29:53.460 starting I/O failed: -6 00:29:53.460 Write completed with error (sct=0, sc=8) 00:29:53.460 starting I/O failed: -6 00:29:53.460 Write completed with error (sct=0, sc=8) 00:29:53.460 starting I/O failed: -6 00:29:53.460 Write completed with error (sct=0, sc=8) 00:29:53.460 Write completed with error (sct=0, sc=8) 00:29:53.460 starting I/O failed: -6 00:29:53.460 Write completed with error (sct=0, sc=8) 00:29:53.460 starting I/O failed: -6 00:29:53.460 Write completed with error (sct=0, sc=8) 00:29:53.460 starting I/O failed: -6 00:29:53.460 Write completed with error (sct=0, sc=8) 00:29:53.460 Write completed with error (sct=0, sc=8) 00:29:53.460 starting I/O failed: -6 00:29:53.460 Write completed with error (sct=0, sc=8) 00:29:53.460 starting I/O failed: -6 00:29:53.460 Write completed with error (sct=0, sc=8) 00:29:53.460 starting I/O failed: -6 00:29:53.460 [2024-11-17 02:50:01.571728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.460 Write completed with error (sct=0, sc=8) 00:29:53.460 starting I/O failed: 
-6 00:29:53.460 Write completed with error (sct=0, sc=8) 00:29:53.460 starting I/O failed: -6 00:29:53.460 Write completed with error (sct=0, sc=8) 00:29:53.460 starting I/O failed: -6 00:29:53.460 Write completed with error (sct=0, sc=8) 00:29:53.460 starting I/O failed: -6 00:29:53.460 Write completed with error (sct=0, sc=8) 00:29:53.460 starting I/O failed: -6 00:29:53.460 Write completed with error (sct=0, sc=8) 00:29:53.460 starting I/O failed: -6 00:29:53.460 Write completed with error (sct=0, sc=8) 00:29:53.460 starting I/O failed: -6 00:29:53.460 Write completed with error (sct=0, sc=8) 00:29:53.460 starting I/O failed: -6 00:29:53.460 Write completed with error (sct=0, sc=8) 00:29:53.460 starting I/O failed: -6 00:29:53.460 Write completed with error (sct=0, sc=8) 00:29:53.460 starting I/O failed: -6 00:29:53.460 Write completed with error (sct=0, sc=8) 00:29:53.460 starting I/O failed: -6 00:29:53.460 Write completed with error (sct=0, sc=8) 00:29:53.460 starting I/O failed: -6 00:29:53.460 Write completed with error (sct=0, sc=8) 00:29:53.460 starting I/O failed: -6 00:29:53.460 Write completed with error (sct=0, sc=8) 00:29:53.460 starting I/O failed: -6 00:29:53.460 Write completed with error (sct=0, sc=8) 00:29:53.460 starting I/O failed: -6 00:29:53.460 Write completed with error (sct=0, sc=8) 00:29:53.460 starting I/O failed: -6 00:29:53.460 Write completed with error (sct=0, sc=8) 00:29:53.460 starting I/O failed: -6 00:29:53.460 Write completed with error (sct=0, sc=8) 00:29:53.460 starting I/O failed: -6 00:29:53.460 Write completed with error (sct=0, sc=8) 00:29:53.460 starting I/O failed: -6 00:29:53.460 Write completed with error (sct=0, sc=8) 00:29:53.460 starting I/O failed: -6 00:29:53.460 Write completed with error (sct=0, sc=8) 00:29:53.460 starting I/O failed: -6 00:29:53.460 Write completed with error (sct=0, sc=8) 00:29:53.460 starting I/O failed: -6 00:29:53.460 Write completed with error (sct=0, sc=8) 00:29:53.460 starting I/O 
failed: -6
00:29:53.460 Write completed with error (sct=0, sc=8)
00:29:53.460 starting I/O failed: -6
00:29:53.460 [2024-11-17 02:50:01.584200] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:53.460 NVMe io qpair process completion error
00:29:53.461 Write completed with error (sct=0, sc=8)
00:29:53.461 starting I/O failed: -6
00:29:53.461 [2024-11-17 02:50:01.586275] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:53.461 Write completed with error (sct=0, sc=8)
00:29:53.461 starting I/O failed: -6
00:29:53.461 [2024-11-17 02:50:01.588507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:53.462 Write completed with error (sct=0, sc=8)
00:29:53.462 starting I/O failed: -6
00:29:53.462 [2024-11-17 02:50:01.591298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:53.462 Write completed with error (sct=0, sc=8)
00:29:53.462 starting I/O failed: -6
00:29:53.462 [2024-11-17 02:50:01.603742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:53.462 NVMe io qpair process completion error
00:29:53.463 Write completed with error (sct=0, sc=8)
00:29:53.463 starting I/O failed: -6
00:29:53.463 [2024-11-17 02:50:01.605857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:53.463 Write completed with error (sct=0, sc=8)
00:29:53.463 starting I/O failed: -6
00:29:53.463 [2024-11-17 02:50:01.607996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:53.464 Write completed with error (sct=0, sc=8)
00:29:53.464 starting I/O failed: -6
00:29:53.464 [2024-11-17 02:50:01.610796] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:53.464 Write completed with error (sct=0, sc=8)
00:29:53.464 starting I/O failed: -6
00:29:53.464 [2024-11-17 02:50:01.621605] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:53.464 NVMe io qpair process completion error
00:29:53.465 Write completed with error (sct=0, sc=8)
00:29:53.465 starting I/O failed: -6
00:29:53.465 [2024-11-17 02:50:01.623781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:53.465 Write completed with error (sct=0, sc=8)
00:29:53.465 starting I/O failed: -6
00:29:53.465 [2024-11-17 02:50:01.626055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:53.465 starting I/O failed: -6
00:29:53.465 Write completed with error (sct=0, sc=8)
00:29:53.465
starting I/O failed: -6 00:29:53.465 Write completed with error (sct=0, sc=8) 00:29:53.465 Write completed with error (sct=0, sc=8) 00:29:53.465 starting I/O failed: -6 00:29:53.465 Write completed with error (sct=0, sc=8) 00:29:53.465 starting I/O failed: -6 00:29:53.465 Write completed with error (sct=0, sc=8) 00:29:53.465 starting I/O failed: -6 00:29:53.465 Write completed with error (sct=0, sc=8) 00:29:53.465 Write completed with error (sct=0, sc=8) 00:29:53.465 starting I/O failed: -6 00:29:53.465 Write completed with error (sct=0, sc=8) 00:29:53.465 starting I/O failed: -6 00:29:53.465 Write completed with error (sct=0, sc=8) 00:29:53.465 starting I/O failed: -6 00:29:53.465 Write completed with error (sct=0, sc=8) 00:29:53.465 Write completed with error (sct=0, sc=8) 00:29:53.465 starting I/O failed: -6 00:29:53.465 Write completed with error (sct=0, sc=8) 00:29:53.465 starting I/O failed: -6 00:29:53.465 Write completed with error (sct=0, sc=8) 00:29:53.465 starting I/O failed: -6 00:29:53.465 Write completed with error (sct=0, sc=8) 00:29:53.465 Write completed with error (sct=0, sc=8) 00:29:53.465 starting I/O failed: -6 00:29:53.465 Write completed with error (sct=0, sc=8) 00:29:53.465 starting I/O failed: -6 00:29:53.465 Write completed with error (sct=0, sc=8) 00:29:53.465 starting I/O failed: -6 00:29:53.465 Write completed with error (sct=0, sc=8) 00:29:53.465 Write completed with error (sct=0, sc=8) 00:29:53.465 starting I/O failed: -6 00:29:53.465 Write completed with error (sct=0, sc=8) 00:29:53.465 starting I/O failed: -6 00:29:53.465 Write completed with error (sct=0, sc=8) 00:29:53.465 starting I/O failed: -6 00:29:53.465 Write completed with error (sct=0, sc=8) 00:29:53.465 Write completed with error (sct=0, sc=8) 00:29:53.465 starting I/O failed: -6 00:29:53.465 Write completed with error (sct=0, sc=8) 00:29:53.465 starting I/O failed: -6 00:29:53.465 Write completed with error (sct=0, sc=8) 00:29:53.465 starting I/O failed: -6 00:29:53.465 
Write completed with error (sct=0, sc=8) 00:29:53.465 Write completed with error (sct=0, sc=8) 00:29:53.465 starting I/O failed: -6 00:29:53.465 Write completed with error (sct=0, sc=8) 00:29:53.465 starting I/O failed: -6 00:29:53.465 Write completed with error (sct=0, sc=8) 00:29:53.465 starting I/O failed: -6 00:29:53.465 Write completed with error (sct=0, sc=8) 00:29:53.465 Write completed with error (sct=0, sc=8) 00:29:53.465 starting I/O failed: -6 00:29:53.465 Write completed with error (sct=0, sc=8) 00:29:53.465 starting I/O failed: -6 00:29:53.465 Write completed with error (sct=0, sc=8) 00:29:53.465 starting I/O failed: -6 00:29:53.465 Write completed with error (sct=0, sc=8) 00:29:53.465 Write completed with error (sct=0, sc=8) 00:29:53.465 starting I/O failed: -6 00:29:53.465 Write completed with error (sct=0, sc=8) 00:29:53.465 starting I/O failed: -6 00:29:53.465 Write completed with error (sct=0, sc=8) 00:29:53.465 starting I/O failed: -6 00:29:53.465 Write completed with error (sct=0, sc=8) 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 starting I/O failed: -6 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 starting I/O failed: -6 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 starting I/O failed: -6 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 starting I/O failed: -6 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 starting I/O failed: -6 00:29:53.466 [2024-11-17 02:50:01.628748] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 starting I/O failed: -6 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 starting I/O failed: -6 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 starting I/O 
failed: -6 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 starting I/O failed: -6 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 starting I/O failed: -6 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 starting I/O failed: -6 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 starting I/O failed: -6 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 starting I/O failed: -6 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 starting I/O failed: -6 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 starting I/O failed: -6 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 starting I/O failed: -6 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 starting I/O failed: -6 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 starting I/O failed: -6 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 starting I/O failed: -6 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 starting I/O failed: -6 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 starting I/O failed: -6 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 starting I/O failed: -6 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 starting I/O failed: -6 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 starting I/O failed: -6 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 starting I/O failed: -6 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 starting I/O failed: -6 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 starting I/O failed: -6 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 starting I/O failed: -6 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 starting I/O failed: -6 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 starting 
I/O failed: -6 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 starting I/O failed: -6 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 starting I/O failed: -6 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 starting I/O failed: -6 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 starting I/O failed: -6 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 starting I/O failed: -6 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 starting I/O failed: -6 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 starting I/O failed: -6 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 starting I/O failed: -6 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 starting I/O failed: -6 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 starting I/O failed: -6 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 starting I/O failed: -6 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 starting I/O failed: -6 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 starting I/O failed: -6 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 starting I/O failed: -6 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 starting I/O failed: -6 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 starting I/O failed: -6 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 starting I/O failed: -6 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 starting I/O failed: -6 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 starting I/O failed: -6 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 starting I/O failed: -6 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 starting I/O failed: -6 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 
starting I/O failed: -6 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 starting I/O failed: -6 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 starting I/O failed: -6 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 starting I/O failed: -6 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 starting I/O failed: -6 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 starting I/O failed: -6 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 starting I/O failed: -6 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 starting I/O failed: -6 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 starting I/O failed: -6 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 starting I/O failed: -6 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 starting I/O failed: -6 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 starting I/O failed: -6 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 starting I/O failed: -6 00:29:53.466 [2024-11-17 02:50:01.641322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:53.466 NVMe io qpair process completion error 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 starting I/O failed: -6 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 starting I/O failed: -6 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 
Write completed with error (sct=0, sc=8) 00:29:53.466 starting I/O failed: -6 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 starting I/O failed: -6 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 starting I/O failed: -6 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 starting I/O failed: -6 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 starting I/O failed: -6 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 starting I/O failed: -6 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.466 Write completed with error (sct=0, sc=8) 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 starting I/O failed: -6 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 starting I/O failed: -6 00:29:53.467 [2024-11-17 02:50:01.643487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport 
error -6 (No such device or address) on qpair id 1 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 starting I/O failed: -6 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 starting I/O failed: -6 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 starting I/O failed: -6 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 starting I/O failed: -6 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 starting I/O failed: -6 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 starting I/O failed: -6 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 starting I/O failed: -6 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 starting I/O failed: -6 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 starting I/O failed: -6 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 starting I/O failed: -6 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 starting I/O failed: -6 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 starting I/O failed: -6 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 starting I/O failed: -6 00:29:53.467 Write completed with error (sct=0, sc=8) 
00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 starting I/O failed: -6 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 starting I/O failed: -6 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 starting I/O failed: -6 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 starting I/O failed: -6 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 starting I/O failed: -6 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 starting I/O failed: -6 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 starting I/O failed: -6 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 starting I/O failed: -6 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 starting I/O failed: -6 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 starting I/O failed: -6 00:29:53.467 [2024-11-17 02:50:01.645736] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 starting I/O failed: -6 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 starting I/O failed: -6 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 starting I/O failed: -6 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 
starting I/O failed: -6 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 starting I/O failed: -6 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 starting I/O failed: -6 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 starting I/O failed: -6 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 starting I/O failed: -6 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 starting I/O failed: -6 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 starting I/O failed: -6 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 starting I/O failed: -6 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 starting I/O failed: -6 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 starting I/O failed: -6 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 starting I/O failed: -6 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 starting I/O failed: -6 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 starting I/O failed: -6 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 starting I/O failed: -6 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 starting I/O failed: -6 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 starting I/O failed: -6 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 starting I/O failed: -6 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 starting I/O failed: -6 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 starting I/O failed: -6 00:29:53.467 
Write completed with error (sct=0, sc=8) 00:29:53.467 starting I/O failed: -6 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 starting I/O failed: -6 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 starting I/O failed: -6 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 starting I/O failed: -6 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 starting I/O failed: -6 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 starting I/O failed: -6 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 starting I/O failed: -6 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 starting I/O failed: -6 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 starting I/O failed: -6 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 starting I/O failed: -6 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 starting I/O failed: -6 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 starting I/O failed: -6 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 starting I/O failed: -6 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 starting I/O failed: -6 00:29:53.467 [2024-11-17 02:50:01.648391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 starting I/O failed: -6 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 starting I/O failed: -6 00:29:53.467 Write completed with error (sct=0, sc=8) 00:29:53.467 starting I/O failed: -6 00:29:53.467 Write completed with error (sct=0, 
sc=8) 00:29:53.468 starting I/O failed: -6 00:29:53.468 Write completed with error (sct=0, sc=8) 00:29:53.468 starting I/O failed: -6 00:29:53.468 Write completed with error (sct=0, sc=8) 00:29:53.468 starting I/O failed: -6 00:29:53.468 Write completed with error (sct=0, sc=8) 00:29:53.468 starting I/O failed: -6 00:29:53.468 Write completed with error (sct=0, sc=8) 00:29:53.468 starting I/O failed: -6 00:29:53.468 Write completed with error (sct=0, sc=8) 00:29:53.468 starting I/O failed: -6 00:29:53.468 Write completed with error (sct=0, sc=8) 00:29:53.468 starting I/O failed: -6 00:29:53.468 Write completed with error (sct=0, sc=8) 00:29:53.468 starting I/O failed: -6 00:29:53.468 Write completed with error (sct=0, sc=8) 00:29:53.468 starting I/O failed: -6 00:29:53.468 Write completed with error (sct=0, sc=8) 00:29:53.468 starting I/O failed: -6 00:29:53.468 Write completed with error (sct=0, sc=8) 00:29:53.468 starting I/O failed: -6 00:29:53.468 Write completed with error (sct=0, sc=8) 00:29:53.468 starting I/O failed: -6 00:29:53.468 Write completed with error (sct=0, sc=8) 00:29:53.468 starting I/O failed: -6 00:29:53.468 Write completed with error (sct=0, sc=8) 00:29:53.468 starting I/O failed: -6 00:29:53.468 Write completed with error (sct=0, sc=8) 00:29:53.468 starting I/O failed: -6 00:29:53.468 Write completed with error (sct=0, sc=8) 00:29:53.468 starting I/O failed: -6 00:29:53.468 Write completed with error (sct=0, sc=8) 00:29:53.468 starting I/O failed: -6 00:29:53.468 Write completed with error (sct=0, sc=8) 00:29:53.468 starting I/O failed: -6 00:29:53.468 Write completed with error (sct=0, sc=8) 00:29:53.468 starting I/O failed: -6 00:29:53.468 Write completed with error (sct=0, sc=8) 00:29:53.468 starting I/O failed: -6 00:29:53.468 Write completed with error (sct=0, sc=8) 00:29:53.468 starting I/O failed: -6 00:29:53.468 Write completed with error (sct=0, sc=8) 00:29:53.468 starting I/O failed: -6 00:29:53.468 Write completed with error 
(sct=0, sc=8) 00:29:53.468 starting I/O failed: -6 00:29:53.468 Write completed with error (sct=0, sc=8) 00:29:53.468 starting I/O failed: -6 00:29:53.468 Write completed with error (sct=0, sc=8) 00:29:53.468 starting I/O failed: -6 00:29:53.468 Write completed with error (sct=0, sc=8) 00:29:53.468 starting I/O failed: -6 00:29:53.468 Write completed with error (sct=0, sc=8) 00:29:53.468 starting I/O failed: -6 00:29:53.468 Write completed with error (sct=0, sc=8) 00:29:53.468 starting I/O failed: -6 00:29:53.468 Write completed with error (sct=0, sc=8) 00:29:53.468 starting I/O failed: -6 00:29:53.468 Write completed with error (sct=0, sc=8) 00:29:53.468 starting I/O failed: -6 00:29:53.468 Write completed with error (sct=0, sc=8) 00:29:53.468 starting I/O failed: -6 00:29:53.468 Write completed with error (sct=0, sc=8) 00:29:53.468 starting I/O failed: -6 00:29:53.468 Write completed with error (sct=0, sc=8) 00:29:53.468 starting I/O failed: -6 00:29:53.468 Write completed with error (sct=0, sc=8) 00:29:53.468 starting I/O failed: -6 00:29:53.468 Write completed with error (sct=0, sc=8) 00:29:53.468 starting I/O failed: -6 00:29:53.468 Write completed with error (sct=0, sc=8) 00:29:53.468 starting I/O failed: -6 00:29:53.468 Write completed with error (sct=0, sc=8) 00:29:53.468 starting I/O failed: -6 00:29:53.468 Write completed with error (sct=0, sc=8) 00:29:53.468 starting I/O failed: -6 00:29:53.468 Write completed with error (sct=0, sc=8) 00:29:53.468 starting I/O failed: -6 00:29:53.468 Write completed with error (sct=0, sc=8) 00:29:53.468 starting I/O failed: -6 00:29:53.468 Write completed with error (sct=0, sc=8) 00:29:53.468 starting I/O failed: -6 00:29:53.468 Write completed with error (sct=0, sc=8) 00:29:53.468 starting I/O failed: -6 00:29:53.468 Write completed with error (sct=0, sc=8) 00:29:53.468 starting I/O failed: -6 00:29:53.468 Write completed with error (sct=0, sc=8) 00:29:53.468 starting I/O failed: -6 00:29:53.468 Write completed with 
error (sct=0, sc=8) 00:29:53.468 starting I/O failed: -6 00:29:53.468 Write completed with error (sct=0, sc=8) 00:29:53.468 starting I/O failed: -6 00:29:53.468 Write completed with error (sct=0, sc=8) 00:29:53.468 starting I/O failed: -6 00:29:53.468 Write completed with error (sct=0, sc=8) 00:29:53.468 starting I/O failed: -6 00:29:53.468 Write completed with error (sct=0, sc=8) 00:29:53.468 starting I/O failed: -6 00:29:53.468 Write completed with error (sct=0, sc=8) 00:29:53.468 starting I/O failed: -6 00:29:53.468 Write completed with error (sct=0, sc=8) 00:29:53.468 starting I/O failed: -6 00:29:53.468 Write completed with error (sct=0, sc=8) 00:29:53.468 starting I/O failed: -6 00:29:53.468 Write completed with error (sct=0, sc=8) 00:29:53.468 starting I/O failed: -6 00:29:53.468 Write completed with error (sct=0, sc=8) 00:29:53.468 starting I/O failed: -6 00:29:53.468 Write completed with error (sct=0, sc=8) 00:29:53.468 starting I/O failed: -6 00:29:53.468 Write completed with error (sct=0, sc=8) 00:29:53.468 starting I/O failed: -6 00:29:53.468 [2024-11-17 02:50:01.660617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:53.468 NVMe io qpair process completion error 00:29:53.468 Write completed with error (sct=0, sc=8) 00:29:53.468 starting I/O failed: -6 00:29:53.468 Write completed with error (sct=0, sc=8) 00:29:53.468 Write completed with error (sct=0, sc=8) 00:29:53.468 Write completed with error (sct=0, sc=8) 00:29:53.468 Write completed with error (sct=0, sc=8) 00:29:53.468 starting I/O failed: -6 00:29:53.468 Write completed with error (sct=0, sc=8) 00:29:53.468 Write completed with error (sct=0, sc=8) 00:29:53.468 Write completed with error (sct=0, sc=8) 00:29:53.468 Write completed with error (sct=0, sc=8) 00:29:53.468 starting I/O failed: -6 00:29:53.468 Write completed with error (sct=0, sc=8) 00:29:53.468 Write completed with 
error (sct=0, sc=8)
00:29:53.468 Write completed with error (sct=0, sc=8)
00:29:53.468 starting I/O failed: -6
00:29:53.468 [... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" repeated for every queued write, interleaved between the qpair errors below; duplicates trimmed ...]
00:29:53.468 [2024-11-17 02:50:01.662502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:53.469 [2024-11-17 02:50:01.664750] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:53.469 [2024-11-17 02:50:01.667461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:53.470 [2024-11-17 02:50:01.683015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:53.470 NVMe io qpair process completion error
00:29:53.470 [2024-11-17 02:50:01.685149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:53.471 [2024-11-17 02:50:01.687121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:53.471 [2024-11-17 02:50:01.690087] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:53.472 [2024-11-17 02:50:01.702564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:53.472 NVMe io qpair process completion error
00:29:53.472 Initializing NVMe Controllers
00:29:53.472 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:53.472 Controller IO queue size 128, less than required.
00:29:53.472 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:53.472 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:29:53.472 Controller IO queue size 128, less than required.
00:29:53.472 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:53.472 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:29:53.472 Controller IO queue size 128, less than required.
00:29:53.472 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:53.472 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:29:53.472 Controller IO queue size 128, less than required.
00:29:53.472 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:53.472 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:29:53.472 Controller IO queue size 128, less than required.
00:29:53.472 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:53.472 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:29:53.472 Controller IO queue size 128, less than required.
00:29:53.472 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:53.472 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:29:53.472 Controller IO queue size 128, less than required.
00:29:53.472 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:53.472 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:29:53.472 Controller IO queue size 128, less than required.
00:29:53.472 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:53.472 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:29:53.472 Controller IO queue size 128, less than required.
00:29:53.472 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:53.472 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:29:53.472 Controller IO queue size 128, less than required.
00:29:53.472 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:53.472 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:29:53.472 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:29:53.472 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:29:53.472 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:29:53.472 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:29:53.472 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:29:53.472 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:29:53.472 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:29:53.472 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:29:53.472 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:29:53.472 Initialization complete. Launching workers.
00:29:53.472 ========================================================
00:29:53.472                                                Latency(us)
00:29:53.472 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:29:53.472 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:  1407.03      60.46   87613.24    1847.81  195967.28
00:29:53.472 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0:  1393.03      59.86   91741.17    1714.18  278980.67
00:29:53.472 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0:  1376.91      59.16   89562.86    2207.95  185383.71
00:29:53.472 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0:  1387.31      59.61   89040.58    1573.83  177621.30
00:29:53.472 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1382.00      59.38   89603.57    2201.71  187339.44
00:29:53.472 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0:  1369.28      58.84   90637.99    2120.67  170299.91
00:29:53.472 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0:  1357.82      58.34   91603.49    1633.43  218788.42
00:29:53.472 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0:  1323.68      56.88   94150.20    2131.74  233443.32
00:29:53.472 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0:  1346.80      57.87   92746.16    2318.34  249482.65
00:29:53.473 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0:  1395.58      59.97   89688.53    2057.09  264273.36
00:29:53.473 ========================================================
00:29:53.473 Total                                                                  : 13739.44     590.37   90610.99    1573.83  278980.67
00:29:53.473
00:29:53.473 [2024-11-17 02:50:01.731273] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000015980 is same with the state(6) to be set
00:29:53.473 [2024-11-17 02:50:01.731425] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000018180 is same with the state(6) to be set
00:29:53.473 [2024-11-17 02:50:01.731510] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016d80 is same with the state(6) to be set
00:29:53.473 [2024-11-17 02:50:01.731593] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000017780 is same with the state(6) to be set
00:29:53.473 [2024-11-17 02:50:01.731674] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000015e80 is same with the state(6) to be set
00:29:53.473 [2024-11-17 02:50:01.731755] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000017280 is same with the state(6) to be set
00:29:53.473 [2024-11-17 02:50:01.731837] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000017c80 is same with the state(6) to be set
00:29:53.473 [2024-11-17 02:50:01.731918] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016880 is same with the state(6) to be set
00:29:53.473 [2024-11-17 02:50:01.732000] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016380 is same with the state(6) to be set
00:29:53.473 [2024-11-17 02:50:01.732111] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000018680 is same with the state(6) to be set
00:29:53.473 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:29:56.007 02:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:29:56.942 02:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 3058776
00:29:56.942 02:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0
00:29:56.942 02:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3058776
00:29:56.942 02:50:05
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait 00:29:56.942 02:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:56.942 02:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:29:56.942 02:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:56.942 02:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 3058776 00:29:56.942 02:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:29:56.942 02:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:56.942 02:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:56.942 02:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:56.942 02:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:29:56.942 02:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:56.942 02:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:56.942 02:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:56.942 02:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:56.942 02:50:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:56.942 02:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:29:56.942 02:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:56.942 02:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:29:56.942 02:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:56.942 02:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:56.942 rmmod nvme_tcp 00:29:56.942 rmmod nvme_fabrics 00:29:56.942 rmmod nvme_keyring 00:29:56.942 02:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:56.942 02:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:29:56.942 02:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:29:56.942 02:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 3058488 ']' 00:29:56.942 02:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 3058488 00:29:56.942 02:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 3058488 ']' 00:29:56.942 02:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 3058488 00:29:56.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3058488) - No such process 00:29:56.942 02:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 3058488 is not 
found' 00:29:56.942 Process with pid 3058488 is not found 00:29:56.942 02:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:56.942 02:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:56.942 02:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:56.942 02:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:29:56.942 02:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:29:56.942 02:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:56.942 02:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:29:56.942 02:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:56.942 02:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:56.942 02:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:56.942 02:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:56.942 02:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:59.472 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:59.472 00:29:59.472 real 0m13.328s 00:29:59.472 user 0m36.645s 00:29:59.472 sys 0m5.158s 00:29:59.472 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:29:59.472 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:59.472 ************************************ 00:29:59.472 END TEST nvmf_shutdown_tc4 00:29:59.472 ************************************ 00:29:59.472 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:29:59.472 00:29:59.472 real 0m55.322s 00:29:59.472 user 2m50.921s 00:29:59.472 sys 0m13.395s 00:29:59.472 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:59.472 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:59.472 ************************************ 00:29:59.472 END TEST nvmf_shutdown 00:29:59.472 ************************************ 00:29:59.472 02:50:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:29:59.472 02:50:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:59.472 02:50:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:59.472 02:50:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:59.472 ************************************ 00:29:59.472 START TEST nvmf_nsid 00:29:59.472 ************************************ 00:29:59.472 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:29:59.473 * Looking for test storage... 
00:29:59.473 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:59.473 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:59.473 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:29:59.473 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:59.473 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:59.473 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:59.473 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:59.473 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:59.473 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:29:59.473 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:29:59.473 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:29:59.473 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:29:59.473 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:29:59.473 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:29:59.473 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:29:59.473 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:59.473 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:29:59.473 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:29:59.473 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:59.473 
02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:59.473 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:29:59.473 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:29:59.473 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:59.473 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:29:59.473 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:29:59.473 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:29:59.473 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:29:59.473 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:59.473 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:29:59.473 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:29:59.473 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:59.473 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:59.473 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:29:59.473 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:59.473 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:59.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:59.473 --rc genhtml_branch_coverage=1 00:29:59.473 --rc genhtml_function_coverage=1 00:29:59.473 --rc genhtml_legend=1 00:29:59.473 --rc geninfo_all_blocks=1 00:29:59.473 --rc 
geninfo_unexecuted_blocks=1 00:29:59.473 00:29:59.473 ' 00:29:59.473 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:59.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:59.473 --rc genhtml_branch_coverage=1 00:29:59.473 --rc genhtml_function_coverage=1 00:29:59.473 --rc genhtml_legend=1 00:29:59.473 --rc geninfo_all_blocks=1 00:29:59.473 --rc geninfo_unexecuted_blocks=1 00:29:59.473 00:29:59.473 ' 00:29:59.473 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:59.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:59.473 --rc genhtml_branch_coverage=1 00:29:59.473 --rc genhtml_function_coverage=1 00:29:59.473 --rc genhtml_legend=1 00:29:59.473 --rc geninfo_all_blocks=1 00:29:59.473 --rc geninfo_unexecuted_blocks=1 00:29:59.473 00:29:59.473 ' 00:29:59.473 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:59.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:59.473 --rc genhtml_branch_coverage=1 00:29:59.473 --rc genhtml_function_coverage=1 00:29:59.473 --rc genhtml_legend=1 00:29:59.473 --rc geninfo_all_blocks=1 00:29:59.473 --rc geninfo_unexecuted_blocks=1 00:29:59.473 00:29:59.473 ' 00:29:59.473 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:59.473 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:29:59.473 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:59.473 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:59.473 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:59.473 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:29:59.473 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:59.473 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:59.473 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:59.473 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:59.473 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:59.473 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:59.473 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:59.473 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:59.473 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:59.473 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:59.473 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:59.473 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:59.473 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:59.473 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:29:59.473 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:59.473 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:59.473 02:50:07 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:59.473 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.473 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.473 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.473 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:29:59.473 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.473 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:29:59.473 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:59.473 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:59.473 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:59.473 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:59.474 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:59.474 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:59.474 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:59.474 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:59.474 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:59.474 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:59.474 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:29:59.474 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:29:59.474 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:29:59.474 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:29:59.474 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:29:59.474 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:29:59.474 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:59.474 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:59.474 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:59.474 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:59.474 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:59.474 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:59.474 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 15> /dev/null' 00:29:59.474 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:59.474 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:59.474 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:59.474 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:29:59.474 02:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:01.373 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:01.373 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:30:01.373 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:01.373 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:01.373 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:01.373 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:01.373 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:01.373 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:30:01.373 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:01.373 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:30:01.373 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:30:01.373 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:30:01.373 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:30:01.373 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@322 -- # mlx=() 00:30:01.373 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:30:01.373 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:01.373 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:01.373 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:01.373 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:01.373 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:01.373 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:01.373 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:01.373 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:01.374 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:01.374 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:01.374 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:01.374 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:01.374 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:01.374 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:01.374 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == 
mlx5 ]] 00:30:01.374 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:01.374 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:01.374 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:01.374 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:01.374 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:01.374 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:01.374 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:01.374 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:01.374 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:01.374 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:01.374 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:01.374 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:01.374 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:01.374 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:01.374 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:01.374 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:01.374 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:01.374 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:01.374 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:30:01.374 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:01.374 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:01.374 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:01.374 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:01.374 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:01.374 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:01.374 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:01.374 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:01.374 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:01.374 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:01.374 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:01.374 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:01.374 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:01.374 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:01.374 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:01.374 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:01.374 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:01.374 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:30:01.374 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:01.374 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:01.374 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:01.374 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:01.374 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:01.374 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:01.374 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:30:01.374 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:01.374 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:01.374 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:01.374 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:01.374 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:01.374 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:01.374 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:01.374 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:01.374 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:01.374 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:01.374 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:01.374 02:50:09 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:01.374 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:01.374 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:01.374 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:01.374 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:01.374 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:01.374 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:01.374 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:01.374 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:01.374 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:01.374 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:01.374 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:01.374 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:01.374 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:01.374 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:01.374 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:30:01.374 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.308 ms 00:30:01.374 00:30:01.374 --- 10.0.0.2 ping statistics --- 00:30:01.374 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:01.374 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:30:01.374 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:01.374 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:01.374 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.079 ms 00:30:01.374 00:30:01.374 --- 10.0.0.1 ping statistics --- 00:30:01.374 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:01.374 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:30:01.374 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:01.374 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:30:01.374 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:01.375 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:01.375 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:01.375 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:01.375 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:01.375 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:01.375 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:01.375 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:30:01.375 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:01.375 02:50:09 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:01.375 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:01.375 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=3061692 00:30:01.375 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:30:01.375 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 3061692 00:30:01.375 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 3061692 ']' 00:30:01.375 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:01.375 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:01.375 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:01.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:01.375 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:01.375 02:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:01.375 [2024-11-17 02:50:09.800472] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:30:01.375 [2024-11-17 02:50:09.800616] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:01.633 [2024-11-17 02:50:09.944260] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:01.633 [2024-11-17 02:50:10.076351] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:01.633 [2024-11-17 02:50:10.076453] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:01.633 [2024-11-17 02:50:10.076474] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:01.633 [2024-11-17 02:50:10.076510] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:01.633 [2024-11-17 02:50:10.076526] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:01.633 [2024-11-17 02:50:10.077960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:02.567 02:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:02.567 02:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:30:02.567 02:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:02.567 02:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:02.567 02:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:02.567 02:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:02.567 02:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:30:02.567 02:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=3061840 00:30:02.567 02:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:30:02.567 02:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:30:02.567 02:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:30:02.567 02:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:30:02.567 02:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:02.567 02:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:02.567 02:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:02.567 02:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:02.567 
02:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:02.567 02:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:02.567 02:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:02.567 02:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:02.567 02:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:02.567 02:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:30:02.567 02:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:30:02.567 02:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=61c28ec7-c009-4b15-9c4f-1f9d7997be36 00:30:02.567 02:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:30:02.567 02:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=9efd26e3-b048-414e-840b-4c7aea77da72 00:30:02.567 02:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:30:02.567 02:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=e73069b6-b2a9-4dba-b297-dc4091c581d4 00:30:02.567 02:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:30:02.567 02:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.567 02:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:02.567 null0 00:30:02.567 null1 00:30:02.567 null2 00:30:02.567 [2024-11-17 02:50:10.810062] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:02.567 [2024-11-17 02:50:10.834402] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:02.567 02:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:30:02.567 02:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 3061840 /var/tmp/tgt2.sock 00:30:02.567 02:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 3061840 ']' 00:30:02.567 02:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:30:02.567 02:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:02.567 02:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:30:02.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:30:02.567 02:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:02.567 02:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:02.567 [2024-11-17 02:50:10.881324] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:30:02.567 [2024-11-17 02:50:10.881482] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3061840 ] 00:30:02.826 [2024-11-17 02:50:11.034038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:02.826 [2024-11-17 02:50:11.161251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:03.760 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:03.760 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:30:03.760 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:30:04.387 [2024-11-17 02:50:12.504921] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:04.387 [2024-11-17 02:50:12.521373] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:30:04.387 nvme0n1 nvme0n2 00:30:04.387 nvme1n1 00:30:04.387 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:30:04.387 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:30:04.387 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:05.031 02:50:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:30:05.031 02:50:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:30:05.032 02:50:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ 
nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:30:05.032 02:50:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:30:05.032 02:50:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:30:05.032 02:50:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:30:05.032 02:50:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:30:05.032 02:50:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:30:05.032 02:50:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:30:05.032 02:50:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:30:05.032 02:50:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:30:05.032 02:50:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:30:05.032 02:50:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:30:05.967 02:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:30:05.967 02:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:30:05.967 02:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:30:05.967 02:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:30:05.967 02:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:30:05.967 02:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 61c28ec7-c009-4b15-9c4f-1f9d7997be36 00:30:05.967 02:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:30:05.967 02:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:30:05.967 02:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:30:05.967 02:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:30:05.967 02:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:30:05.967 02:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=61c28ec7c0094b159c4f1f9d7997be36 00:30:05.967 02:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 61C28EC7C0094B159C4F1F9D7997BE36 00:30:05.967 02:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 61C28EC7C0094B159C4F1F9D7997BE36 == \6\1\C\2\8\E\C\7\C\0\0\9\4\B\1\5\9\C\4\F\1\F\9\D\7\9\9\7\B\E\3\6 ]] 00:30:05.967 02:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:30:05.967 02:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:30:05.967 02:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:30:05.967 02:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:30:05.967 02:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:30:05.967 02:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:30:05.967 02:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:30:05.967 02:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 9efd26e3-b048-414e-840b-4c7aea77da72 00:30:05.967 02:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:30:05.967 02:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:30:05.967 02:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:30:05.967 02:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:30:05.967 02:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:30:05.967 02:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=9efd26e3b048414e840b4c7aea77da72 00:30:05.967 02:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 9EFD26E3B048414E840B4C7AEA77DA72 00:30:05.967 02:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 9EFD26E3B048414E840B4C7AEA77DA72 == \9\E\F\D\2\6\E\3\B\0\4\8\4\1\4\E\8\4\0\B\4\C\7\A\E\A\7\7\D\A\7\2 ]] 00:30:05.967 02:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:30:05.967 02:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:30:05.967 02:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:30:05.967 02:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:30:05.967 02:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:30:05.967 02:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:30:05.967 02:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:30:05.967 02:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid e73069b6-b2a9-4dba-b297-dc4091c581d4 00:30:05.967 02:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:30:05.967 02:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:30:05.967 02:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:30:05.967 02:50:14 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:30:05.967 02:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:30:05.967 02:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=e73069b6b2a94dbab297dc4091c581d4 00:30:05.967 02:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo E73069B6B2A94DBAB297DC4091C581D4 00:30:05.967 02:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ E73069B6B2A94DBAB297DC4091C581D4 == \E\7\3\0\6\9\B\6\B\2\A\9\4\D\B\A\B\2\9\7\D\C\4\0\9\1\C\5\8\1\D\4 ]] 00:30:05.967 02:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:30:06.225 02:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:30:06.225 02:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:30:06.225 02:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 3061840 00:30:06.225 02:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 3061840 ']' 00:30:06.225 02:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 3061840 00:30:06.225 02:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:30:06.225 02:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:06.225 02:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3061840 00:30:06.225 02:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:06.225 02:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:06.225 02:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 3061840' 00:30:06.225 killing process with pid 3061840 00:30:06.225 02:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 3061840 00:30:06.225 02:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 3061840 00:30:08.755 02:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:30:08.755 02:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:08.755 02:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:30:08.755 02:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:08.755 02:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:30:08.755 02:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:08.755 02:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:08.755 rmmod nvme_tcp 00:30:08.755 rmmod nvme_fabrics 00:30:08.755 rmmod nvme_keyring 00:30:08.755 02:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:08.755 02:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:30:08.755 02:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:30:08.755 02:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 3061692 ']' 00:30:08.755 02:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 3061692 00:30:08.755 02:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 3061692 ']' 00:30:08.755 02:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 3061692 00:30:08.755 02:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:30:08.755 02:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:08.755 02:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3061692 00:30:08.755 02:50:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:08.755 02:50:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:08.755 02:50:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3061692' 00:30:08.755 killing process with pid 3061692 00:30:08.755 02:50:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 3061692 00:30:08.755 02:50:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 3061692 00:30:09.689 02:50:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:09.689 02:50:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:09.689 02:50:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:09.689 02:50:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:30:09.689 02:50:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:30:09.689 02:50:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:30:09.690 02:50:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:09.690 02:50:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:09.690 02:50:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:09.690 02:50:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:09.690 02:50:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:09.690 02:50:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:12.226 02:50:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:12.226 00:30:12.226 real 0m12.671s 00:30:12.226 user 0m15.580s 00:30:12.226 sys 0m3.009s 00:30:12.226 02:50:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:12.226 02:50:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:12.226 ************************************ 00:30:12.226 END TEST nvmf_nsid 00:30:12.226 ************************************ 00:30:12.226 02:50:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:30:12.226 00:30:12.226 real 18m37.368s 00:30:12.226 user 51m18.177s 00:30:12.226 sys 3m32.872s 00:30:12.226 02:50:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:12.226 02:50:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:30:12.226 ************************************ 00:30:12.226 END TEST nvmf_target_extra 00:30:12.226 ************************************ 00:30:12.226 02:50:20 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:30:12.226 02:50:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:12.226 02:50:20 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:12.226 02:50:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:12.226 ************************************ 00:30:12.226 START TEST nvmf_host 00:30:12.226 ************************************ 00:30:12.226 02:50:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 
00:30:12.226 * Looking for test storage... 00:30:12.226 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:30:12.226 02:50:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:12.226 02:50:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:30:12.226 02:50:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:12.226 02:50:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:12.226 02:50:20 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:12.226 02:50:20 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:12.226 02:50:20 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:12.226 02:50:20 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:30:12.226 02:50:20 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:30:12.226 02:50:20 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:30:12.226 02:50:20 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:30:12.226 02:50:20 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:30:12.226 02:50:20 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:30:12.226 02:50:20 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:30:12.226 02:50:20 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:12.226 02:50:20 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:30:12.226 02:50:20 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:30:12.226 02:50:20 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:12.226 02:50:20 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:12.226 02:50:20 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:30:12.226 02:50:20 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:30:12.226 02:50:20 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:12.226 02:50:20 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:30:12.226 02:50:20 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:30:12.226 02:50:20 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:30:12.226 02:50:20 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:30:12.226 02:50:20 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:12.226 02:50:20 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:30:12.226 02:50:20 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:30:12.226 02:50:20 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:12.226 02:50:20 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:12.226 02:50:20 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:30:12.226 02:50:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:12.226 02:50:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:12.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:12.226 --rc genhtml_branch_coverage=1 00:30:12.226 --rc genhtml_function_coverage=1 00:30:12.226 --rc genhtml_legend=1 00:30:12.226 --rc geninfo_all_blocks=1 00:30:12.226 --rc geninfo_unexecuted_blocks=1 00:30:12.226 00:30:12.226 ' 00:30:12.226 02:50:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:12.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:12.226 --rc genhtml_branch_coverage=1 00:30:12.226 --rc genhtml_function_coverage=1 00:30:12.226 --rc genhtml_legend=1 00:30:12.226 --rc 
geninfo_all_blocks=1 00:30:12.226 --rc geninfo_unexecuted_blocks=1 00:30:12.226 00:30:12.226 ' 00:30:12.226 02:50:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:12.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:12.226 --rc genhtml_branch_coverage=1 00:30:12.226 --rc genhtml_function_coverage=1 00:30:12.226 --rc genhtml_legend=1 00:30:12.226 --rc geninfo_all_blocks=1 00:30:12.226 --rc geninfo_unexecuted_blocks=1 00:30:12.226 00:30:12.226 ' 00:30:12.226 02:50:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:12.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:12.226 --rc genhtml_branch_coverage=1 00:30:12.226 --rc genhtml_function_coverage=1 00:30:12.226 --rc genhtml_legend=1 00:30:12.226 --rc geninfo_all_blocks=1 00:30:12.226 --rc geninfo_unexecuted_blocks=1 00:30:12.226 00:30:12.226 ' 00:30:12.226 02:50:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:12.226 02:50:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:30:12.226 02:50:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:12.226 02:50:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:12.226 02:50:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:12.226 02:50:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:12.226 02:50:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:12.226 02:50:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:12.226 02:50:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:12.226 02:50:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:12.226 02:50:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:12.226 02:50:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:30:12.226 02:50:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:12.226 02:50:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:12.226 02:50:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:12.226 02:50:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:12.226 02:50:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:12.226 02:50:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:12.226 02:50:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:12.226 02:50:20 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:30:12.226 02:50:20 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:12.227 02:50:20 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:12.227 02:50:20 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:12.227 02:50:20 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:12.227 02:50:20 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:12.227 02:50:20 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:12.227 02:50:20 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:30:12.227 02:50:20 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:12.227 02:50:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:30:12.227 02:50:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:12.227 02:50:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:12.227 02:50:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:12.227 02:50:20 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:12.227 02:50:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:12.227 02:50:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:12.227 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:12.227 02:50:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:12.227 02:50:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:12.227 02:50:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:12.227 02:50:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:30:12.227 02:50:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:30:12.227 02:50:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:30:12.227 02:50:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:30:12.227 02:50:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:12.227 02:50:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:12.227 02:50:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:12.227 ************************************ 00:30:12.227 START TEST nvmf_multicontroller 00:30:12.227 ************************************ 00:30:12.227 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:30:12.227 * Looking for test storage... 
00:30:12.227 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:12.227 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:12.227 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:30:12.227 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:12.227 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:12.227 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:12.227 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:12.227 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:12.227 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:30:12.227 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:30:12.227 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:30:12.227 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:30:12.227 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:30:12.227 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:30:12.227 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:30:12.227 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:12.227 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:30:12.227 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:30:12.227 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:30:12.227 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:12.227 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:30:12.227 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:30:12.227 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:12.227 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:30:12.227 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:30:12.227 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:30:12.227 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:30:12.227 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:12.227 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:30:12.227 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:30:12.227 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:12.227 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:12.227 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:30:12.227 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:12.227 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:12.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:12.227 --rc genhtml_branch_coverage=1 00:30:12.227 --rc genhtml_function_coverage=1 
00:30:12.227 --rc genhtml_legend=1 00:30:12.227 --rc geninfo_all_blocks=1 00:30:12.227 --rc geninfo_unexecuted_blocks=1 00:30:12.227 00:30:12.227 ' 00:30:12.227 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:12.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:12.227 --rc genhtml_branch_coverage=1 00:30:12.227 --rc genhtml_function_coverage=1 00:30:12.227 --rc genhtml_legend=1 00:30:12.227 --rc geninfo_all_blocks=1 00:30:12.227 --rc geninfo_unexecuted_blocks=1 00:30:12.227 00:30:12.227 ' 00:30:12.227 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:12.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:12.227 --rc genhtml_branch_coverage=1 00:30:12.227 --rc genhtml_function_coverage=1 00:30:12.227 --rc genhtml_legend=1 00:30:12.227 --rc geninfo_all_blocks=1 00:30:12.227 --rc geninfo_unexecuted_blocks=1 00:30:12.227 00:30:12.227 ' 00:30:12.227 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:12.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:12.227 --rc genhtml_branch_coverage=1 00:30:12.227 --rc genhtml_function_coverage=1 00:30:12.227 --rc genhtml_legend=1 00:30:12.227 --rc geninfo_all_blocks=1 00:30:12.227 --rc geninfo_unexecuted_blocks=1 00:30:12.227 00:30:12.227 ' 00:30:12.227 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:12.227 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:30:12.227 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:12.227 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:12.227 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:30:12.227 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:12.227 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:12.227 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:12.227 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:12.227 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:12.227 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:12.227 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:12.227 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:12.227 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:12.227 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:12.227 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:12.227 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:12.227 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:12.227 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:12.227 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:30:12.228 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:30:12.228 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:12.228 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:12.228 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:12.228 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:12.228 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:12.228 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:30:12.228 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:12.228 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:30:12.228 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:12.228 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:12.228 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:12.228 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:12.228 02:50:20 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:12.228 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:12.228 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:12.228 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:12.228 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:12.228 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:12.228 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:12.228 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:12.228 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:30:12.228 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:30:12.228 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:12.228 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:30:12.228 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:30:12.228 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:12.228 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:12.228 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:12.228 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:12.228 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:30:12.228 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:12.228 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:12.228 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:12.228 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:12.228 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:12.228 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:30:12.228 02:50:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:14.130 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:14.130 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:30:14.130 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:14.130 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:14.130 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:14.130 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:14.130 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:14.130 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:30:14.130 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:14.130 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:30:14.130 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:30:14.130 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:30:14.130 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:30:14.130 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:30:14.130 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:30:14.130 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:14.130 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:14.130 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:14.130 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:14.130 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:14.130 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:14.130 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:14.130 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:14.130 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:14.130 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:14.130 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:14.130 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:14.130 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:14.130 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:14.130 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:14.130 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:14.130 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:14.130 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:14.130 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:14.130 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:14.130 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:14.130 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:14.130 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:14.131 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:14.131 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:14.131 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:14.131 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:14.131 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:14.131 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:14.131 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:14.131 02:50:22 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:14.131 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:14.131 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:14.131 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:14.131 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:14.131 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:14.131 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:14.131 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:14.131 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:14.131 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:14.131 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:14.131 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:14.131 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:14.131 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:14.131 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:14.131 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:14.131 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:14.131 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:30:14.131 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:14.131 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:14.131 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:14.131 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:14.131 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:14.131 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:14.131 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:14.131 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:14.131 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:14.131 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:14.131 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:30:14.131 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:14.131 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:14.131 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:14.131 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:14.131 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:14.131 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:14.131 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:14.131 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:14.131 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:14.131 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:14.131 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:14.131 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:14.131 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:14.131 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:14.131 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:14.131 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:14.131 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:14.131 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:14.390 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:14.390 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:14.390 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:14.390 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:14.390 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:14.390 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:14.390 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:14.390 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:14.390 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:14.390 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.303 ms 00:30:14.390 00:30:14.390 --- 10.0.0.2 ping statistics --- 00:30:14.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:14.390 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:30:14.390 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:14.390 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:14.390 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:30:14.390 00:30:14.390 --- 10.0.0.1 ping statistics --- 00:30:14.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:14.390 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:30:14.390 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:14.391 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:30:14.391 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:14.391 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:14.391 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:14.391 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:14.391 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:14.391 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:14.391 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:14.391 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:30:14.391 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:14.391 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:14.391 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:14.391 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=3064672 00:30:14.391 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:14.391 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 3064672 00:30:14.391 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 3064672 ']' 00:30:14.391 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:14.391 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:14.391 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:14.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:14.391 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:14.391 02:50:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:14.391 [2024-11-17 02:50:22.805743] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:30:14.391 [2024-11-17 02:50:22.805908] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:14.649 [2024-11-17 02:50:22.977111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:14.907 [2024-11-17 02:50:23.120114] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:14.907 [2024-11-17 02:50:23.120187] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:30:14.907 [2024-11-17 02:50:23.120212] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:14.907 [2024-11-17 02:50:23.120250] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:14.907 [2024-11-17 02:50:23.120268] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:14.907 [2024-11-17 02:50:23.122833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:14.907 [2024-11-17 02:50:23.122931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:14.907 [2024-11-17 02:50:23.122935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:15.473 02:50:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:15.473 02:50:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:30:15.473 02:50:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:15.473 02:50:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:15.473 02:50:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:15.473 02:50:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:15.473 02:50:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:15.473 02:50:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.473 02:50:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:15.473 [2024-11-17 02:50:23.827154] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:15.473 02:50:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.473 02:50:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:15.473 02:50:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.473 02:50:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:15.731 Malloc0 00:30:15.731 02:50:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.731 02:50:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:15.731 02:50:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.731 02:50:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:15.731 02:50:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.731 02:50:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:15.732 02:50:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.732 02:50:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:15.732 02:50:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.732 02:50:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:15.732 02:50:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.732 02:50:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:15.732 [2024-11-17 
02:50:23.956273] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:15.732 02:50:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.732 02:50:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:15.732 02:50:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.732 02:50:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:15.732 [2024-11-17 02:50:23.964052] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:15.732 02:50:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.732 02:50:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:30:15.732 02:50:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.732 02:50:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:15.732 Malloc1 00:30:15.732 02:50:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.732 02:50:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:30:15.732 02:50:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.732 02:50:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:15.732 02:50:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.732 02:50:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:30:15.732 02:50:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.732 02:50:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:15.732 02:50:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.732 02:50:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:15.732 02:50:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.732 02:50:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:15.732 02:50:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.732 02:50:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:30:15.732 02:50:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.732 02:50:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:15.732 02:50:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.732 02:50:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3064949 00:30:15.732 02:50:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:15.732 02:50:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w 
write -t 1 -f 00:30:15.732 02:50:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 3064949 /var/tmp/bdevperf.sock 00:30:15.732 02:50:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 3064949 ']' 00:30:15.732 02:50:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:15.732 02:50:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:15.732 02:50:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:15.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:15.732 02:50:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:15.732 02:50:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:17.108 02:50:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:17.108 02:50:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:30:17.108 02:50:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:30:17.108 02:50:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.108 02:50:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:17.108 NVMe0n1 00:30:17.108 02:50:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.108 02:50:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:17.108 02:50:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:30:17.108 02:50:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.108 02:50:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:17.108 02:50:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.108 1 00:30:17.108 02:50:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:30:17.108 02:50:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:30:17.108 02:50:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:30:17.108 02:50:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:17.108 02:50:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:17.108 02:50:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:17.108 02:50:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:17.108 02:50:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:30:17.108 02:50:25 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.108 02:50:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:17.108 request: 00:30:17.108 { 00:30:17.108 "name": "NVMe0", 00:30:17.108 "trtype": "tcp", 00:30:17.108 "traddr": "10.0.0.2", 00:30:17.108 "adrfam": "ipv4", 00:30:17.108 "trsvcid": "4420", 00:30:17.108 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:17.108 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:30:17.108 "hostaddr": "10.0.0.1", 00:30:17.108 "prchk_reftag": false, 00:30:17.108 "prchk_guard": false, 00:30:17.108 "hdgst": false, 00:30:17.108 "ddgst": false, 00:30:17.108 "allow_unrecognized_csi": false, 00:30:17.108 "method": "bdev_nvme_attach_controller", 00:30:17.108 "req_id": 1 00:30:17.108 } 00:30:17.108 Got JSON-RPC error response 00:30:17.108 response: 00:30:17.108 { 00:30:17.108 "code": -114, 00:30:17.108 "message": "A controller named NVMe0 already exists with the specified network path" 00:30:17.108 } 00:30:17.108 02:50:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:17.108 02:50:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:30:17.108 02:50:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:17.108 02:50:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:17.108 02:50:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:17.108 02:50:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:30:17.108 02:50:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:30:17.108 02:50:25 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:30:17.108 02:50:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:17.108 02:50:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:17.108 02:50:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:17.108 02:50:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:17.108 02:50:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:30:17.108 02:50:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.108 02:50:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:17.108 request: 00:30:17.108 { 00:30:17.108 "name": "NVMe0", 00:30:17.108 "trtype": "tcp", 00:30:17.108 "traddr": "10.0.0.2", 00:30:17.108 "adrfam": "ipv4", 00:30:17.108 "trsvcid": "4420", 00:30:17.108 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:17.108 "hostaddr": "10.0.0.1", 00:30:17.108 "prchk_reftag": false, 00:30:17.108 "prchk_guard": false, 00:30:17.108 "hdgst": false, 00:30:17.108 "ddgst": false, 00:30:17.108 "allow_unrecognized_csi": false, 00:30:17.108 "method": "bdev_nvme_attach_controller", 00:30:17.108 "req_id": 1 00:30:17.108 } 00:30:17.108 Got JSON-RPC error response 00:30:17.108 response: 00:30:17.108 { 00:30:17.108 "code": -114, 00:30:17.108 "message": "A controller named NVMe0 already exists with the specified network path" 00:30:17.108 } 00:30:17.108 02:50:25 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:17.108 02:50:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:30:17.108 02:50:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:17.108 02:50:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:17.108 02:50:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:17.108 02:50:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:30:17.108 02:50:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:30:17.108 02:50:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:30:17.108 02:50:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:17.108 02:50:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:17.108 02:50:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:17.108 02:50:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:17.108 02:50:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:30:17.108 02:50:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.108 02:50:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:17.108 request: 00:30:17.108 { 00:30:17.108 "name": "NVMe0", 00:30:17.108 "trtype": "tcp", 00:30:17.108 "traddr": "10.0.0.2", 00:30:17.108 "adrfam": "ipv4", 00:30:17.108 "trsvcid": "4420", 00:30:17.108 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:17.108 "hostaddr": "10.0.0.1", 00:30:17.108 "prchk_reftag": false, 00:30:17.108 "prchk_guard": false, 00:30:17.108 "hdgst": false, 00:30:17.108 "ddgst": false, 00:30:17.108 "multipath": "disable", 00:30:17.108 "allow_unrecognized_csi": false, 00:30:17.108 "method": "bdev_nvme_attach_controller", 00:30:17.108 "req_id": 1 00:30:17.108 } 00:30:17.108 Got JSON-RPC error response 00:30:17.108 response: 00:30:17.108 { 00:30:17.108 "code": -114, 00:30:17.108 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:30:17.108 } 00:30:17.108 02:50:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:17.108 02:50:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:30:17.108 02:50:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:17.108 02:50:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:17.108 02:50:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:17.108 02:50:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:30:17.108 02:50:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:30:17.109 02:50:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:30:17.109 02:50:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:17.109 02:50:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:17.109 02:50:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:17.109 02:50:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:17.109 02:50:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:30:17.109 02:50:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.109 02:50:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:17.109 request: 00:30:17.109 { 00:30:17.109 "name": "NVMe0", 00:30:17.109 "trtype": "tcp", 00:30:17.109 "traddr": "10.0.0.2", 00:30:17.109 "adrfam": "ipv4", 00:30:17.109 "trsvcid": "4420", 00:30:17.109 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:17.109 "hostaddr": "10.0.0.1", 00:30:17.109 "prchk_reftag": false, 00:30:17.109 "prchk_guard": false, 00:30:17.109 "hdgst": false, 00:30:17.109 "ddgst": false, 00:30:17.109 "multipath": "failover", 00:30:17.109 "allow_unrecognized_csi": false, 00:30:17.109 "method": "bdev_nvme_attach_controller", 00:30:17.109 "req_id": 1 00:30:17.109 } 00:30:17.109 Got JSON-RPC error response 00:30:17.109 response: 00:30:17.109 { 00:30:17.109 "code": -114, 00:30:17.109 "message": "A controller named NVMe0 already exists with the specified network path" 00:30:17.109 } 00:30:17.109 02:50:25 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:17.109 02:50:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:30:17.109 02:50:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:17.109 02:50:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:17.109 02:50:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:17.109 02:50:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:17.109 02:50:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.109 02:50:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:17.109 NVMe0n1 00:30:17.109 02:50:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.109 02:50:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:17.109 02:50:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.109 02:50:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:17.109 02:50:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.109 02:50:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:30:17.109 02:50:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.109 02:50:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:17.109 00:30:17.109 02:50:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.109 02:50:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:17.109 02:50:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:30:17.109 02:50:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.109 02:50:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:17.109 02:50:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.109 02:50:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:30:17.109 02:50:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:18.484 { 00:30:18.484 "results": [ 00:30:18.484 { 00:30:18.484 "job": "NVMe0n1", 00:30:18.484 "core_mask": "0x1", 00:30:18.484 "workload": "write", 00:30:18.484 "status": "finished", 00:30:18.484 "queue_depth": 128, 00:30:18.484 "io_size": 4096, 00:30:18.484 "runtime": 1.007791, 00:30:18.484 "iops": 12350.775111109348, 00:30:18.484 "mibps": 48.24521527777089, 00:30:18.484 "io_failed": 0, 00:30:18.484 "io_timeout": 0, 00:30:18.484 "avg_latency_us": 10344.677871270484, 00:30:18.484 "min_latency_us": 8835.223703703703, 00:30:18.484 "max_latency_us": 22330.785185185185 00:30:18.484 } 00:30:18.484 ], 00:30:18.484 "core_count": 1 00:30:18.484 } 00:30:18.484 02:50:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:30:18.484 02:50:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.484 02:50:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:18.484 02:50:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:18.484 02:50:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:30:18.484 02:50:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 3064949 00:30:18.484 02:50:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 3064949 ']' 00:30:18.484 02:50:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 3064949 00:30:18.484 02:50:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:30:18.484 02:50:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:18.484 02:50:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3064949 00:30:18.484 02:50:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:18.484 02:50:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:18.484 02:50:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3064949' 00:30:18.484 killing process with pid 3064949 00:30:18.484 02:50:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 3064949 00:30:18.484 02:50:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 3064949 00:30:19.418 02:50:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:19.419 02:50:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:19.419 02:50:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:19.419 02:50:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:19.419 02:50:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:19.419 02:50:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:19.419 02:50:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:19.419 02:50:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:19.419 02:50:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:30:19.419 02:50:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:19.419 02:50:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:30:19.419 02:50:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:30:19.419 02:50:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:30:19.419 02:50:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:30:19.419 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:30:19.419 [2024-11-17 02:50:24.158292] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:30:19.419 [2024-11-17 02:50:24.158447] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3064949 ] 00:30:19.419 [2024-11-17 02:50:24.298190] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:19.419 [2024-11-17 02:50:24.427742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:19.419 [2024-11-17 02:50:25.540727] bdev.c:4691:bdev_name_add: *ERROR*: Bdev name 8e0dfc62-ff70-45e6-8b91-5788842bcb5d already exists 00:30:19.419 [2024-11-17 02:50:25.540786] bdev.c:7842:bdev_register: *ERROR*: Unable to add uuid:8e0dfc62-ff70-45e6-8b91-5788842bcb5d alias for bdev NVMe1n1 00:30:19.419 [2024-11-17 02:50:25.540834] bdev_nvme.c:4658:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:30:19.419 Running I/O for 1 seconds... 00:30:19.419 12319.00 IOPS, 48.12 MiB/s 00:30:19.419 Latency(us) 00:30:19.419 [2024-11-17T01:50:27.879Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:19.419 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:30:19.419 NVMe0n1 : 1.01 12350.78 48.25 0.00 0.00 10344.68 8835.22 22330.79 00:30:19.419 [2024-11-17T01:50:27.879Z] =================================================================================================================== 00:30:19.419 [2024-11-17T01:50:27.879Z] Total : 12350.78 48.25 0.00 0.00 10344.68 8835.22 22330.79 00:30:19.419 Received shutdown signal, test time was about 1.000000 seconds 00:30:19.419 00:30:19.419 Latency(us) 00:30:19.419 [2024-11-17T01:50:27.879Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:19.419 [2024-11-17T01:50:27.879Z] =================================================================================================================== 00:30:19.419 [2024-11-17T01:50:27.879Z] Total : 0.00 0.00 
0.00 0.00 0.00 0.00 0.00 00:30:19.419 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:30:19.419 02:50:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:19.419 02:50:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:30:19.419 02:50:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:30:19.419 02:50:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:19.419 02:50:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:30:19.419 02:50:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:19.419 02:50:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:30:19.419 02:50:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:19.419 02:50:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:19.419 rmmod nvme_tcp 00:30:19.419 rmmod nvme_fabrics 00:30:19.419 rmmod nvme_keyring 00:30:19.419 02:50:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:19.419 02:50:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:30:19.419 02:50:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:30:19.419 02:50:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 3064672 ']' 00:30:19.419 02:50:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 3064672 00:30:19.419 02:50:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 3064672 ']' 00:30:19.419 02:50:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 3064672 
00:30:19.419 02:50:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:30:19.419 02:50:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:19.419 02:50:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3064672 00:30:19.419 02:50:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:19.419 02:50:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:19.419 02:50:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3064672' 00:30:19.419 killing process with pid 3064672 00:30:19.419 02:50:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 3064672 00:30:19.419 02:50:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 3064672 00:30:20.794 02:50:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:20.794 02:50:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:20.794 02:50:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:20.794 02:50:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:30:20.794 02:50:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:30:20.794 02:50:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:20.794 02:50:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:30:20.794 02:50:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:20.794 02:50:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:30:20.794 02:50:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:20.794 02:50:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:20.794 02:50:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:23.328 02:50:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:23.328 00:30:23.328 real 0m10.769s 00:30:23.328 user 0m21.982s 00:30:23.328 sys 0m2.773s 00:30:23.328 02:50:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:23.328 02:50:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:23.328 ************************************ 00:30:23.328 END TEST nvmf_multicontroller 00:30:23.328 ************************************ 00:30:23.328 02:50:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:30:23.328 02:50:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:23.328 02:50:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:23.328 02:50:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:23.328 ************************************ 00:30:23.328 START TEST nvmf_aer 00:30:23.328 ************************************ 00:30:23.328 02:50:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:30:23.328 * Looking for test storage... 
00:30:23.328 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:23.328 02:50:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:23.328 02:50:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:30:23.328 02:50:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:23.328 02:50:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:23.328 02:50:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:23.328 02:50:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:23.328 02:50:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:23.328 02:50:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:30:23.328 02:50:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:30:23.328 02:50:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:30:23.328 02:50:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:30:23.328 02:50:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:30:23.328 02:50:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:30:23.328 02:50:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:30:23.328 02:50:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:23.328 02:50:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:30:23.328 02:50:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:30:23.328 02:50:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:23.328 02:50:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:23.328 02:50:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:30:23.328 02:50:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:30:23.328 02:50:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:23.328 02:50:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:30:23.328 02:50:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:30:23.328 02:50:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:30:23.328 02:50:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:30:23.328 02:50:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:23.328 02:50:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:30:23.329 02:50:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:30:23.329 02:50:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:23.329 02:50:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:23.329 02:50:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:30:23.329 02:50:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:23.329 02:50:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:23.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:23.329 --rc genhtml_branch_coverage=1 00:30:23.329 --rc genhtml_function_coverage=1 00:30:23.329 --rc genhtml_legend=1 00:30:23.329 --rc geninfo_all_blocks=1 00:30:23.329 --rc geninfo_unexecuted_blocks=1 00:30:23.329 00:30:23.329 ' 00:30:23.329 02:50:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:23.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:23.329 --rc 
genhtml_branch_coverage=1 00:30:23.329 --rc genhtml_function_coverage=1 00:30:23.329 --rc genhtml_legend=1 00:30:23.329 --rc geninfo_all_blocks=1 00:30:23.329 --rc geninfo_unexecuted_blocks=1 00:30:23.329 00:30:23.329 ' 00:30:23.329 02:50:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:23.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:23.329 --rc genhtml_branch_coverage=1 00:30:23.329 --rc genhtml_function_coverage=1 00:30:23.329 --rc genhtml_legend=1 00:30:23.329 --rc geninfo_all_blocks=1 00:30:23.329 --rc geninfo_unexecuted_blocks=1 00:30:23.329 00:30:23.329 ' 00:30:23.329 02:50:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:23.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:23.329 --rc genhtml_branch_coverage=1 00:30:23.329 --rc genhtml_function_coverage=1 00:30:23.329 --rc genhtml_legend=1 00:30:23.329 --rc geninfo_all_blocks=1 00:30:23.329 --rc geninfo_unexecuted_blocks=1 00:30:23.329 00:30:23.329 ' 00:30:23.329 02:50:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:23.329 02:50:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:30:23.329 02:50:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:23.329 02:50:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:23.329 02:50:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:23.329 02:50:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:23.329 02:50:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:23.329 02:50:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:23.329 02:50:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:23.329 02:50:31 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:23.329 02:50:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:23.329 02:50:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:23.329 02:50:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:23.329 02:50:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:23.329 02:50:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:23.329 02:50:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:23.329 02:50:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:23.329 02:50:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:23.329 02:50:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:23.329 02:50:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:30:23.329 02:50:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:23.329 02:50:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:23.329 02:50:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:23.329 02:50:31 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:23.329 02:50:31 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:23.329 02:50:31 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:23.329 02:50:31 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:30:23.329 02:50:31 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:23.329 02:50:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:30:23.329 02:50:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:23.329 02:50:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:23.329 02:50:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:23.329 02:50:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:23.329 02:50:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:23.329 02:50:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:23.329 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:23.329 02:50:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:23.329 02:50:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:23.329 02:50:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:23.329 02:50:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:30:23.329 02:50:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:23.329 02:50:31 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:23.329 02:50:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:23.329 02:50:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:23.329 02:50:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:23.330 02:50:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:23.330 02:50:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:23.330 02:50:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:23.330 02:50:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:23.330 02:50:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:23.330 02:50:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:30:23.330 02:50:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:25.233 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:25.233 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:30:25.233 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:25.233 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:25.233 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:25.233 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:25.233 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:25.233 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:30:25.233 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:25.233 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:30:25.233 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:30:25.233 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:30:25.233 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:30:25.233 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:30:25.233 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:30:25.233 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:25.233 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:25.233 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:25.233 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:25.233 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:25.233 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:25.233 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:25.233 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:25.233 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:25.233 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:25.233 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:25.233 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:25.233 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:30:25.233 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:25.233 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:25.233 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:25.233 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:25.233 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:25.233 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:25.233 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:25.233 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:25.233 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:25.233 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:25.233 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:25.233 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:25.233 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:25.233 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:25.233 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:25.233 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:25.233 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:25.233 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:25.233 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:25.233 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:25.233 02:50:33 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:25.234 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:25.234 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:25.234 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:25.234 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:25.234 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:25.234 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:25.234 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:25.234 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:25.234 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:25.234 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:25.234 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:25.234 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:25.234 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:25.234 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:25.234 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:25.234 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:25.234 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:25.234 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:25.234 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # 
(( 1 == 0 )) 00:30:25.234 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:25.234 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:25.234 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:25.234 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:25.234 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:25.234 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:30:25.234 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:25.234 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:25.234 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:25.234 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:25.234 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:25.234 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:25.234 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:25.234 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:25.234 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:25.234 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:25.234 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:25.234 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:25.234 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:25.234 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:25.234 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:25.234 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:25.234 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:25.234 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:25.234 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:25.234 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:25.234 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:25.234 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:25.234 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:25.234 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:25.234 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:25.234 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:25.234 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:25.234 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.344 ms 00:30:25.234 00:30:25.234 --- 10.0.0.2 ping statistics --- 00:30:25.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:25.234 rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms 00:30:25.234 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:25.234 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:25.234 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:30:25.234 00:30:25.234 --- 10.0.0.1 ping statistics --- 00:30:25.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:25.234 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:30:25.234 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:25.234 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:30:25.234 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:25.234 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:25.234 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:25.234 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:25.234 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:25.234 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:25.234 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:25.234 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:30:25.234 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:25.234 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:25.234 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:30:25.234 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=3067432 00:30:25.234 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:25.234 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 3067432 00:30:25.234 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 3067432 ']' 00:30:25.234 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:25.234 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:25.234 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:25.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:25.234 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:25.234 02:50:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:25.234 [2024-11-17 02:50:33.621593] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:30:25.234 [2024-11-17 02:50:33.621758] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:25.493 [2024-11-17 02:50:33.772184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:25.493 [2024-11-17 02:50:33.913503] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:30:25.493 [2024-11-17 02:50:33.913583] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:25.493 [2024-11-17 02:50:33.913608] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:25.493 [2024-11-17 02:50:33.913646] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:25.493 [2024-11-17 02:50:33.913663] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:25.493 [2024-11-17 02:50:33.916536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:25.493 [2024-11-17 02:50:33.916606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:25.493 [2024-11-17 02:50:33.916704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:25.493 [2024-11-17 02:50:33.916710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:26.429 02:50:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:26.429 02:50:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:30:26.429 02:50:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:26.429 02:50:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:26.429 02:50:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:26.429 02:50:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:26.429 02:50:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:26.429 02:50:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.429 02:50:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:26.429 [2024-11-17 02:50:34.650589] 
tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:26.429 02:50:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.429 02:50:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:30:26.429 02:50:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.429 02:50:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:26.429 Malloc0 00:30:26.429 02:50:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.429 02:50:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:30:26.429 02:50:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.429 02:50:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:26.429 02:50:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.429 02:50:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:26.429 02:50:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.429 02:50:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:26.429 02:50:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.429 02:50:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:26.429 02:50:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.429 02:50:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:26.429 [2024-11-17 02:50:34.779318] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:30:26.429 02:50:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.429 02:50:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:30:26.429 02:50:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.429 02:50:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:26.429 [ 00:30:26.429 { 00:30:26.429 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:26.429 "subtype": "Discovery", 00:30:26.429 "listen_addresses": [], 00:30:26.429 "allow_any_host": true, 00:30:26.429 "hosts": [] 00:30:26.429 }, 00:30:26.429 { 00:30:26.429 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:26.429 "subtype": "NVMe", 00:30:26.429 "listen_addresses": [ 00:30:26.429 { 00:30:26.429 "trtype": "TCP", 00:30:26.429 "adrfam": "IPv4", 00:30:26.429 "traddr": "10.0.0.2", 00:30:26.429 "trsvcid": "4420" 00:30:26.429 } 00:30:26.429 ], 00:30:26.429 "allow_any_host": true, 00:30:26.429 "hosts": [], 00:30:26.429 "serial_number": "SPDK00000000000001", 00:30:26.429 "model_number": "SPDK bdev Controller", 00:30:26.429 "max_namespaces": 2, 00:30:26.429 "min_cntlid": 1, 00:30:26.429 "max_cntlid": 65519, 00:30:26.429 "namespaces": [ 00:30:26.429 { 00:30:26.429 "nsid": 1, 00:30:26.429 "bdev_name": "Malloc0", 00:30:26.429 "name": "Malloc0", 00:30:26.429 "nguid": "AC4F41F1610C4489AD403B2996465DBD", 00:30:26.429 "uuid": "ac4f41f1-610c-4489-ad40-3b2996465dbd" 00:30:26.429 } 00:30:26.429 ] 00:30:26.429 } 00:30:26.429 ] 00:30:26.429 02:50:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.429 02:50:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:30:26.429 02:50:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:30:26.429 02:50:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=3067587 00:30:26.429 02:50:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # 
waitforfile /tmp/aer_touch_file 00:30:26.429 02:50:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:30:26.429 02:50:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:30:26.429 02:50:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:26.429 02:50:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:30:26.429 02:50:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:30:26.429 02:50:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:30:26.688 02:50:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:26.688 02:50:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:30:26.688 02:50:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:30:26.688 02:50:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:30:26.688 02:50:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:26.688 02:50:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 2 -lt 200 ']' 00:30:26.688 02:50:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=3 00:30:26.688 02:50:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:30:26.688 02:50:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:26.688 02:50:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:30:26.688 02:50:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:30:26.688 02:50:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:30:26.688 02:50:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.688 02:50:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:26.947 Malloc1 00:30:26.947 02:50:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.947 02:50:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:30:26.947 02:50:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.947 02:50:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:26.947 02:50:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.947 02:50:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:30:26.947 02:50:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.947 02:50:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:26.947 [ 00:30:26.947 { 00:30:26.947 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:26.947 "subtype": "Discovery", 00:30:26.947 "listen_addresses": [], 00:30:26.947 "allow_any_host": true, 00:30:26.947 "hosts": [] 00:30:26.947 }, 00:30:26.947 { 00:30:26.947 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:26.947 "subtype": "NVMe", 00:30:26.947 "listen_addresses": [ 00:30:26.947 { 00:30:26.947 "trtype": "TCP", 00:30:26.947 "adrfam": "IPv4", 00:30:26.947 "traddr": "10.0.0.2", 00:30:26.947 "trsvcid": "4420" 00:30:26.947 } 00:30:26.947 ], 00:30:26.947 "allow_any_host": true, 00:30:26.947 "hosts": [], 00:30:26.947 "serial_number": "SPDK00000000000001", 00:30:26.947 "model_number": 
"SPDK bdev Controller", 00:30:26.947 "max_namespaces": 2, 00:30:26.947 "min_cntlid": 1, 00:30:26.947 "max_cntlid": 65519, 00:30:26.947 "namespaces": [ 00:30:26.947 { 00:30:26.947 "nsid": 1, 00:30:26.947 "bdev_name": "Malloc0", 00:30:26.947 "name": "Malloc0", 00:30:26.947 "nguid": "AC4F41F1610C4489AD403B2996465DBD", 00:30:26.947 "uuid": "ac4f41f1-610c-4489-ad40-3b2996465dbd" 00:30:26.947 }, 00:30:26.947 { 00:30:26.947 "nsid": 2, 00:30:26.947 "bdev_name": "Malloc1", 00:30:26.947 "name": "Malloc1", 00:30:26.947 "nguid": "5B3358DB2E5D4B7DB8C93228E3EB31D3", 00:30:26.947 "uuid": "5b3358db-2e5d-4b7d-b8c9-3228e3eb31d3" 00:30:26.947 } 00:30:26.947 ] 00:30:26.947 } 00:30:26.947 ] 00:30:26.947 02:50:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.947 02:50:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 3067587 00:30:26.947 Asynchronous Event Request test 00:30:26.947 Attaching to 10.0.0.2 00:30:26.947 Attached to 10.0.0.2 00:30:26.947 Registering asynchronous event callbacks... 00:30:26.947 Starting namespace attribute notice tests for all controllers... 00:30:26.947 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:30:26.947 aer_cb - Changed Namespace 00:30:26.947 Cleaning up... 
00:30:26.947 02:50:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:30:26.947 02:50:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.947 02:50:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:27.206 02:50:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:27.206 02:50:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:30:27.206 02:50:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:27.206 02:50:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:27.464 02:50:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:27.465 02:50:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:27.465 02:50:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:27.465 02:50:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:27.465 02:50:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:27.465 02:50:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:30:27.465 02:50:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:30:27.465 02:50:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:27.465 02:50:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:30:27.465 02:50:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:27.465 02:50:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:30:27.465 02:50:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:27.465 02:50:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:27.465 rmmod nvme_tcp 
00:30:27.465 rmmod nvme_fabrics 00:30:27.465 rmmod nvme_keyring 00:30:27.465 02:50:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:27.465 02:50:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:30:27.465 02:50:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:30:27.465 02:50:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 3067432 ']' 00:30:27.465 02:50:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 3067432 00:30:27.465 02:50:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 3067432 ']' 00:30:27.465 02:50:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 3067432 00:30:27.465 02:50:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:30:27.465 02:50:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:27.465 02:50:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3067432 00:30:27.465 02:50:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:27.465 02:50:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:27.465 02:50:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3067432' 00:30:27.465 killing process with pid 3067432 00:30:27.465 02:50:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 3067432 00:30:27.465 02:50:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 3067432 00:30:28.842 02:50:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:28.842 02:50:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:28.842 02:50:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:28.843 02:50:36 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:30:28.843 02:50:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:30:28.843 02:50:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:28.843 02:50:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:30:28.843 02:50:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:28.843 02:50:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:28.843 02:50:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:28.843 02:50:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:28.843 02:50:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:30.752 02:50:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:30.752 00:30:30.752 real 0m7.689s 00:30:30.752 user 0m11.667s 00:30:30.752 sys 0m2.263s 00:30:30.752 02:50:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:30.752 02:50:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:30.752 ************************************ 00:30:30.752 END TEST nvmf_aer 00:30:30.752 ************************************ 00:30:30.752 02:50:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:30:30.752 02:50:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:30.752 02:50:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:30.752 02:50:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:30.752 ************************************ 00:30:30.752 START TEST nvmf_async_init 
00:30:30.752 ************************************ 00:30:30.752 02:50:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:30:30.752 * Looking for test storage... 00:30:30.752 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:30.752 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:30.752 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:30:30.752 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:30.752 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:30.752 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:30.752 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:30.752 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:30.752 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:30:30.752 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:30:30.752 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:30:30.752 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:30:30.752 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:30:30.752 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:30:30.752 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:30:30.752 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:30.752 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init 
-- scripts/common.sh@344 -- # case "$op" in 00:30:30.752 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:30:30.752 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:30.752 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:30.752 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:30:30.752 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:30:30.752 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:30.752 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:30:30.752 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:30:30.752 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:30:30.752 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:30:30.752 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:30.752 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:30:30.752 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:30:30.752 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:30.752 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:30.752 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:30:30.752 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:30.752 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:30.752 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:30:30.752 --rc genhtml_branch_coverage=1 00:30:30.752 --rc genhtml_function_coverage=1 00:30:30.752 --rc genhtml_legend=1 00:30:30.752 --rc geninfo_all_blocks=1 00:30:30.752 --rc geninfo_unexecuted_blocks=1 00:30:30.752 00:30:30.752 ' 00:30:30.752 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:30.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:30.752 --rc genhtml_branch_coverage=1 00:30:30.752 --rc genhtml_function_coverage=1 00:30:30.752 --rc genhtml_legend=1 00:30:30.752 --rc geninfo_all_blocks=1 00:30:30.752 --rc geninfo_unexecuted_blocks=1 00:30:30.752 00:30:30.752 ' 00:30:30.752 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:30.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:30.752 --rc genhtml_branch_coverage=1 00:30:30.752 --rc genhtml_function_coverage=1 00:30:30.752 --rc genhtml_legend=1 00:30:30.752 --rc geninfo_all_blocks=1 00:30:30.752 --rc geninfo_unexecuted_blocks=1 00:30:30.752 00:30:30.752 ' 00:30:30.752 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:30.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:30.752 --rc genhtml_branch_coverage=1 00:30:30.752 --rc genhtml_function_coverage=1 00:30:30.752 --rc genhtml_legend=1 00:30:30.752 --rc geninfo_all_blocks=1 00:30:30.752 --rc geninfo_unexecuted_blocks=1 00:30:30.752 00:30:30.752 ' 00:30:30.752 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:30.752 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:30:30.752 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:30.752 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:30.752 02:50:39 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:30.752 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:30.753 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:30.753 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:30.753 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:30.753 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:30.753 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:30.753 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:30.753 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:30.753 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:30.753 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:30.753 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:30.753 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:30.753 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:30.753 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:30.753 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:30:30.753 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:30.753 
02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:30.753 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:30.753 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:30.753 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:30.753 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:30.753 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:30:30.753 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:30.753 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:30:30.753 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:30.753 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:30.753 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:30.753 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:30.753 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:30:30.753 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:30.753 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:30.753 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:30.753 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:30.753 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:30.753 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:30:30.753 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:30:30.753 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:30:30.753 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:30:30.753 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:30:30.753 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:30:30.753 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=147039dcefc9446da95d23591a2145ac 00:30:30.753 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:30:30.753 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:30.753 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:30.753 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:30.753 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:30.753 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:30.753 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:30:30.753 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:30.753 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:30.753 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:30.753 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:30.753 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:30:30.753 02:50:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:33.288 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:33.288 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:30:33.288 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:33.288 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:33.288 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:33.288 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:33.288 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:33.288 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:30:33.288 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:33.288 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:30:33.288 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:30:33.288 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:30:33.288 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:30:33.288 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:30:33.288 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:30:33.288 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:33.288 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:33.288 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:33.288 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:33.288 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:33.288 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:33.288 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:33.288 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:33.288 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:33.288 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:33.288 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:33.288 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:33.288 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:33.288 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:33.288 02:50:41 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:33.288 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:33.288 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:33.288 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:33.288 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:33.288 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:33.288 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:33.288 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:33.288 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:33.288 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:33.288 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:33.288 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:33.288 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:33.288 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:33.288 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:33.288 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:33.288 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:33.288 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:33.288 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:33.288 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:33.288 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:33.288 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:33.288 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:33.288 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:33.288 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:33.288 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:33.288 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:33.288 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:33.288 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:33.288 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:33.288 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:33.288 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:33.288 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:33.288 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:33.288 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:33.288 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:33.288 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:33.288 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ 
up == up ]] 00:30:33.288 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:33.288 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:33.288 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:33.288 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:33.288 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:33.288 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:33.288 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:30:33.288 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:33.288 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:33.288 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:33.288 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:33.289 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:33.289 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:33.289 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:33.289 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:33.289 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:33.289 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:33.289 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:33.289 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:33.289 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:33.289 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:33.289 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:33.289 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:33.289 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:33.289 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:33.289 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:33.289 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:33.289 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:33.289 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:33.289 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:33.289 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:33.289 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:33.289 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:33.289 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:33.289 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.310 ms 00:30:33.289 00:30:33.289 --- 10.0.0.2 ping statistics --- 00:30:33.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:33.289 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:30:33.289 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:33.289 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:33.289 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:30:33.289 00:30:33.289 --- 10.0.0.1 ping statistics --- 00:30:33.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:33.289 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:30:33.289 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:33.289 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:30:33.289 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:33.289 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:33.289 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:33.289 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:33.289 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:33.289 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:33.289 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:33.289 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:30:33.289 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:33.289 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:30:33.289 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:33.289 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=3069784 00:30:33.289 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:30:33.289 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 3069784 00:30:33.289 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 3069784 ']' 00:30:33.289 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:33.289 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:33.289 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:33.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:33.289 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:33.289 02:50:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:33.289 [2024-11-17 02:50:41.400303] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:30:33.289 [2024-11-17 02:50:41.400453] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:33.289 [2024-11-17 02:50:41.542804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:33.289 [2024-11-17 02:50:41.674619] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:33.289 [2024-11-17 02:50:41.674729] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:33.289 [2024-11-17 02:50:41.674755] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:33.289 [2024-11-17 02:50:41.674793] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:33.289 [2024-11-17 02:50:41.674810] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:33.289 [2024-11-17 02:50:41.676379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:34.226 02:50:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:34.226 02:50:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:30:34.227 02:50:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:34.227 02:50:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:34.227 02:50:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:34.227 02:50:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:34.227 02:50:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:34.227 02:50:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:34.227 02:50:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:34.227 [2024-11-17 02:50:42.407902] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:34.227 02:50:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:34.227 02:50:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:30:34.227 02:50:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:34.227 02:50:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:34.227 null0 00:30:34.227 02:50:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:34.227 02:50:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:30:34.227 02:50:42 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:30:34.227 02:50:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:34.227 02:50:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:34.227 02:50:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:30:34.227 02:50:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:34.227 02:50:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:34.227 02:50:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:34.227 02:50:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 147039dcefc9446da95d23591a2145ac 00:30:34.227 02:50:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:34.227 02:50:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:34.227 02:50:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:34.227 02:50:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:34.227 02:50:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:34.227 02:50:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:34.227 [2024-11-17 02:50:42.448235] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:34.227 02:50:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:34.227 02:50:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:30:34.227 02:50:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:34.227 02:50:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:34.227 nvme0n1 00:30:34.227 02:50:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:34.227 02:50:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:30:34.227 02:50:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:34.227 02:50:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:34.487 [ 00:30:34.487 { 00:30:34.487 "name": "nvme0n1", 00:30:34.487 "aliases": [ 00:30:34.487 "147039dc-efc9-446d-a95d-23591a2145ac" 00:30:34.487 ], 00:30:34.487 "product_name": "NVMe disk", 00:30:34.487 "block_size": 512, 00:30:34.487 "num_blocks": 2097152, 00:30:34.487 "uuid": "147039dc-efc9-446d-a95d-23591a2145ac", 00:30:34.487 "numa_id": 0, 00:30:34.487 "assigned_rate_limits": { 00:30:34.487 "rw_ios_per_sec": 0, 00:30:34.487 "rw_mbytes_per_sec": 0, 00:30:34.487 "r_mbytes_per_sec": 0, 00:30:34.487 "w_mbytes_per_sec": 0 00:30:34.487 }, 00:30:34.487 "claimed": false, 00:30:34.487 "zoned": false, 00:30:34.487 "supported_io_types": { 00:30:34.487 "read": true, 00:30:34.487 "write": true, 00:30:34.487 "unmap": false, 00:30:34.487 "flush": true, 00:30:34.487 "reset": true, 00:30:34.487 "nvme_admin": true, 00:30:34.487 "nvme_io": true, 00:30:34.487 "nvme_io_md": false, 00:30:34.487 "write_zeroes": true, 00:30:34.487 "zcopy": false, 00:30:34.487 "get_zone_info": false, 00:30:34.487 "zone_management": false, 00:30:34.487 "zone_append": false, 00:30:34.487 "compare": true, 00:30:34.487 "compare_and_write": true, 00:30:34.487 "abort": true, 00:30:34.487 "seek_hole": false, 00:30:34.487 "seek_data": false, 00:30:34.487 "copy": true, 00:30:34.487 
"nvme_iov_md": false 00:30:34.487 }, 00:30:34.487 "memory_domains": [ 00:30:34.487 { 00:30:34.487 "dma_device_id": "system", 00:30:34.487 "dma_device_type": 1 00:30:34.487 } 00:30:34.487 ], 00:30:34.487 "driver_specific": { 00:30:34.487 "nvme": [ 00:30:34.487 { 00:30:34.487 "trid": { 00:30:34.487 "trtype": "TCP", 00:30:34.487 "adrfam": "IPv4", 00:30:34.487 "traddr": "10.0.0.2", 00:30:34.487 "trsvcid": "4420", 00:30:34.487 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:34.487 }, 00:30:34.487 "ctrlr_data": { 00:30:34.487 "cntlid": 1, 00:30:34.487 "vendor_id": "0x8086", 00:30:34.487 "model_number": "SPDK bdev Controller", 00:30:34.487 "serial_number": "00000000000000000000", 00:30:34.487 "firmware_revision": "25.01", 00:30:34.487 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:34.487 "oacs": { 00:30:34.487 "security": 0, 00:30:34.487 "format": 0, 00:30:34.487 "firmware": 0, 00:30:34.487 "ns_manage": 0 00:30:34.487 }, 00:30:34.487 "multi_ctrlr": true, 00:30:34.487 "ana_reporting": false 00:30:34.487 }, 00:30:34.487 "vs": { 00:30:34.487 "nvme_version": "1.3" 00:30:34.487 }, 00:30:34.487 "ns_data": { 00:30:34.487 "id": 1, 00:30:34.487 "can_share": true 00:30:34.487 } 00:30:34.487 } 00:30:34.487 ], 00:30:34.487 "mp_policy": "active_passive" 00:30:34.487 } 00:30:34.487 } 00:30:34.487 ] 00:30:34.487 02:50:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:34.487 02:50:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:30:34.487 02:50:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:34.487 02:50:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:34.487 [2024-11-17 02:50:42.705135] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:30:34.487 [2024-11-17 02:50:42.705285] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:30:34.487 [2024-11-17 02:50:42.837336] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:30:34.487 02:50:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:34.487 02:50:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:30:34.487 02:50:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:34.487 02:50:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:34.487 [ 00:30:34.487 { 00:30:34.487 "name": "nvme0n1", 00:30:34.487 "aliases": [ 00:30:34.487 "147039dc-efc9-446d-a95d-23591a2145ac" 00:30:34.487 ], 00:30:34.487 "product_name": "NVMe disk", 00:30:34.487 "block_size": 512, 00:30:34.487 "num_blocks": 2097152, 00:30:34.487 "uuid": "147039dc-efc9-446d-a95d-23591a2145ac", 00:30:34.487 "numa_id": 0, 00:30:34.487 "assigned_rate_limits": { 00:30:34.487 "rw_ios_per_sec": 0, 00:30:34.487 "rw_mbytes_per_sec": 0, 00:30:34.487 "r_mbytes_per_sec": 0, 00:30:34.487 "w_mbytes_per_sec": 0 00:30:34.487 }, 00:30:34.487 "claimed": false, 00:30:34.487 "zoned": false, 00:30:34.487 "supported_io_types": { 00:30:34.487 "read": true, 00:30:34.487 "write": true, 00:30:34.487 "unmap": false, 00:30:34.487 "flush": true, 00:30:34.487 "reset": true, 00:30:34.487 "nvme_admin": true, 00:30:34.487 "nvme_io": true, 00:30:34.487 "nvme_io_md": false, 00:30:34.487 "write_zeroes": true, 00:30:34.487 "zcopy": false, 00:30:34.487 "get_zone_info": false, 00:30:34.487 "zone_management": false, 00:30:34.487 "zone_append": false, 00:30:34.487 "compare": true, 00:30:34.487 "compare_and_write": true, 00:30:34.487 "abort": true, 00:30:34.487 "seek_hole": false, 00:30:34.487 "seek_data": false, 00:30:34.487 "copy": true, 00:30:34.487 "nvme_iov_md": false 00:30:34.487 }, 00:30:34.487 "memory_domains": [ 
00:30:34.487 { 00:30:34.487 "dma_device_id": "system", 00:30:34.487 "dma_device_type": 1 00:30:34.487 } 00:30:34.487 ], 00:30:34.487 "driver_specific": { 00:30:34.487 "nvme": [ 00:30:34.487 { 00:30:34.487 "trid": { 00:30:34.487 "trtype": "TCP", 00:30:34.488 "adrfam": "IPv4", 00:30:34.488 "traddr": "10.0.0.2", 00:30:34.488 "trsvcid": "4420", 00:30:34.488 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:34.488 }, 00:30:34.488 "ctrlr_data": { 00:30:34.488 "cntlid": 2, 00:30:34.488 "vendor_id": "0x8086", 00:30:34.488 "model_number": "SPDK bdev Controller", 00:30:34.488 "serial_number": "00000000000000000000", 00:30:34.488 "firmware_revision": "25.01", 00:30:34.488 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:34.488 "oacs": { 00:30:34.488 "security": 0, 00:30:34.488 "format": 0, 00:30:34.488 "firmware": 0, 00:30:34.488 "ns_manage": 0 00:30:34.488 }, 00:30:34.488 "multi_ctrlr": true, 00:30:34.488 "ana_reporting": false 00:30:34.488 }, 00:30:34.488 "vs": { 00:30:34.488 "nvme_version": "1.3" 00:30:34.488 }, 00:30:34.488 "ns_data": { 00:30:34.488 "id": 1, 00:30:34.488 "can_share": true 00:30:34.488 } 00:30:34.488 } 00:30:34.488 ], 00:30:34.488 "mp_policy": "active_passive" 00:30:34.488 } 00:30:34.488 } 00:30:34.488 ] 00:30:34.488 02:50:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:34.488 02:50:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:34.488 02:50:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:34.488 02:50:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:34.488 02:50:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:34.488 02:50:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:30:34.488 02:50:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.r1EzOUvI3S 
00:30:34.488 02:50:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:30:34.488 02:50:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.r1EzOUvI3S 00:30:34.488 02:50:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.r1EzOUvI3S 00:30:34.488 02:50:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:34.488 02:50:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:34.488 02:50:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:34.488 02:50:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:30:34.488 02:50:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:34.488 02:50:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:34.488 02:50:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:34.488 02:50:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:30:34.488 02:50:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:34.488 02:50:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:34.488 [2024-11-17 02:50:42.902027] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:30:34.488 [2024-11-17 02:50:42.902337] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:34.488 02:50:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:30:34.488 02:50:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:30:34.488 02:50:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:34.488 02:50:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:34.488 02:50:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:34.488 02:50:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:30:34.488 02:50:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:34.488 02:50:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:34.488 [2024-11-17 02:50:42.918064] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:34.747 nvme0n1 00:30:34.747 02:50:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:34.747 02:50:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:30:34.747 02:50:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:34.747 02:50:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:34.747 [ 00:30:34.747 { 00:30:34.747 "name": "nvme0n1", 00:30:34.747 "aliases": [ 00:30:34.747 "147039dc-efc9-446d-a95d-23591a2145ac" 00:30:34.747 ], 00:30:34.747 "product_name": "NVMe disk", 00:30:34.747 "block_size": 512, 00:30:34.747 "num_blocks": 2097152, 00:30:34.747 "uuid": "147039dc-efc9-446d-a95d-23591a2145ac", 00:30:34.747 "numa_id": 0, 00:30:34.747 "assigned_rate_limits": { 00:30:34.747 "rw_ios_per_sec": 0, 00:30:34.747 
"rw_mbytes_per_sec": 0, 00:30:34.747 "r_mbytes_per_sec": 0, 00:30:34.747 "w_mbytes_per_sec": 0 00:30:34.747 }, 00:30:34.747 "claimed": false, 00:30:34.747 "zoned": false, 00:30:34.747 "supported_io_types": { 00:30:34.747 "read": true, 00:30:34.747 "write": true, 00:30:34.747 "unmap": false, 00:30:34.747 "flush": true, 00:30:34.747 "reset": true, 00:30:34.747 "nvme_admin": true, 00:30:34.747 "nvme_io": true, 00:30:34.747 "nvme_io_md": false, 00:30:34.747 "write_zeroes": true, 00:30:34.747 "zcopy": false, 00:30:34.747 "get_zone_info": false, 00:30:34.747 "zone_management": false, 00:30:34.747 "zone_append": false, 00:30:34.747 "compare": true, 00:30:34.747 "compare_and_write": true, 00:30:34.747 "abort": true, 00:30:34.747 "seek_hole": false, 00:30:34.747 "seek_data": false, 00:30:34.747 "copy": true, 00:30:34.747 "nvme_iov_md": false 00:30:34.747 }, 00:30:34.747 "memory_domains": [ 00:30:34.747 { 00:30:34.747 "dma_device_id": "system", 00:30:34.747 "dma_device_type": 1 00:30:34.747 } 00:30:34.748 ], 00:30:34.748 "driver_specific": { 00:30:34.748 "nvme": [ 00:30:34.748 { 00:30:34.748 "trid": { 00:30:34.748 "trtype": "TCP", 00:30:34.748 "adrfam": "IPv4", 00:30:34.748 "traddr": "10.0.0.2", 00:30:34.748 "trsvcid": "4421", 00:30:34.748 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:34.748 }, 00:30:34.748 "ctrlr_data": { 00:30:34.748 "cntlid": 3, 00:30:34.748 "vendor_id": "0x8086", 00:30:34.748 "model_number": "SPDK bdev Controller", 00:30:34.748 "serial_number": "00000000000000000000", 00:30:34.748 "firmware_revision": "25.01", 00:30:34.748 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:34.748 "oacs": { 00:30:34.748 "security": 0, 00:30:34.748 "format": 0, 00:30:34.748 "firmware": 0, 00:30:34.748 "ns_manage": 0 00:30:34.748 }, 00:30:34.748 "multi_ctrlr": true, 00:30:34.748 "ana_reporting": false 00:30:34.748 }, 00:30:34.748 "vs": { 00:30:34.748 "nvme_version": "1.3" 00:30:34.748 }, 00:30:34.748 "ns_data": { 00:30:34.748 "id": 1, 00:30:34.748 "can_share": true 00:30:34.748 } 
00:30:34.748 } 00:30:34.748 ], 00:30:34.748 "mp_policy": "active_passive" 00:30:34.748 } 00:30:34.748 } 00:30:34.748 ] 00:30:34.748 02:50:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:34.748 02:50:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:34.748 02:50:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:34.748 02:50:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:34.748 02:50:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:34.748 02:50:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.r1EzOUvI3S 00:30:34.748 02:50:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:30:34.748 02:50:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:30:34.748 02:50:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:34.748 02:50:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:30:34.748 02:50:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:34.748 02:50:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:30:34.748 02:50:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:34.748 02:50:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:34.748 rmmod nvme_tcp 00:30:34.748 rmmod nvme_fabrics 00:30:34.748 rmmod nvme_keyring 00:30:34.748 02:50:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:34.748 02:50:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:30:34.748 02:50:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:30:34.748 02:50:43 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 3069784 ']' 00:30:34.748 02:50:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 3069784 00:30:34.748 02:50:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 3069784 ']' 00:30:34.748 02:50:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 3069784 00:30:34.748 02:50:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:30:34.748 02:50:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:34.748 02:50:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3069784 00:30:34.748 02:50:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:34.748 02:50:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:34.748 02:50:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3069784' 00:30:34.748 killing process with pid 3069784 00:30:34.748 02:50:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 3069784 00:30:34.748 02:50:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 3069784 00:30:36.124 02:50:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:36.124 02:50:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:36.124 02:50:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:36.124 02:50:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:30:36.124 02:50:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:30:36.124 02:50:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:36.124 
02:50:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:30:36.124 02:50:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:36.124 02:50:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:36.124 02:50:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:36.124 02:50:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:36.124 02:50:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:38.088 02:50:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:38.088 00:30:38.088 real 0m7.394s 00:30:38.088 user 0m3.991s 00:30:38.088 sys 0m2.123s 00:30:38.088 02:50:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:38.088 02:50:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:38.088 ************************************ 00:30:38.088 END TEST nvmf_async_init 00:30:38.088 ************************************ 00:30:38.088 02:50:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:30:38.088 02:50:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:38.088 02:50:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:38.088 02:50:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:38.088 ************************************ 00:30:38.088 START TEST dma 00:30:38.088 ************************************ 00:30:38.088 02:50:46 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 
00:30:38.088 * Looking for test storage... 00:30:38.088 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:38.088 02:50:46 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:38.088 02:50:46 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:30:38.088 02:50:46 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:38.348 02:50:46 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:38.348 02:50:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:38.348 02:50:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:38.348 02:50:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:38.348 02:50:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:30:38.348 02:50:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:30:38.348 02:50:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:30:38.348 02:50:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:30:38.348 02:50:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:30:38.348 02:50:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:30:38.348 02:50:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:30:38.348 02:50:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:38.348 02:50:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:30:38.348 02:50:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:30:38.348 02:50:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:38.348 02:50:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:38.348 02:50:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:30:38.348 02:50:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:30:38.348 02:50:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:38.348 02:50:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:30:38.348 02:50:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:30:38.348 02:50:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:30:38.348 02:50:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:30:38.348 02:50:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:38.348 02:50:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:30:38.348 02:50:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:30:38.348 02:50:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:38.348 02:50:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:38.348 02:50:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:30:38.348 02:50:46 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:38.348 02:50:46 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:38.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:38.348 --rc genhtml_branch_coverage=1 00:30:38.348 --rc genhtml_function_coverage=1 00:30:38.348 --rc genhtml_legend=1 00:30:38.348 --rc geninfo_all_blocks=1 00:30:38.348 --rc geninfo_unexecuted_blocks=1 00:30:38.348 00:30:38.348 ' 00:30:38.348 02:50:46 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:38.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:38.348 --rc genhtml_branch_coverage=1 00:30:38.348 --rc genhtml_function_coverage=1 
00:30:38.348 --rc genhtml_legend=1 00:30:38.348 --rc geninfo_all_blocks=1 00:30:38.348 --rc geninfo_unexecuted_blocks=1 00:30:38.348 00:30:38.348 ' 00:30:38.348 02:50:46 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:38.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:38.348 --rc genhtml_branch_coverage=1 00:30:38.348 --rc genhtml_function_coverage=1 00:30:38.348 --rc genhtml_legend=1 00:30:38.348 --rc geninfo_all_blocks=1 00:30:38.348 --rc geninfo_unexecuted_blocks=1 00:30:38.348 00:30:38.348 ' 00:30:38.348 02:50:46 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:38.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:38.348 --rc genhtml_branch_coverage=1 00:30:38.348 --rc genhtml_function_coverage=1 00:30:38.348 --rc genhtml_legend=1 00:30:38.348 --rc geninfo_all_blocks=1 00:30:38.348 --rc geninfo_unexecuted_blocks=1 00:30:38.348 00:30:38.348 ' 00:30:38.349 02:50:46 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:38.349 02:50:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:30:38.349 02:50:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:38.349 02:50:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:38.349 02:50:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:38.349 02:50:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:38.349 02:50:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:38.349 02:50:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:38.349 02:50:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:38.349 02:50:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:38.349 02:50:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:38.349 02:50:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:38.349 02:50:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:38.349 02:50:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:38.349 02:50:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:38.349 02:50:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:38.349 02:50:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:38.349 02:50:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:38.349 02:50:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:38.349 02:50:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:30:38.349 02:50:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:38.349 02:50:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:38.349 02:50:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:38.349 02:50:46 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.349 02:50:46 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.349 02:50:46 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.349 02:50:46 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:30:38.349 
02:50:46 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.349 02:50:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:30:38.349 02:50:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:38.349 02:50:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:38.349 02:50:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:38.349 02:50:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:38.349 02:50:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:38.349 02:50:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:38.349 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:38.349 02:50:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:38.349 02:50:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:38.349 02:50:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:38.349 02:50:46 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:30:38.349 02:50:46 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:30:38.349 00:30:38.349 real 0m0.165s 00:30:38.349 user 0m0.114s 00:30:38.349 sys 0m0.061s 00:30:38.349 02:50:46 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:38.349 02:50:46 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:30:38.349 ************************************ 00:30:38.349 END TEST dma 00:30:38.349 ************************************ 00:30:38.349 02:50:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:30:38.349 02:50:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:38.349 02:50:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:38.349 02:50:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:38.349 ************************************ 00:30:38.349 START TEST nvmf_identify 00:30:38.349 ************************************ 00:30:38.349 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:30:38.349 * Looking for test storage... 
00:30:38.349 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:38.349 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:38.349 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:30:38.349 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:38.349 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:38.349 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:38.349 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:38.349 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:38.349 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:30:38.349 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:30:38.349 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:30:38.349 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:30:38.349 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:30:38.349 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:30:38.349 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:30:38.349 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:38.349 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:30:38.349 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:30:38.349 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:38.349 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:38.349 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:30:38.349 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:30:38.349 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:38.349 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:30:38.349 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:30:38.349 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:30:38.349 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:30:38.349 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:38.349 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:30:38.349 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:30:38.349 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:38.349 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:38.349 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:30:38.349 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:38.349 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:38.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:38.349 --rc genhtml_branch_coverage=1 00:30:38.349 --rc genhtml_function_coverage=1 00:30:38.350 --rc genhtml_legend=1 00:30:38.350 --rc geninfo_all_blocks=1 00:30:38.350 --rc geninfo_unexecuted_blocks=1 00:30:38.350 00:30:38.350 ' 00:30:38.350 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- 
# LCOV_OPTS=' 00:30:38.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:38.350 --rc genhtml_branch_coverage=1 00:30:38.350 --rc genhtml_function_coverage=1 00:30:38.350 --rc genhtml_legend=1 00:30:38.350 --rc geninfo_all_blocks=1 00:30:38.350 --rc geninfo_unexecuted_blocks=1 00:30:38.350 00:30:38.350 ' 00:30:38.350 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:38.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:38.350 --rc genhtml_branch_coverage=1 00:30:38.350 --rc genhtml_function_coverage=1 00:30:38.350 --rc genhtml_legend=1 00:30:38.350 --rc geninfo_all_blocks=1 00:30:38.350 --rc geninfo_unexecuted_blocks=1 00:30:38.350 00:30:38.350 ' 00:30:38.350 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:38.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:38.350 --rc genhtml_branch_coverage=1 00:30:38.350 --rc genhtml_function_coverage=1 00:30:38.350 --rc genhtml_legend=1 00:30:38.350 --rc geninfo_all_blocks=1 00:30:38.350 --rc geninfo_unexecuted_blocks=1 00:30:38.350 00:30:38.350 ' 00:30:38.350 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:38.350 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:30:38.350 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:38.350 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:38.350 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:38.350 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:38.350 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:38.350 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:30:38.350 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:38.350 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:38.350 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:38.350 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:38.350 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:38.350 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:38.350 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:38.350 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:38.350 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:38.350 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:38.350 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:38.350 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:30:38.350 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:38.350 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:38.350 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:38.350 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.350 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.350 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.350 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH
00:30:38.350 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:38.350 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0
00:30:38.350 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:30:38.350 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:30:38.350 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:30:38.350 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:30:38.350 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:30:38.350 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:30:38.350 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:30:38.350 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:30:38.350 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:30:38.350 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0
00:30:38.350 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64
00:30:38.350 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:30:38.350 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit
00:30:38.350 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:30:38.350 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:30:38.350 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs
00:30:38.350 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no
00:30:38.350 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns
00:30:38.350 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:30:38.350 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:30:38.350 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:30:38.350 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:30:38.350 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:30:38.350 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable
00:30:38.350 02:50:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:30:40.885 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:30:40.885 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=()
00:30:40.885 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs
00:30:40.885 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=()
00:30:40.885 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:30:40.885 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=()
00:30:40.885 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers
00:30:40.885 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=()
00:30:40.885 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs
00:30:40.885 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=()
00:30:40.885 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810
00:30:40.885 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=()
00:30:40.885 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722
00:30:40.885 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=()
00:30:40.885 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx
00:30:40.885 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:30:40.885 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:30:40.885 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:30:40.885 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:30:40.886 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:30:40.886 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:30:40.886 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:30:40.886 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:30:40.886 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:30:40.886 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:30:40.886 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:30:40.886 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:30:40.886 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:30:40.886 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:30:40.886 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:30:40.886 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:30:40.886 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:30:40.886 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:30:40.886 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:30:40.886 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:30:40.886 Found 0000:0a:00.0 (0x8086 - 0x159b)
00:30:40.886 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:30:40.886 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:30:40.886 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:30:40.886 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:30:40.886 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:30:40.886 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:30:40.886 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:30:40.886 Found 0000:0a:00.1 (0x8086 - 0x159b)
00:30:40.886 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:30:40.886 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:30:40.886 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:30:40.886 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:30:40.886 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:30:40.886 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:30:40.886 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:30:40.886 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:30:40.886 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:30:40.886 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:30:40.886 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:30:40.886 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:30:40.886 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]]
00:30:40.886 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:30:40.886 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:30:40.886 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:30:40.886 Found net devices under 0000:0a:00.0: cvl_0_0
00:30:40.886 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:30:40.886 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:30:40.886 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:30:40.886 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:30:40.886 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:30:40.886 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]]
00:30:40.886 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:30:40.886 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:30:40.886 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:30:40.886 Found net devices under 0000:0a:00.1: cvl_0_1
00:30:40.886 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:30:40.886 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:30:40.886 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes
00:30:40.886 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:30:40.886 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:30:40.886 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:30:40.886 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:30:40.886 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:30:40.886 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:30:40.886 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:30:40.886 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:30:40.886 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:30:40.886 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:30:40.886 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:30:40.886 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:30:40.886 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:30:40.886 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:30:40.886 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:30:40.886 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:30:40.886 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:30:40.886 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:30:40.886 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:30:40.886 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:30:40.886 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:30:40.886 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:30:40.886 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:30:40.886 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:30:40.886 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:30:40.887 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:30:40.887 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:30:40.887 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.273 ms
00:30:40.887
00:30:40.887 --- 10.0.0.2 ping statistics ---
00:30:40.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:30:40.887 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms
00:30:40.887 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:30:40.887 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:30:40.887 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms
00:30:40.887
00:30:40.887 --- 10.0.0.1 ping statistics ---
00:30:40.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:30:40.887 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms
00:30:40.887 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:30:40.887 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0
00:30:40.887 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:30:40.887 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:30:40.887 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:30:40.887 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:30:40.887 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:30:40.887 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:30:40.887 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:30:40.887 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt
00:30:40.887 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable
00:30:40.887 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:30:40.887 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3072069
00:30:40.887 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:30:40.887 02:50:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:30:40.887 02:50:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3072069
00:30:40.887 02:50:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 3072069 ']'
00:30:40.887 02:50:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:40.887 02:50:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100
00:30:40.887 02:50:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:30:40.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:30:40.887 02:50:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable
00:30:40.887 02:50:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:30:40.887 [2024-11-17 02:50:49.103532] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization...
00:30:40.887 [2024-11-17 02:50:49.103691] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:30:40.887 [2024-11-17 02:50:49.260773] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:30:41.145 [2024-11-17 02:50:49.403972] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:30:41.145 [2024-11-17 02:50:49.404044] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:30:41.145 [2024-11-17 02:50:49.404069] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:30:41.145 [2024-11-17 02:50:49.404093] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:30:41.145 [2024-11-17 02:50:49.404137] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:30:41.145 [2024-11-17 02:50:49.406803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:30:41.145 [2024-11-17 02:50:49.406871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:30:41.145 [2024-11-17 02:50:49.406959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:30:41.145 [2024-11-17 02:50:49.406966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:30:41.711 02:50:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:30:41.711 02:50:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0
00:30:41.711 02:50:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:30:41.711 02:50:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:41.711 02:50:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:30:41.711 [2024-11-17 02:50:50.089737] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:30:41.711 02:50:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:41.711 02:50:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt
00:30:41.711 02:50:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable
00:30:41.711 02:50:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:30:41.711 02:50:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:30:41.711 02:50:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:41.711 02:50:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:30:41.971 Malloc0
00:30:41.971 02:50:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:41.971 02:50:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:30:41.971 02:50:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:41.971 02:50:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:30:41.971 02:50:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:41.971 02:50:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
00:30:41.971 02:50:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:41.971 02:50:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:30:41.971 02:50:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:41.971 02:50:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:30:41.971 02:50:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:41.971 02:50:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:30:41.971 [2024-11-17 02:50:50.237497] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:30:41.971 02:50:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:41.971 02:50:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:30:41.971 02:50:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:41.971 02:50:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:30:41.971 02:50:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:41.971 02:50:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems
00:30:41.971 02:50:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:41.971 02:50:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:30:41.971 [
00:30:41.971 {
00:30:41.971 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:30:41.971 "subtype": "Discovery",
00:30:41.971 "listen_addresses": [
00:30:41.971 {
00:30:41.971 "trtype": "TCP",
00:30:41.971 "adrfam": "IPv4",
00:30:41.971 "traddr": "10.0.0.2",
00:30:41.971 "trsvcid": "4420"
00:30:41.971 }
00:30:41.971 ],
00:30:41.971 "allow_any_host": true,
00:30:41.971 "hosts": []
00:30:41.971 },
00:30:41.971 {
00:30:41.971 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:30:41.971 "subtype": "NVMe",
00:30:41.971 "listen_addresses": [
00:30:41.971 {
00:30:41.971 "trtype": "TCP",
00:30:41.971 "adrfam": "IPv4",
00:30:41.971 "traddr": "10.0.0.2",
00:30:41.971 "trsvcid": "4420"
00:30:41.971 }
00:30:41.971 ],
00:30:41.971 "allow_any_host": true,
00:30:41.971 "hosts": [],
00:30:41.971 "serial_number": "SPDK00000000000001",
00:30:41.971 "model_number": "SPDK bdev Controller",
00:30:41.971 "max_namespaces": 32,
00:30:41.971 "min_cntlid": 1,
00:30:41.971 "max_cntlid": 65519,
00:30:41.971 "namespaces": [
00:30:41.971 {
00:30:41.971 "nsid": 1,
00:30:41.971 "bdev_name": "Malloc0",
00:30:41.971 "name": "Malloc0",
00:30:41.971 "nguid": "ABCDEF0123456789ABCDEF0123456789",
00:30:41.971 "eui64": "ABCDEF0123456789",
00:30:41.971 "uuid": "cd60de69-c06f-4bd9-b0e9-268ede35ed2d"
00:30:41.971 }
00:30:41.971 ]
00:30:41.971 }
00:30:41.971 ]
00:30:41.971 02:50:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:41.971 02:50:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all
00:30:41.971 [2024-11-17 02:50:50.305793] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization...
00:30:41.971 [2024-11-17 02:50:50.305884] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3072312 ]
00:30:41.971 [2024-11-17 02:50:50.383431] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout)
00:30:41.971 [2024-11-17 02:50:50.383583] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2
00:30:41.971 [2024-11-17 02:50:50.383605] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420
00:30:41.971 [2024-11-17 02:50:50.383644] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null)
00:30:41.971 [2024-11-17 02:50:50.383671] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix
00:30:41.971 [2024-11-17 02:50:50.387694] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout)
00:30:41.971 [2024-11-17 02:50:50.387790] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x615000015700 0
00:30:41.972 [2024-11-17 02:50:50.394117] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1
00:30:41.972 [2024-11-17 02:50:50.394158] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1
00:30:41.972 [2024-11-17 02:50:50.394176] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0
00:30:41.972 [2024-11-17 02:50:50.394188] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0
00:30:41.972 [2024-11-17 02:50:50.394270] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:41.972 [2024-11-17 02:50:50.394293] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:41.972 [2024-11-17 02:50:50.394318] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700)
00:30:41.972 [2024-11-17 02:50:50.394361] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:30:41.972 [2024-11-17 02:50:50.394407] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0
00:30:41.972 [2024-11-17 02:50:50.401146] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:41.972 [2024-11-17 02:50:50.401181] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:41.972 [2024-11-17 02:50:50.401194] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:41.972 [2024-11-17 02:50:50.401209] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700
00:30:41.972 [2024-11-17 02:50:50.401246] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001
00:30:41.972 [2024-11-17 02:50:50.401270] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout)
00:30:41.972 [2024-11-17 02:50:50.401290] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout)
00:30:41.972 [2024-11-17 02:50:50.401320] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:41.972 [2024-11-17 02:50:50.401336] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:41.972 [2024-11-17 02:50:50.401352] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700)
00:30:41.972 [2024-11-17 02:50:50.401373] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:41.972 [2024-11-17 02:50:50.401427] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0
00:30:41.972 [2024-11-17 02:50:50.401589] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:41.972 [2024-11-17 02:50:50.401616] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:41.972 [2024-11-17 02:50:50.401630] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:41.972 [2024-11-17 02:50:50.401647] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700
00:30:41.972 [2024-11-17 02:50:50.401672] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout)
00:30:41.972 [2024-11-17 02:50:50.401698] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout)
00:30:41.972 [2024-11-17 02:50:50.401725] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:41.972 [2024-11-17 02:50:50.401750] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:41.972 [2024-11-17 02:50:50.401778] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700)
00:30:41.972 [2024-11-17 02:50:50.401803] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:41.972 [2024-11-17 02:50:50.401854] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0
00:30:41.972 [2024-11-17 02:50:50.402000] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:41.972 [2024-11-17 02:50:50.402022] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:41.972 [2024-11-17 02:50:50.402034] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:41.972 [2024-11-17 02:50:50.402045] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700
00:30:41.972 [2024-11-17 02:50:50.402061] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout)
00:30:41.972 [2024-11-17 02:50:50.402088] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms)
00:30:41.972 [2024-11-17 02:50:50.402118] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:41.972 [2024-11-17 02:50:50.402138] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:41.972 [2024-11-17 02:50:50.402151] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700)
00:30:41.972 [2024-11-17 02:50:50.402176] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:41.972 [2024-11-17 02:50:50.402209] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0
00:30:41.972 [2024-11-17 02:50:50.402360] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:41.972 [2024-11-17 02:50:50.402382] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:41.972 [2024-11-17 02:50:50.402397] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:41.972 [2024-11-17 02:50:50.402409] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700
00:30:41.972 [2024-11-17 02:50:50.402425] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms)
00:30:41.972 [2024-11-17 02:50:50.402455] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:41.972 [2024-11-17 02:50:50.402472] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:41.972 [2024-11-17 02:50:50.402484] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700)
00:30:41.972 [2024-11-17 02:50:50.402518] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:41.972 [2024-11-17 02:50:50.402551] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0
00:30:41.972 [2024-11-17 02:50:50.402723] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:41.972 [2024-11-17 02:50:50.402748] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:41.973 [2024-11-17 02:50:50.402761] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:41.973 [2024-11-17 02:50:50.402772] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700
00:30:41.973 [2024-11-17 02:50:50.402788] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0
00:30:41.973 [2024-11-17 02:50:50.402804] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms)
00:30:41.973 [2024-11-17 02:50:50.402839] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms)
00:30:41.973 [2024-11-17 02:50:50.402959] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1
00:30:41.973 [2024-11-17 02:50:50.402974] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms)
00:30:41.973 [2024-11-17 02:50:50.403001] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:41.973 [2024-11-17 02:50:50.403015] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:41.973 [2024-11-17 02:50:50.403027] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700)
00:30:41.973 [2024-11-17 02:50:50.403047] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:41.973 [2024-11-17 02:50:50.403079] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0
00:30:41.973 [2024-11-17 02:50:50.403222] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:41.973 [2024-11-17 02:50:50.403244] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:41.973 [2024-11-17 02:50:50.403256] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:41.973 [2024-11-17 02:50:50.403267] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700
00:30:41.973 [2024-11-17 02:50:50.403283] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)
00:30:41.973 [2024-11-17 02:50:50.403320] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:41.973 [2024-11-17 02:50:50.403337] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:41.973 [2024-11-17 02:50:50.403350] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700)
00:30:41.973 [2024-11-17 02:50:50.403389] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:41.973 [2024-11-17
02:50:50.403427] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:41.973 [2024-11-17 02:50:50.403603] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:41.973 [2024-11-17 02:50:50.403625] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:41.973 [2024-11-17 02:50:50.403642] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:41.973 [2024-11-17 02:50:50.403658] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:41.973 [2024-11-17 02:50:50.403674] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:30:41.973 [2024-11-17 02:50:50.403688] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:30:41.973 [2024-11-17 02:50:50.403713] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:30:41.973 [2024-11-17 02:50:50.403735] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:30:41.973 [2024-11-17 02:50:50.403783] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:41.973 [2024-11-17 02:50:50.403798] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:41.973 [2024-11-17 02:50:50.403827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.973 [2024-11-17 02:50:50.403860] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:41.973 [2024-11-17 02:50:50.404067] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:41.973 [2024-11-17 02:50:50.404093] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:41.973 [2024-11-17 02:50:50.404118] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:41.973 [2024-11-17 02:50:50.404144] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=0 00:30:41.973 [2024-11-17 02:50:50.404177] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:41.973 [2024-11-17 02:50:50.404192] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:41.973 [2024-11-17 02:50:50.404215] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:41.973 [2024-11-17 02:50:50.404230] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:41.973 [2024-11-17 02:50:50.404257] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:41.973 [2024-11-17 02:50:50.404275] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:41.973 [2024-11-17 02:50:50.404287] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:41.973 [2024-11-17 02:50:50.404298] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:41.973 [2024-11-17 02:50:50.404325] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:30:41.973 [2024-11-17 02:50:50.404343] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:30:41.973 [2024-11-17 02:50:50.404356] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:30:41.973 [2024-11-17 02:50:50.404396] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:30:41.973 [2024-11-17 02:50:50.404412] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:30:41.973 [2024-11-17 02:50:50.404430] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:30:41.973 [2024-11-17 02:50:50.404478] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:30:41.973 [2024-11-17 02:50:50.404501] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:41.973 [2024-11-17 02:50:50.404514] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:41.973 [2024-11-17 02:50:50.404525] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:41.973 [2024-11-17 02:50:50.404544] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:41.973 [2024-11-17 02:50:50.404575] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:41.973 [2024-11-17 02:50:50.404733] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:41.973 [2024-11-17 02:50:50.404756] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:41.973 [2024-11-17 02:50:50.404771] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:41.973 [2024-11-17 02:50:50.404783] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:41.973 [2024-11-17 02:50:50.404813] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:41.973 [2024-11-17 02:50:50.404833] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:41.973 [2024-11-17 
02:50:50.404844] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:41.974 [2024-11-17 02:50:50.404867] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:41.974 [2024-11-17 02:50:50.404906] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:41.974 [2024-11-17 02:50:50.404919] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:41.974 [2024-11-17 02:50:50.404929] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x615000015700) 00:30:41.974 [2024-11-17 02:50:50.404945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:41.974 [2024-11-17 02:50:50.404961] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:41.974 [2024-11-17 02:50:50.404973] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:41.974 [2024-11-17 02:50:50.404983] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x615000015700) 00:30:41.974 [2024-11-17 02:50:50.405012] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:41.974 [2024-11-17 02:50:50.405028] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:41.974 [2024-11-17 02:50:50.405039] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:41.974 [2024-11-17 02:50:50.405049] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:41.974 [2024-11-17 02:50:50.405068] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:41.974 [2024-11-17 02:50:50.405084] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:30:41.974 [2024-11-17 02:50:50.409131] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:30:41.974 [2024-11-17 02:50:50.409157] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:41.974 [2024-11-17 02:50:50.409171] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:41.974 [2024-11-17 02:50:50.409190] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.974 [2024-11-17 02:50:50.409226] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:41.974 [2024-11-17 02:50:50.409259] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:30:41.974 [2024-11-17 02:50:50.409277] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:30:41.974 [2024-11-17 02:50:50.409290] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:41.974 [2024-11-17 02:50:50.409303] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:41.974 [2024-11-17 02:50:50.409512] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:41.974 [2024-11-17 02:50:50.409538] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:41.974 [2024-11-17 02:50:50.409570] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:41.974 [2024-11-17 02:50:50.409584] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:41.974 [2024-11-17 02:50:50.409602] 
nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:30:41.974 [2024-11-17 02:50:50.409623] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:30:41.974 [2024-11-17 02:50:50.409657] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:41.974 [2024-11-17 02:50:50.409674] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:41.974 [2024-11-17 02:50:50.409694] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.974 [2024-11-17 02:50:50.409730] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:41.974 [2024-11-17 02:50:50.409936] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:41.974 [2024-11-17 02:50:50.409961] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:41.974 [2024-11-17 02:50:50.409974] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:41.974 [2024-11-17 02:50:50.409986] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=4 00:30:41.974 [2024-11-17 02:50:50.410020] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:41.974 [2024-11-17 02:50:50.410034] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:41.974 [2024-11-17 02:50:50.410067] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:41.974 [2024-11-17 02:50:50.410084] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:41.974 [2024-11-17 02:50:50.410113] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:30:41.974 [2024-11-17 02:50:50.410133] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:41.974 [2024-11-17 02:50:50.410144] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:41.974 [2024-11-17 02:50:50.410156] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:41.974 [2024-11-17 02:50:50.410197] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:30:41.974 [2024-11-17 02:50:50.410289] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:41.974 [2024-11-17 02:50:50.410307] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:41.974 [2024-11-17 02:50:50.410332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.974 [2024-11-17 02:50:50.410360] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:41.974 [2024-11-17 02:50:50.410374] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:41.974 [2024-11-17 02:50:50.410385] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:30:41.974 [2024-11-17 02:50:50.410403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:30:41.974 [2024-11-17 02:50:50.410455] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:41.974 [2024-11-17 02:50:50.410473] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:41.974 [2024-11-17 02:50:50.410784] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:41.974 [2024-11-17 02:50:50.410811] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type 
=7 00:30:41.974 [2024-11-17 02:50:50.410824] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:41.974 [2024-11-17 02:50:50.410836] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=1024, cccid=4 00:30:41.974 [2024-11-17 02:50:50.410849] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=1024 00:30:41.974 [2024-11-17 02:50:50.410861] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:41.974 [2024-11-17 02:50:50.410903] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:41.974 [2024-11-17 02:50:50.410918] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:41.974 [2024-11-17 02:50:50.410933] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:41.974 [2024-11-17 02:50:50.410948] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:41.974 [2024-11-17 02:50:50.410958] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:41.974 [2024-11-17 02:50:50.410976] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:30:42.234 [2024-11-17 02:50:50.451219] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.234 [2024-11-17 02:50:50.451251] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.234 [2024-11-17 02:50:50.451265] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.234 [2024-11-17 02:50:50.451277] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:42.234 [2024-11-17 02:50:50.451326] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.234 [2024-11-17 02:50:50.451347] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:42.234 [2024-11-17 
02:50:50.451372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.234 [2024-11-17 02:50:50.451419] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:42.234 [2024-11-17 02:50:50.451623] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:42.234 [2024-11-17 02:50:50.451649] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:42.234 [2024-11-17 02:50:50.451662] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:42.234 [2024-11-17 02:50:50.451673] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=3072, cccid=4 00:30:42.234 [2024-11-17 02:50:50.451686] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=3072 00:30:42.234 [2024-11-17 02:50:50.451698] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.234 [2024-11-17 02:50:50.451717] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:42.234 [2024-11-17 02:50:50.451730] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:42.234 [2024-11-17 02:50:50.451773] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.234 [2024-11-17 02:50:50.451794] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.234 [2024-11-17 02:50:50.451819] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.234 [2024-11-17 02:50:50.451832] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:42.234 [2024-11-17 02:50:50.451861] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.234 [2024-11-17 02:50:50.451878] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x615000015700)
00:30:42.234 [2024-11-17 02:50:50.451921] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:42.234 [2024-11-17 02:50:50.451987] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0
00:30:42.234 [2024-11-17 02:50:50.452238] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:30:42.234 [2024-11-17 02:50:50.452261] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:30:42.234 [2024-11-17 02:50:50.452274] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:30:42.234 [2024-11-17 02:50:50.452284] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=8, cccid=4
00:30:42.234 [2024-11-17 02:50:50.452297] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=8
00:30:42.234 [2024-11-17 02:50:50.452308] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:42.234 [2024-11-17 02:50:50.452331] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:30:42.234 [2024-11-17 02:50:50.452345] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:30:42.234 [2024-11-17 02:50:50.497141] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:42.234 [2024-11-17 02:50:50.497185] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:42.234 [2024-11-17 02:50:50.497199] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:42.234 [2024-11-17 02:50:50.497212] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700
00:30:42.234 =====================================================
00:30:42.234 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:30:42.234 =====================================================
00:30:42.234 Controller Capabilities/Features
00:30:42.235 ================================
00:30:42.235 Vendor ID: 0000
00:30:42.235 Subsystem Vendor ID: 0000
00:30:42.235 Serial Number: ....................
00:30:42.235 Model Number: ........................................
00:30:42.235 Firmware Version: 25.01
00:30:42.235 Recommended Arb Burst: 0
00:30:42.235 IEEE OUI Identifier: 00 00 00
00:30:42.235 Multi-path I/O
00:30:42.235 May have multiple subsystem ports: No
00:30:42.235 May have multiple controllers: No
00:30:42.235 Associated with SR-IOV VF: No
00:30:42.235 Max Data Transfer Size: 131072
00:30:42.235 Max Number of Namespaces: 0
00:30:42.235 Max Number of I/O Queues: 1024
00:30:42.235 NVMe Specification Version (VS): 1.3
00:30:42.235 NVMe Specification Version (Identify): 1.3
00:30:42.235 Maximum Queue Entries: 128
00:30:42.235 Contiguous Queues Required: Yes
00:30:42.235 Arbitration Mechanisms Supported
00:30:42.235 Weighted Round Robin: Not Supported
00:30:42.235 Vendor Specific: Not Supported
00:30:42.235 Reset Timeout: 15000 ms
00:30:42.235 Doorbell Stride: 4 bytes
00:30:42.235 NVM Subsystem Reset: Not Supported
00:30:42.235 Command Sets Supported
00:30:42.235 NVM Command Set: Supported
00:30:42.235 Boot Partition: Not Supported
00:30:42.235 Memory Page Size Minimum: 4096 bytes
00:30:42.235 Memory Page Size Maximum: 4096 bytes
00:30:42.235 Persistent Memory Region: Not Supported
00:30:42.235 Optional Asynchronous Events Supported
00:30:42.235 Namespace Attribute Notices: Not Supported
00:30:42.235 Firmware Activation Notices: Not Supported
00:30:42.235 ANA Change Notices: Not Supported
00:30:42.235 PLE Aggregate Log Change Notices: Not Supported
00:30:42.235 LBA Status Info Alert Notices: Not Supported
00:30:42.235 EGE Aggregate Log Change Notices: Not Supported
00:30:42.235 Normal NVM Subsystem Shutdown event: Not Supported
00:30:42.235 Zone Descriptor Change Notices: Not Supported
00:30:42.235 Discovery Log Change Notices: Supported
00:30:42.235 Controller Attributes
00:30:42.235 128-bit Host Identifier: Not Supported
00:30:42.235 Non-Operational Permissive Mode: Not Supported
00:30:42.235 NVM Sets: Not Supported
00:30:42.235 Read Recovery Levels: Not Supported
00:30:42.235 Endurance Groups: Not Supported
00:30:42.235 Predictable Latency Mode: Not Supported
00:30:42.235 Traffic Based Keep ALive: Not Supported
00:30:42.235 Namespace Granularity: Not Supported
00:30:42.235 SQ Associations: Not Supported
00:30:42.235 UUID List: Not Supported
00:30:42.235 Multi-Domain Subsystem: Not Supported
00:30:42.235 Fixed Capacity Management: Not Supported
00:30:42.235 Variable Capacity Management: Not Supported
00:30:42.235 Delete Endurance Group: Not Supported
00:30:42.235 Delete NVM Set: Not Supported
00:30:42.235 Extended LBA Formats Supported: Not Supported
00:30:42.235 Flexible Data Placement Supported: Not Supported
00:30:42.235
00:30:42.235 Controller Memory Buffer Support
00:30:42.235 ================================
00:30:42.235 Supported: No
00:30:42.235
00:30:42.235 Persistent Memory Region Support
00:30:42.235 ================================
00:30:42.235 Supported: No
00:30:42.235
00:30:42.235 Admin Command Set Attributes
00:30:42.235 ============================
00:30:42.235 Security Send/Receive: Not Supported
00:30:42.235 Format NVM: Not Supported
00:30:42.235 Firmware Activate/Download: Not Supported
00:30:42.235 Namespace Management: Not Supported
00:30:42.235 Device Self-Test: Not Supported
00:30:42.235 Directives: Not Supported
00:30:42.235 NVMe-MI: Not Supported
00:30:42.235 Virtualization Management: Not Supported
00:30:42.235 Doorbell Buffer Config: Not Supported
00:30:42.235 Get LBA Status Capability: Not Supported
00:30:42.235 Command & Feature Lockdown Capability: Not Supported
00:30:42.235 Abort Command Limit: 1
00:30:42.235 Async Event Request Limit: 4
00:30:42.235 Number of Firmware Slots: N/A
00:30:42.235 Firmware Slot 1 Read-Only: N/A
00:30:42.235 Firmware Activation Without Reset: N/A
00:30:42.235 Multiple Update Detection Support: N/A
00:30:42.235 Firmware Update Granularity: No Information Provided
00:30:42.235 Per-Namespace SMART Log: No
00:30:42.235 Asymmetric Namespace Access Log Page: Not Supported
00:30:42.235 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:30:42.235 Command Effects Log Page: Not Supported
00:30:42.235 Get Log Page Extended Data: Supported
00:30:42.235 Telemetry Log Pages: Not Supported
00:30:42.235 Persistent Event Log Pages: Not Supported
00:30:42.235 Supported Log Pages Log Page: May Support
00:30:42.235 Commands Supported & Effects Log Page: Not Supported
00:30:42.235 Feature Identifiers & Effects Log Page:May Support
00:30:42.235 NVMe-MI Commands & Effects Log Page: May Support
00:30:42.235 Data Area 4 for Telemetry Log: Not Supported
00:30:42.235 Error Log Page Entries Supported: 128
00:30:42.235 Keep Alive: Not Supported
00:30:42.235
00:30:42.235 NVM Command Set Attributes
00:30:42.235 ==========================
00:30:42.235 Submission Queue Entry Size
00:30:42.235 Max: 1
00:30:42.235 Min: 1
00:30:42.235 Completion Queue Entry Size
00:30:42.235 Max: 1
00:30:42.235 Min: 1
00:30:42.235 Number of Namespaces: 0
00:30:42.235 Compare Command: Not Supported
00:30:42.235 Write Uncorrectable Command: Not Supported
00:30:42.235 Dataset Management Command: Not Supported
00:30:42.235 Write Zeroes Command: Not Supported
00:30:42.235 Set Features Save Field: Not Supported
00:30:42.235 Reservations: Not Supported
00:30:42.235 Timestamp: Not Supported
00:30:42.235 Copy: Not Supported
00:30:42.235 Volatile Write Cache: Not Present
00:30:42.235 Atomic Write Unit (Normal): 1
00:30:42.235 Atomic Write Unit (PFail): 1
00:30:42.235 Atomic Compare & Write Unit: 1
00:30:42.235 Fused Compare & Write: Supported
00:30:42.235 Scatter-Gather List
00:30:42.235 SGL Command Set: Supported
00:30:42.235 SGL Keyed: Supported
00:30:42.235 SGL Bit Bucket Descriptor: Not Supported
00:30:42.235 SGL Metadata Pointer: Not Supported
00:30:42.235 Oversized SGL: Not Supported
00:30:42.235 SGL Metadata Address: Not Supported
00:30:42.235 SGL Offset: Supported
00:30:42.235 Transport SGL Data Block: Not Supported
00:30:42.235 Replay Protected Memory Block: Not Supported
00:30:42.235
00:30:42.235 Firmware Slot Information
00:30:42.235 =========================
00:30:42.235 Active slot: 0
00:30:42.235
00:30:42.235
00:30:42.235 Error Log
00:30:42.235 =========
00:30:42.235
00:30:42.235 Active Namespaces
00:30:42.235 =================
00:30:42.235 Discovery Log Page
00:30:42.235 ==================
00:30:42.235 Generation Counter: 2
00:30:42.235 Number of Records: 2
00:30:42.235 Record Format: 0
00:30:42.235
00:30:42.235 Discovery Log Entry 0
00:30:42.235 ----------------------
00:30:42.235 Transport Type: 3 (TCP)
00:30:42.235 Address Family: 1 (IPv4)
00:30:42.235 Subsystem Type: 3 (Current Discovery Subsystem)
00:30:42.236 Entry Flags:
00:30:42.236 Duplicate Returned Information: 1
00:30:42.236 Explicit Persistent Connection Support for Discovery: 1
00:30:42.236 Transport Requirements:
00:30:42.236 Secure Channel: Not Required
00:30:42.236 Port ID: 0 (0x0000)
00:30:42.236 Controller ID: 65535 (0xffff)
00:30:42.236 Admin Max SQ Size: 128
00:30:42.236 Transport Service Identifier: 4420
00:30:42.236 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:30:42.236 Transport Address: 10.0.0.2
00:30:42.236 Discovery Log Entry 1
00:30:42.236 ----------------------
00:30:42.236 Transport Type: 3 (TCP)
00:30:42.236 Address Family: 1 (IPv4)
00:30:42.236 Subsystem Type: 2 (NVM Subsystem)
00:30:42.236 Entry Flags:
00:30:42.236 Duplicate Returned Information: 0
00:30:42.236 Explicit Persistent Connection Support for Discovery: 0
00:30:42.236 Transport Requirements:
00:30:42.236 Secure Channel: Not Required
00:30:42.236 Port ID: 0 (0x0000)
00:30:42.236 Controller ID: 65535 (0xffff)
00:30:42.236 Admin Max SQ Size: 128
00:30:42.236 Transport Service Identifier: 4420
00:30:42.236 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:30:42.236 Transport Address: 10.0.0.2 [2024-11-17 02:50:50.497413] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:30:42.236 [2024-11-17 02:50:50.497449] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:42.236 [2024-11-17 02:50:50.497488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.236 [2024-11-17 02:50:50.497505] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x615000015700 00:30:42.236 [2024-11-17 02:50:50.497520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.236 [2024-11-17 02:50:50.497547] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x615000015700 00:30:42.236 [2024-11-17 02:50:50.497561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.236 [2024-11-17 02:50:50.497573] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:42.236 [2024-11-17 02:50:50.497586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.236 [2024-11-17 02:50:50.497608] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.236 [2024-11-17 02:50:50.497622] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.236 [2024-11-17 02:50:50.497634] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:42.236 [2024-11-17 02:50:50.497653] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.236 [2024-11-17 02:50:50.497689] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:42.236 [2024-11-17 02:50:50.497838] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.236 [2024-11-17 02:50:50.497863] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.236 [2024-11-17 02:50:50.497876] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.236 [2024-11-17 02:50:50.497888] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:42.236 [2024-11-17 02:50:50.497912] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.236 [2024-11-17 02:50:50.497926] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.236 [2024-11-17 02:50:50.497938] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:42.236 [2024-11-17 02:50:50.497969] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.236 [2024-11-17 02:50:50.498029] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:42.236 [2024-11-17 02:50:50.498235] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.236 [2024-11-17 02:50:50.498258] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.236 [2024-11-17 02:50:50.498270] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.236 [2024-11-17 02:50:50.498281] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:42.236 [2024-11-17 02:50:50.498302] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:30:42.236 [2024-11-17 
02:50:50.498321] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:30:42.236 [2024-11-17 02:50:50.498362] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.236 [2024-11-17 02:50:50.498378] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.236 [2024-11-17 02:50:50.498391] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:42.236 [2024-11-17 02:50:50.498410] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.236 [2024-11-17 02:50:50.498443] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:42.236 [2024-11-17 02:50:50.498584] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.236 [2024-11-17 02:50:50.498621] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.236 [2024-11-17 02:50:50.498634] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.236 [2024-11-17 02:50:50.498645] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:42.236 [2024-11-17 02:50:50.498676] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.236 [2024-11-17 02:50:50.498693] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.236 [2024-11-17 02:50:50.498704] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:42.236 [2024-11-17 02:50:50.498723] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.236 [2024-11-17 02:50:50.498770] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:42.236 [2024-11-17 02:50:50.498910] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.236 [2024-11-17 02:50:50.498932] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.236 [2024-11-17 02:50:50.498945] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.236 [2024-11-17 02:50:50.498956] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:42.236 [2024-11-17 02:50:50.498985] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.236 [2024-11-17 02:50:50.499001] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.236 [2024-11-17 02:50:50.499012] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:42.236 [2024-11-17 02:50:50.499031] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.236 [2024-11-17 02:50:50.499061] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:42.236 [2024-11-17 02:50:50.499188] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.236 [2024-11-17 02:50:50.499210] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.236 [2024-11-17 02:50:50.499222] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.236 [2024-11-17 02:50:50.499233] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:42.236 [2024-11-17 02:50:50.499267] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.236 [2024-11-17 02:50:50.499284] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.236 [2024-11-17 02:50:50.499295] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:42.236 [2024-11-17 02:50:50.499313] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.236 [2024-11-17 02:50:50.499345] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:42.236 [2024-11-17 02:50:50.499491] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.236 [2024-11-17 02:50:50.499520] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.236 [2024-11-17 02:50:50.499533] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.236 [2024-11-17 02:50:50.499545] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:42.236 [2024-11-17 02:50:50.499574] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.236 [2024-11-17 02:50:50.499590] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.236 [2024-11-17 02:50:50.499601] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:42.236 [2024-11-17 02:50:50.499619] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.236 [2024-11-17 02:50:50.499664] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:42.236 [2024-11-17 02:50:50.499808] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.237 [2024-11-17 02:50:50.499829] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.237 [2024-11-17 02:50:50.499856] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.237 [2024-11-17 02:50:50.499867] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:42.237 [2024-11-17 02:50:50.499897] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.237 [2024-11-17 
02:50:50.499913] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.237 [2024-11-17 02:50:50.499924] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:42.237 [2024-11-17 02:50:50.499943] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.237 [2024-11-17 02:50:50.499973] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:42.237 [2024-11-17 02:50:50.500123] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.237 [2024-11-17 02:50:50.500153] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.237 [2024-11-17 02:50:50.500165] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.237 [2024-11-17 02:50:50.500176] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:42.237 [2024-11-17 02:50:50.500205] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.237 [2024-11-17 02:50:50.500221] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.237 [2024-11-17 02:50:50.500232] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:42.237 [2024-11-17 02:50:50.500250] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.237 [2024-11-17 02:50:50.500281] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:42.237 [2024-11-17 02:50:50.500396] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.237 [2024-11-17 02:50:50.500420] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.237 [2024-11-17 02:50:50.500433] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:30:42.237 [2024-11-17 02:50:50.500444] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:42.237 [2024-11-17 02:50:50.500478] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.237 [2024-11-17 02:50:50.500494] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.237 [2024-11-17 02:50:50.500505] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:42.237 [2024-11-17 02:50:50.500529] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.237 [2024-11-17 02:50:50.500562] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:42.237 [2024-11-17 02:50:50.500705] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.237 [2024-11-17 02:50:50.500726] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.237 [2024-11-17 02:50:50.500737] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.237 [2024-11-17 02:50:50.500748] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:42.237 [2024-11-17 02:50:50.500778] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.237 [2024-11-17 02:50:50.500793] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.237 [2024-11-17 02:50:50.500805] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:42.237 [2024-11-17 02:50:50.500823] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.237 [2024-11-17 02:50:50.500853] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:42.237 [2024-11-17 02:50:50.501011] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.237 [2024-11-17 02:50:50.501032] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.237 [2024-11-17 02:50:50.501045] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.237 [2024-11-17 02:50:50.501070] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:42.237 [2024-11-17 02:50:50.505129] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.237 [2024-11-17 02:50:50.505150] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.237 [2024-11-17 02:50:50.505162] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:42.237 [2024-11-17 02:50:50.505180] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.237 [2024-11-17 02:50:50.505227] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:42.237 [2024-11-17 02:50:50.505373] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.237 [2024-11-17 02:50:50.505395] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.237 [2024-11-17 02:50:50.505411] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.237 [2024-11-17 02:50:50.505423] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:42.237 [2024-11-17 02:50:50.505447] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 7 milliseconds 00:30:42.237 00:30:42.237 02:50:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:30:42.237 [2024-11-17 02:50:50.609634] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:30:42.237 [2024-11-17 02:50:50.609732] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3072353 ] 00:30:42.237 [2024-11-17 02:50:50.691240] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:30:42.237 [2024-11-17 02:50:50.691386] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:30:42.237 [2024-11-17 02:50:50.691425] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:30:42.237 [2024-11-17 02:50:50.691472] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:30:42.237 [2024-11-17 02:50:50.691501] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:30:42.237 [2024-11-17 02:50:50.692454] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:30:42.237 [2024-11-17 02:50:50.692540] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x615000015700 0 00:30:42.498 [2024-11-17 02:50:50.706139] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:30:42.498 [2024-11-17 02:50:50.706178] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:30:42.498 [2024-11-17 02:50:50.706197] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:30:42.498 [2024-11-17 02:50:50.706209] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:30:42.498 [2024-11-17 02:50:50.706290] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:30:42.498 [2024-11-17 02:50:50.706312] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.499 [2024-11-17 02:50:50.706332] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:42.499 [2024-11-17 02:50:50.706370] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:30:42.499 [2024-11-17 02:50:50.706412] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:42.499 [2024-11-17 02:50:50.714129] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.499 [2024-11-17 02:50:50.714160] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.499 [2024-11-17 02:50:50.714175] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.499 [2024-11-17 02:50:50.714195] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:42.499 [2024-11-17 02:50:50.714228] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:30:42.499 [2024-11-17 02:50:50.714253] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:30:42.499 [2024-11-17 02:50:50.714272] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:30:42.499 [2024-11-17 02:50:50.714308] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.499 [2024-11-17 02:50:50.714324] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.499 [2024-11-17 02:50:50.714336] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:42.499 [2024-11-17 02:50:50.714358] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:30:42.499 [2024-11-17 02:50:50.714411] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:42.499 [2024-11-17 02:50:50.714600] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.499 [2024-11-17 02:50:50.714624] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.499 [2024-11-17 02:50:50.714648] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.499 [2024-11-17 02:50:50.714670] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:42.499 [2024-11-17 02:50:50.714701] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:30:42.499 [2024-11-17 02:50:50.714727] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:30:42.499 [2024-11-17 02:50:50.714758] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.499 [2024-11-17 02:50:50.714774] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.499 [2024-11-17 02:50:50.714786] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:42.499 [2024-11-17 02:50:50.714811] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.499 [2024-11-17 02:50:50.714863] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:42.499 [2024-11-17 02:50:50.715049] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.499 [2024-11-17 02:50:50.715071] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.499 [2024-11-17 02:50:50.715084] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.499 [2024-11-17 02:50:50.715105] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:42.499 [2024-11-17 02:50:50.715129] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:30:42.499 [2024-11-17 02:50:50.715156] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:30:42.499 [2024-11-17 02:50:50.715178] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.499 [2024-11-17 02:50:50.715202] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.499 [2024-11-17 02:50:50.715214] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:42.499 [2024-11-17 02:50:50.715234] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.499 [2024-11-17 02:50:50.715269] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:42.499 [2024-11-17 02:50:50.715373] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.499 [2024-11-17 02:50:50.715395] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.499 [2024-11-17 02:50:50.715407] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.499 [2024-11-17 02:50:50.715418] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:42.499 [2024-11-17 02:50:50.715434] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:30:42.499 [2024-11-17 02:50:50.715470] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.499 [2024-11-17 02:50:50.715488] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:30:42.499 [2024-11-17 02:50:50.715500] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:42.499 [2024-11-17 02:50:50.715520] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.499 [2024-11-17 02:50:50.715552] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:42.499 [2024-11-17 02:50:50.715644] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.499 [2024-11-17 02:50:50.715665] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.499 [2024-11-17 02:50:50.715676] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.499 [2024-11-17 02:50:50.715687] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:42.499 [2024-11-17 02:50:50.715704] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:30:42.499 [2024-11-17 02:50:50.715727] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:30:42.499 [2024-11-17 02:50:50.715754] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:30:42.499 [2024-11-17 02:50:50.715874] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:30:42.499 [2024-11-17 02:50:50.715894] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:30:42.499 [2024-11-17 02:50:50.715937] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.499 [2024-11-17 02:50:50.715951] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.499 [2024-11-17 02:50:50.715963] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:42.499 [2024-11-17 02:50:50.715982] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.499 [2024-11-17 02:50:50.716014] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:42.499 [2024-11-17 02:50:50.716184] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.499 [2024-11-17 02:50:50.716207] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.499 [2024-11-17 02:50:50.716219] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.499 [2024-11-17 02:50:50.716230] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:42.499 [2024-11-17 02:50:50.716247] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:30:42.499 [2024-11-17 02:50:50.716283] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.499 [2024-11-17 02:50:50.716300] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.499 [2024-11-17 02:50:50.716321] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:42.499 [2024-11-17 02:50:50.716343] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.499 [2024-11-17 02:50:50.716377] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:42.499 [2024-11-17 02:50:50.716491] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.499 [2024-11-17 02:50:50.716513] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.499 [2024-11-17 02:50:50.716525] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.499 [2024-11-17 02:50:50.716536] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:42.499 [2024-11-17 02:50:50.716556] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:30:42.500 [2024-11-17 02:50:50.716573] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:30:42.500 [2024-11-17 02:50:50.716596] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:30:42.500 [2024-11-17 02:50:50.716623] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:30:42.500 [2024-11-17 02:50:50.716659] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.500 [2024-11-17 02:50:50.716679] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:42.500 [2024-11-17 02:50:50.716701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.500 [2024-11-17 02:50:50.716747] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:42.500 [2024-11-17 02:50:50.717039] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:42.500 [2024-11-17 02:50:50.717061] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:42.500 [2024-11-17 02:50:50.717074] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:42.500 [2024-11-17 
02:50:50.717087] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=0 00:30:42.500 [2024-11-17 02:50:50.717121] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:42.500 [2024-11-17 02:50:50.717149] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.500 [2024-11-17 02:50:50.717188] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:42.500 [2024-11-17 02:50:50.717208] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:42.500 [2024-11-17 02:50:50.758139] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.500 [2024-11-17 02:50:50.758169] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.500 [2024-11-17 02:50:50.758182] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.500 [2024-11-17 02:50:50.758195] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:42.500 [2024-11-17 02:50:50.758223] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:30:42.500 [2024-11-17 02:50:50.758242] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:30:42.500 [2024-11-17 02:50:50.758262] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:30:42.500 [2024-11-17 02:50:50.758284] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:30:42.500 [2024-11-17 02:50:50.758300] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:30:42.500 [2024-11-17 02:50:50.758313] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting 
state to configure AER (timeout 30000 ms) 00:30:42.500 [2024-11-17 02:50:50.758347] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:30:42.500 [2024-11-17 02:50:50.758372] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.500 [2024-11-17 02:50:50.758387] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.500 [2024-11-17 02:50:50.758399] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:42.500 [2024-11-17 02:50:50.758426] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:42.500 [2024-11-17 02:50:50.758479] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:42.500 [2024-11-17 02:50:50.758612] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.500 [2024-11-17 02:50:50.758635] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.500 [2024-11-17 02:50:50.758647] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.500 [2024-11-17 02:50:50.758659] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:42.500 [2024-11-17 02:50:50.758682] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.500 [2024-11-17 02:50:50.758704] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.500 [2024-11-17 02:50:50.758716] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:42.500 [2024-11-17 02:50:50.758741] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:42.500 [2024-11-17 02:50:50.758762] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.500 [2024-11-17 02:50:50.758776] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.500 [2024-11-17 02:50:50.758786] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x615000015700) 00:30:42.500 [2024-11-17 02:50:50.758804] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:42.500 [2024-11-17 02:50:50.758822] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.500 [2024-11-17 02:50:50.758834] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.500 [2024-11-17 02:50:50.758850] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x615000015700) 00:30:42.500 [2024-11-17 02:50:50.758868] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:42.500 [2024-11-17 02:50:50.758885] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.500 [2024-11-17 02:50:50.758897] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.500 [2024-11-17 02:50:50.758923] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:42.500 [2024-11-17 02:50:50.758944] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:42.500 [2024-11-17 02:50:50.758961] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:30:42.500 [2024-11-17 02:50:50.759005] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:30:42.500 [2024-11-17 02:50:50.759027] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.500 [2024-11-17 02:50:50.759055] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:42.500 [2024-11-17 02:50:50.759075] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.500 [2024-11-17 02:50:50.759134] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:42.500 [2024-11-17 02:50:50.759155] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:30:42.500 [2024-11-17 02:50:50.759168] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:30:42.500 [2024-11-17 02:50:50.759181] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:42.500 [2024-11-17 02:50:50.759193] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:42.500 [2024-11-17 02:50:50.759365] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.500 [2024-11-17 02:50:50.759388] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.500 [2024-11-17 02:50:50.759409] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.500 [2024-11-17 02:50:50.759422] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:42.500 [2024-11-17 02:50:50.759441] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:30:42.500 [2024-11-17 02:50:50.759463] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:30:42.500 [2024-11-17 02:50:50.759488] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:30:42.500 [2024-11-17 02:50:50.759509] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:30:42.500 [2024-11-17 02:50:50.759528] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.500 [2024-11-17 02:50:50.759542] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.500 [2024-11-17 02:50:50.759554] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:42.500 [2024-11-17 02:50:50.759575] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:42.500 [2024-11-17 02:50:50.759623] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:42.500 [2024-11-17 02:50:50.759836] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.500 [2024-11-17 02:50:50.759859] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.500 [2024-11-17 02:50:50.759875] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.500 [2024-11-17 02:50:50.759888] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:42.501 [2024-11-17 02:50:50.759993] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:30:42.501 [2024-11-17 02:50:50.760036] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:30:42.501 [2024-11-17 02:50:50.760082] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.501 [2024-11-17 02:50:50.760112] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:42.501 [2024-11-17 02:50:50.760147] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.501 [2024-11-17 02:50:50.760183] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:42.501 [2024-11-17 02:50:50.760377] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:42.501 [2024-11-17 02:50:50.760399] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:42.501 [2024-11-17 02:50:50.760411] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:42.501 [2024-11-17 02:50:50.760422] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=4 00:30:42.501 [2024-11-17 02:50:50.760434] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:42.501 [2024-11-17 02:50:50.760446] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.501 [2024-11-17 02:50:50.760470] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:42.501 [2024-11-17 02:50:50.760485] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:42.501 [2024-11-17 02:50:50.760504] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.501 [2024-11-17 02:50:50.760522] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.501 [2024-11-17 02:50:50.760533] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.501 [2024-11-17 02:50:50.760544] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:42.501 [2024-11-17 02:50:50.760596] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:30:42.501 [2024-11-17 02:50:50.760631] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:30:42.501 [2024-11-17 02:50:50.760673] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:30:42.501 [2024-11-17 02:50:50.760719] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.501 [2024-11-17 02:50:50.760734] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:42.501 [2024-11-17 02:50:50.760754] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.501 [2024-11-17 02:50:50.760803] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:42.501 [2024-11-17 02:50:50.761022] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:42.501 [2024-11-17 02:50:50.761044] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:42.501 [2024-11-17 02:50:50.761056] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:42.501 [2024-11-17 02:50:50.761067] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=4 00:30:42.501 [2024-11-17 02:50:50.761080] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:42.501 [2024-11-17 02:50:50.761092] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.501 [2024-11-17 02:50:50.761125] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:42.501 [2024-11-17 02:50:50.761140] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:42.501 
[2024-11-17 02:50:50.761159] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.501 [2024-11-17 02:50:50.761177] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.501 [2024-11-17 02:50:50.761188] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.501 [2024-11-17 02:50:50.761199] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:42.501 [2024-11-17 02:50:50.761244] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:30:42.501 [2024-11-17 02:50:50.761282] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:30:42.501 [2024-11-17 02:50:50.761310] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.501 [2024-11-17 02:50:50.761340] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:42.501 [2024-11-17 02:50:50.761361] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.501 [2024-11-17 02:50:50.761394] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:42.501 [2024-11-17 02:50:50.761571] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:42.501 [2024-11-17 02:50:50.761592] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:42.501 [2024-11-17 02:50:50.761618] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:42.501 [2024-11-17 02:50:50.761630] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=4 00:30:42.501 [2024-11-17 02:50:50.761642] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:42.501 [2024-11-17 02:50:50.761654] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.501 [2024-11-17 02:50:50.761673] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:42.501 [2024-11-17 02:50:50.761686] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:42.501 [2024-11-17 02:50:50.761705] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.501 [2024-11-17 02:50:50.761722] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.501 [2024-11-17 02:50:50.761733] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.501 [2024-11-17 02:50:50.761745] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:42.501 [2024-11-17 02:50:50.761776] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:30:42.501 [2024-11-17 02:50:50.761803] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:30:42.501 [2024-11-17 02:50:50.761828] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:30:42.501 [2024-11-17 02:50:50.761850] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:30:42.501 [2024-11-17 02:50:50.761866] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:30:42.501 [2024-11-17 02:50:50.761897] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to 
set host ID (timeout 30000 ms) 00:30:42.501 [2024-11-17 02:50:50.761913] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:30:42.501 [2024-11-17 02:50:50.761932] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:30:42.501 [2024-11-17 02:50:50.761951] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:30:42.501 [2024-11-17 02:50:50.762008] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.501 [2024-11-17 02:50:50.762025] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:42.501 [2024-11-17 02:50:50.762046] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.501 [2024-11-17 02:50:50.762089] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.501 [2024-11-17 02:50:50.766126] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.501 [2024-11-17 02:50:50.766141] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:30:42.501 [2024-11-17 02:50:50.766177] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:30:42.501 [2024-11-17 02:50:50.766215] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:42.501 [2024-11-17 02:50:50.766235] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:42.501 [2024-11-17 02:50:50.766410] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.501 [2024-11-17 02:50:50.766436] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:30:42.501 [2024-11-17 02:50:50.766450] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.501 [2024-11-17 02:50:50.766463] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:42.501 [2024-11-17 02:50:50.766483] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.501 [2024-11-17 02:50:50.766500] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.501 [2024-11-17 02:50:50.766512] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.501 [2024-11-17 02:50:50.766523] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:30:42.501 [2024-11-17 02:50:50.766549] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.502 [2024-11-17 02:50:50.766565] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:30:42.502 [2024-11-17 02:50:50.766586] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.502 [2024-11-17 02:50:50.766619] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:42.502 [2024-11-17 02:50:50.766726] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.502 [2024-11-17 02:50:50.766747] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.502 [2024-11-17 02:50:50.766759] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.502 [2024-11-17 02:50:50.766770] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:30:42.502 [2024-11-17 02:50:50.766797] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.502 [2024-11-17 02:50:50.766814] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 
on tqpair(0x615000015700) 00:30:42.502 [2024-11-17 02:50:50.766838] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.502 [2024-11-17 02:50:50.766872] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:42.502 [2024-11-17 02:50:50.766978] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.502 [2024-11-17 02:50:50.767000] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.502 [2024-11-17 02:50:50.767012] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.502 [2024-11-17 02:50:50.767023] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:30:42.502 [2024-11-17 02:50:50.767054] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.502 [2024-11-17 02:50:50.767071] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:30:42.502 [2024-11-17 02:50:50.767090] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.502 [2024-11-17 02:50:50.767133] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:42.502 [2024-11-17 02:50:50.767237] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.502 [2024-11-17 02:50:50.767257] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.502 [2024-11-17 02:50:50.767269] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.502 [2024-11-17 02:50:50.767281] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:30:42.502 [2024-11-17 02:50:50.767327] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.502 
[2024-11-17 02:50:50.767346] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:30:42.502 [2024-11-17 02:50:50.767367] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.502 [2024-11-17 02:50:50.767391] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.502 [2024-11-17 02:50:50.767407] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:42.502 [2024-11-17 02:50:50.767426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.502 [2024-11-17 02:50:50.767450] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.502 [2024-11-17 02:50:50.767465] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x615000015700) 00:30:42.502 [2024-11-17 02:50:50.767490] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.502 [2024-11-17 02:50:50.767522] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.502 [2024-11-17 02:50:50.767539] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x615000015700) 00:30:42.502 [2024-11-17 02:50:50.767558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.502 [2024-11-17 02:50:50.767607] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:42.502 [2024-11-17 02:50:50.767625] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x62600001b700, cid 4, qid 0 00:30:42.502 [2024-11-17 02:50:50.767653] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001ba00, cid 6, qid 0 00:30:42.502 [2024-11-17 02:50:50.767666] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:30:42.502 [2024-11-17 02:50:50.768066] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:42.502 [2024-11-17 02:50:50.768090] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:42.502 [2024-11-17 02:50:50.768112] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:42.502 [2024-11-17 02:50:50.768124] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=8192, cccid=5 00:30:42.502 [2024-11-17 02:50:50.768138] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b880) on tqpair(0x615000015700): expected_datao=0, payload_size=8192 00:30:42.502 [2024-11-17 02:50:50.768150] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.502 [2024-11-17 02:50:50.768191] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:42.502 [2024-11-17 02:50:50.768207] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:42.502 [2024-11-17 02:50:50.768228] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:42.502 [2024-11-17 02:50:50.768250] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:42.502 [2024-11-17 02:50:50.768263] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:42.502 [2024-11-17 02:50:50.768274] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=512, cccid=4 00:30:42.502 [2024-11-17 02:50:50.768287] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=512 00:30:42.502 [2024-11-17 02:50:50.768298] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.502 [2024-11-17 02:50:50.768325] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:42.502 [2024-11-17 02:50:50.768340] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:42.502 [2024-11-17 02:50:50.768355] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:42.502 [2024-11-17 02:50:50.768371] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:42.502 [2024-11-17 02:50:50.768383] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:42.502 [2024-11-17 02:50:50.768393] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=512, cccid=6 00:30:42.502 [2024-11-17 02:50:50.768405] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001ba00) on tqpair(0x615000015700): expected_datao=0, payload_size=512 00:30:42.502 [2024-11-17 02:50:50.768432] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.502 [2024-11-17 02:50:50.768449] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:42.502 [2024-11-17 02:50:50.768461] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:42.502 [2024-11-17 02:50:50.768475] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:42.502 [2024-11-17 02:50:50.768505] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:42.502 [2024-11-17 02:50:50.768516] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:42.502 [2024-11-17 02:50:50.768526] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=7 00:30:42.502 [2024-11-17 02:50:50.768537] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001bb80) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:42.502 [2024-11-17 02:50:50.768548] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.502 [2024-11-17 02:50:50.768564] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:42.502 [2024-11-17 02:50:50.768575] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:42.502 [2024-11-17 02:50:50.768589] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.502 [2024-11-17 02:50:50.768604] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.502 [2024-11-17 02:50:50.768614] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.502 [2024-11-17 02:50:50.768626] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:30:42.502 [2024-11-17 02:50:50.768663] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.502 [2024-11-17 02:50:50.768681] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.502 [2024-11-17 02:50:50.768692] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.502 [2024-11-17 02:50:50.768703] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:42.502 [2024-11-17 02:50:50.768732] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.502 [2024-11-17 02:50:50.768750] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.502 [2024-11-17 02:50:50.768761] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.502 [2024-11-17 02:50:50.768772] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001ba00) on tqpair=0x615000015700 00:30:42.502 [2024-11-17 02:50:50.768791] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.503 [2024-11-17 02:50:50.768808] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.503 [2024-11-17 02:50:50.768819] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:30:42.503 [2024-11-17 02:50:50.768833] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x615000015700 00:30:42.503 ===================================================== 00:30:42.503 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:42.503 ===================================================== 00:30:42.503 Controller Capabilities/Features 00:30:42.503 ================================ 00:30:42.503 Vendor ID: 8086 00:30:42.503 Subsystem Vendor ID: 8086 00:30:42.503 Serial Number: SPDK00000000000001 00:30:42.503 Model Number: SPDK bdev Controller 00:30:42.503 Firmware Version: 25.01 00:30:42.503 Recommended Arb Burst: 6 00:30:42.503 IEEE OUI Identifier: e4 d2 5c 00:30:42.503 Multi-path I/O 00:30:42.503 May have multiple subsystem ports: Yes 00:30:42.503 May have multiple controllers: Yes 00:30:42.503 Associated with SR-IOV VF: No 00:30:42.503 Max Data Transfer Size: 131072 00:30:42.503 Max Number of Namespaces: 32 00:30:42.503 Max Number of I/O Queues: 127 00:30:42.503 NVMe Specification Version (VS): 1.3 00:30:42.503 NVMe Specification Version (Identify): 1.3 00:30:42.503 Maximum Queue Entries: 128 00:30:42.503 Contiguous Queues Required: Yes 00:30:42.503 Arbitration Mechanisms Supported 00:30:42.503 Weighted Round Robin: Not Supported 00:30:42.503 Vendor Specific: Not Supported 00:30:42.503 Reset Timeout: 15000 ms 00:30:42.503 Doorbell Stride: 4 bytes 00:30:42.503 NVM Subsystem Reset: Not Supported 00:30:42.503 Command Sets Supported 00:30:42.503 NVM Command Set: Supported 00:30:42.503 Boot Partition: Not Supported 00:30:42.503 Memory Page Size Minimum: 4096 bytes 00:30:42.503 Memory Page Size Maximum: 4096 bytes 00:30:42.503 Persistent Memory Region: Not Supported 00:30:42.503 Optional Asynchronous Events Supported 00:30:42.503 Namespace Attribute Notices: Supported 00:30:42.503 Firmware Activation Notices: Not Supported 00:30:42.503 ANA Change Notices: Not Supported 00:30:42.503 PLE 
Aggregate Log Change Notices: Not Supported 00:30:42.503 LBA Status Info Alert Notices: Not Supported 00:30:42.503 EGE Aggregate Log Change Notices: Not Supported 00:30:42.503 Normal NVM Subsystem Shutdown event: Not Supported 00:30:42.503 Zone Descriptor Change Notices: Not Supported 00:30:42.503 Discovery Log Change Notices: Not Supported 00:30:42.503 Controller Attributes 00:30:42.503 128-bit Host Identifier: Supported 00:30:42.503 Non-Operational Permissive Mode: Not Supported 00:30:42.503 NVM Sets: Not Supported 00:30:42.503 Read Recovery Levels: Not Supported 00:30:42.503 Endurance Groups: Not Supported 00:30:42.503 Predictable Latency Mode: Not Supported 00:30:42.503 Traffic Based Keep ALive: Not Supported 00:30:42.503 Namespace Granularity: Not Supported 00:30:42.503 SQ Associations: Not Supported 00:30:42.503 UUID List: Not Supported 00:30:42.503 Multi-Domain Subsystem: Not Supported 00:30:42.503 Fixed Capacity Management: Not Supported 00:30:42.503 Variable Capacity Management: Not Supported 00:30:42.503 Delete Endurance Group: Not Supported 00:30:42.503 Delete NVM Set: Not Supported 00:30:42.503 Extended LBA Formats Supported: Not Supported 00:30:42.503 Flexible Data Placement Supported: Not Supported 00:30:42.503 00:30:42.503 Controller Memory Buffer Support 00:30:42.503 ================================ 00:30:42.503 Supported: No 00:30:42.503 00:30:42.503 Persistent Memory Region Support 00:30:42.503 ================================ 00:30:42.503 Supported: No 00:30:42.503 00:30:42.503 Admin Command Set Attributes 00:30:42.503 ============================ 00:30:42.503 Security Send/Receive: Not Supported 00:30:42.503 Format NVM: Not Supported 00:30:42.503 Firmware Activate/Download: Not Supported 00:30:42.503 Namespace Management: Not Supported 00:30:42.503 Device Self-Test: Not Supported 00:30:42.503 Directives: Not Supported 00:30:42.503 NVMe-MI: Not Supported 00:30:42.503 Virtualization Management: Not Supported 00:30:42.503 Doorbell Buffer Config: 
Not Supported 00:30:42.503 Get LBA Status Capability: Not Supported 00:30:42.503 Command & Feature Lockdown Capability: Not Supported 00:30:42.503 Abort Command Limit: 4 00:30:42.503 Async Event Request Limit: 4 00:30:42.503 Number of Firmware Slots: N/A 00:30:42.503 Firmware Slot 1 Read-Only: N/A 00:30:42.503 Firmware Activation Without Reset: N/A 00:30:42.503 Multiple Update Detection Support: N/A 00:30:42.503 Firmware Update Granularity: No Information Provided 00:30:42.503 Per-Namespace SMART Log: No 00:30:42.503 Asymmetric Namespace Access Log Page: Not Supported 00:30:42.503 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:30:42.503 Command Effects Log Page: Supported 00:30:42.503 Get Log Page Extended Data: Supported 00:30:42.503 Telemetry Log Pages: Not Supported 00:30:42.503 Persistent Event Log Pages: Not Supported 00:30:42.503 Supported Log Pages Log Page: May Support 00:30:42.503 Commands Supported & Effects Log Page: Not Supported 00:30:42.503 Feature Identifiers & Effects Log Page:May Support 00:30:42.503 NVMe-MI Commands & Effects Log Page: May Support 00:30:42.503 Data Area 4 for Telemetry Log: Not Supported 00:30:42.503 Error Log Page Entries Supported: 128 00:30:42.503 Keep Alive: Supported 00:30:42.503 Keep Alive Granularity: 10000 ms 00:30:42.503 00:30:42.503 NVM Command Set Attributes 00:30:42.503 ========================== 00:30:42.503 Submission Queue Entry Size 00:30:42.503 Max: 64 00:30:42.503 Min: 64 00:30:42.503 Completion Queue Entry Size 00:30:42.503 Max: 16 00:30:42.503 Min: 16 00:30:42.503 Number of Namespaces: 32 00:30:42.503 Compare Command: Supported 00:30:42.503 Write Uncorrectable Command: Not Supported 00:30:42.503 Dataset Management Command: Supported 00:30:42.503 Write Zeroes Command: Supported 00:30:42.503 Set Features Save Field: Not Supported 00:30:42.503 Reservations: Supported 00:30:42.503 Timestamp: Not Supported 00:30:42.503 Copy: Supported 00:30:42.503 Volatile Write Cache: Present 00:30:42.503 Atomic Write Unit (Normal): 
1 00:30:42.503 Atomic Write Unit (PFail): 1 00:30:42.503 Atomic Compare & Write Unit: 1 00:30:42.503 Fused Compare & Write: Supported 00:30:42.503 Scatter-Gather List 00:30:42.503 SGL Command Set: Supported 00:30:42.503 SGL Keyed: Supported 00:30:42.503 SGL Bit Bucket Descriptor: Not Supported 00:30:42.503 SGL Metadata Pointer: Not Supported 00:30:42.503 Oversized SGL: Not Supported 00:30:42.503 SGL Metadata Address: Not Supported 00:30:42.503 SGL Offset: Supported 00:30:42.503 Transport SGL Data Block: Not Supported 00:30:42.503 Replay Protected Memory Block: Not Supported 00:30:42.503 00:30:42.503 Firmware Slot Information 00:30:42.503 ========================= 00:30:42.503 Active slot: 1 00:30:42.503 Slot 1 Firmware Revision: 25.01 00:30:42.503 00:30:42.503 00:30:42.503 Commands Supported and Effects 00:30:42.503 ============================== 00:30:42.503 Admin Commands 00:30:42.504 -------------- 00:30:42.504 Get Log Page (02h): Supported 00:30:42.504 Identify (06h): Supported 00:30:42.504 Abort (08h): Supported 00:30:42.504 Set Features (09h): Supported 00:30:42.504 Get Features (0Ah): Supported 00:30:42.504 Asynchronous Event Request (0Ch): Supported 00:30:42.504 Keep Alive (18h): Supported 00:30:42.504 I/O Commands 00:30:42.504 ------------ 00:30:42.504 Flush (00h): Supported LBA-Change 00:30:42.504 Write (01h): Supported LBA-Change 00:30:42.504 Read (02h): Supported 00:30:42.504 Compare (05h): Supported 00:30:42.504 Write Zeroes (08h): Supported LBA-Change 00:30:42.504 Dataset Management (09h): Supported LBA-Change 00:30:42.504 Copy (19h): Supported LBA-Change 00:30:42.504 00:30:42.504 Error Log 00:30:42.504 ========= 00:30:42.504 00:30:42.504 Arbitration 00:30:42.504 =========== 00:30:42.504 Arbitration Burst: 1 00:30:42.504 00:30:42.504 Power Management 00:30:42.504 ================ 00:30:42.504 Number of Power States: 1 00:30:42.504 Current Power State: Power State #0 00:30:42.504 Power State #0: 00:30:42.504 Max Power: 0.00 W 00:30:42.504 
Non-Operational State: Operational 00:30:42.504 Entry Latency: Not Reported 00:30:42.504 Exit Latency: Not Reported 00:30:42.504 Relative Read Throughput: 0 00:30:42.504 Relative Read Latency: 0 00:30:42.504 Relative Write Throughput: 0 00:30:42.504 Relative Write Latency: 0 00:30:42.504 Idle Power: Not Reported 00:30:42.504 Active Power: Not Reported 00:30:42.504 Non-Operational Permissive Mode: Not Supported 00:30:42.504 00:30:42.504 Health Information 00:30:42.504 ================== 00:30:42.504 Critical Warnings: 00:30:42.504 Available Spare Space: OK 00:30:42.504 Temperature: OK 00:30:42.504 Device Reliability: OK 00:30:42.504 Read Only: No 00:30:42.504 Volatile Memory Backup: OK 00:30:42.504 Current Temperature: 0 Kelvin (-273 Celsius) 00:30:42.504 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:30:42.504 Available Spare: 0% 00:30:42.504 Available Spare Threshold: 0% 00:30:42.504 Life Percentage Used:[2024-11-17 02:50:50.769061] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.504 [2024-11-17 02:50:50.769081] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x615000015700) 00:30:42.504 [2024-11-17 02:50:50.769112] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.504 [2024-11-17 02:50:50.769164] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:30:42.504 [2024-11-17 02:50:50.769313] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.504 [2024-11-17 02:50:50.769336] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.504 [2024-11-17 02:50:50.769349] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.504 [2024-11-17 02:50:50.769368] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x615000015700 00:30:42.504 [2024-11-17 02:50:50.769450] 
nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:30:42.504 [2024-11-17 02:50:50.769483] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:42.504 [2024-11-17 02:50:50.769506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.504 [2024-11-17 02:50:50.769523] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x615000015700 00:30:42.504 [2024-11-17 02:50:50.769537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.504 [2024-11-17 02:50:50.769550] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x615000015700 00:30:42.504 [2024-11-17 02:50:50.769580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.504 [2024-11-17 02:50:50.769594] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:42.504 [2024-11-17 02:50:50.769607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.504 [2024-11-17 02:50:50.769629] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.504 [2024-11-17 02:50:50.769644] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.504 [2024-11-17 02:50:50.769656] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:42.504 [2024-11-17 02:50:50.769676] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.504 [2024-11-17 02:50:50.769712] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:42.504 [2024-11-17 02:50:50.769875] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.504 [2024-11-17 02:50:50.769904] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.504 [2024-11-17 02:50:50.769917] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.504 [2024-11-17 02:50:50.769930] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:42.504 [2024-11-17 02:50:50.769953] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.504 [2024-11-17 02:50:50.769969] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.504 [2024-11-17 02:50:50.769981] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:42.504 [2024-11-17 02:50:50.770002] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.504 [2024-11-17 02:50:50.770043] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:42.504 [2024-11-17 02:50:50.774136] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.504 [2024-11-17 02:50:50.774166] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.504 [2024-11-17 02:50:50.774180] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.504 [2024-11-17 02:50:50.774191] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:42.504 [2024-11-17 02:50:50.774207] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:30:42.504 [2024-11-17 02:50:50.774222] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 
00:30:42.504 [2024-11-17 02:50:50.774256] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:42.504 [2024-11-17 02:50:50.774273] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:42.504 [2024-11-17 02:50:50.774285] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:42.504 [2024-11-17 02:50:50.774305] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.505 [2024-11-17 02:50:50.774338] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:42.505 [2024-11-17 02:50:50.774485] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:42.505 [2024-11-17 02:50:50.774507] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:42.505 [2024-11-17 02:50:50.774519] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:42.505 [2024-11-17 02:50:50.774531] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:42.505 [2024-11-17 02:50:50.774555] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 0 milliseconds 00:30:42.505 0% 00:30:42.505 Data Units Read: 0 00:30:42.505 Data Units Written: 0 00:30:42.505 Host Read Commands: 0 00:30:42.505 Host Write Commands: 0 00:30:42.505 Controller Busy Time: 0 minutes 00:30:42.505 Power Cycles: 0 00:30:42.505 Power On Hours: 0 hours 00:30:42.505 Unsafe Shutdowns: 0 00:30:42.505 Unrecoverable Media Errors: 0 00:30:42.505 Lifetime Error Log Entries: 0 00:30:42.505 Warning Temperature Time: 0 minutes 00:30:42.505 Critical Temperature Time: 0 minutes 00:30:42.505 00:30:42.505 Number of Queues 00:30:42.505 ================ 00:30:42.505 Number of I/O Submission Queues: 127 00:30:42.505 Number of I/O Completion Queues: 127 00:30:42.505 00:30:42.505 Active 
Namespaces 00:30:42.505 ================= 00:30:42.505 Namespace ID:1 00:30:42.505 Error Recovery Timeout: Unlimited 00:30:42.505 Command Set Identifier: NVM (00h) 00:30:42.505 Deallocate: Supported 00:30:42.505 Deallocated/Unwritten Error: Not Supported 00:30:42.505 Deallocated Read Value: Unknown 00:30:42.505 Deallocate in Write Zeroes: Not Supported 00:30:42.505 Deallocated Guard Field: 0xFFFF 00:30:42.505 Flush: Supported 00:30:42.505 Reservation: Supported 00:30:42.505 Namespace Sharing Capabilities: Multiple Controllers 00:30:42.505 Size (in LBAs): 131072 (0GiB) 00:30:42.505 Capacity (in LBAs): 131072 (0GiB) 00:30:42.505 Utilization (in LBAs): 131072 (0GiB) 00:30:42.505 NGUID: ABCDEF0123456789ABCDEF0123456789 00:30:42.505 EUI64: ABCDEF0123456789 00:30:42.505 UUID: cd60de69-c06f-4bd9-b0e9-268ede35ed2d 00:30:42.505 Thin Provisioning: Not Supported 00:30:42.505 Per-NS Atomic Units: Yes 00:30:42.505 Atomic Boundary Size (Normal): 0 00:30:42.505 Atomic Boundary Size (PFail): 0 00:30:42.505 Atomic Boundary Offset: 0 00:30:42.505 Maximum Single Source Range Length: 65535 00:30:42.505 Maximum Copy Length: 65535 00:30:42.505 Maximum Source Range Count: 1 00:30:42.505 NGUID/EUI64 Never Reused: No 00:30:42.505 Namespace Write Protected: No 00:30:42.505 Number of LBA Formats: 1 00:30:42.505 Current LBA Format: LBA Format #00 00:30:42.505 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:42.505 00:30:42.505 02:50:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:30:42.505 02:50:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:42.505 02:50:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:42.505 02:50:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:42.505 02:50:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:42.505 02:50:50 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:30:42.505 02:50:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:30:42.505 02:50:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:42.505 02:50:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:30:42.505 02:50:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:42.505 02:50:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:30:42.505 02:50:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:42.505 02:50:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:42.505 rmmod nvme_tcp 00:30:42.505 rmmod nvme_fabrics 00:30:42.505 rmmod nvme_keyring 00:30:42.505 02:50:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:42.505 02:50:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:30:42.505 02:50:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:30:42.505 02:50:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 3072069 ']' 00:30:42.505 02:50:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 3072069 00:30:42.505 02:50:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 3072069 ']' 00:30:42.505 02:50:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 3072069 00:30:42.505 02:50:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:30:42.505 02:50:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:42.505 02:50:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3072069 00:30:42.764 02:50:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # 
process_name=reactor_0 00:30:42.764 02:50:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:42.764 02:50:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3072069' 00:30:42.764 killing process with pid 3072069 00:30:42.764 02:50:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 3072069 00:30:42.764 02:50:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 3072069 00:30:44.139 02:50:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:44.139 02:50:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:44.139 02:50:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:44.139 02:50:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:30:44.139 02:50:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:30:44.139 02:50:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:44.139 02:50:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:30:44.139 02:50:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:44.139 02:50:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:44.139 02:50:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:44.139 02:50:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:44.139 02:50:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:46.043 02:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:46.043 00:30:46.043 real 0m7.609s 00:30:46.043 user 0m11.143s 00:30:46.043 sys 
0m2.226s 00:30:46.043 02:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:46.043 02:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:46.043 ************************************ 00:30:46.043 END TEST nvmf_identify 00:30:46.043 ************************************ 00:30:46.043 02:50:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:30:46.043 02:50:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:46.043 02:50:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:46.043 02:50:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:46.043 ************************************ 00:30:46.044 START TEST nvmf_perf 00:30:46.044 ************************************ 00:30:46.044 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:30:46.044 * Looking for test storage... 
00:30:46.044 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:46.044 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:46.044 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:30:46.044 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:46.044 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:46.044 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:46.044 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:46.044 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:46.044 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:30:46.044 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:30:46.044 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:30:46.044 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:30:46.044 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:30:46.044 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:30:46.044 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:30:46.044 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:46.044 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:30:46.044 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:30:46.044 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:46.044 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:46.044 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:30:46.044 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:30:46.044 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:46.044 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:30:46.044 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:30:46.044 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:30:46.044 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:30:46.044 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:46.044 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:30:46.044 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:30:46.044 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:46.044 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:46.044 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:30:46.044 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:46.044 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:46.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:46.044 --rc genhtml_branch_coverage=1 00:30:46.044 --rc genhtml_function_coverage=1 00:30:46.044 --rc genhtml_legend=1 00:30:46.044 --rc geninfo_all_blocks=1 00:30:46.044 --rc geninfo_unexecuted_blocks=1 00:30:46.044 00:30:46.044 ' 00:30:46.044 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:46.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:30:46.044 --rc genhtml_branch_coverage=1 00:30:46.044 --rc genhtml_function_coverage=1 00:30:46.044 --rc genhtml_legend=1 00:30:46.044 --rc geninfo_all_blocks=1 00:30:46.044 --rc geninfo_unexecuted_blocks=1 00:30:46.044 00:30:46.044 ' 00:30:46.044 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:46.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:46.044 --rc genhtml_branch_coverage=1 00:30:46.044 --rc genhtml_function_coverage=1 00:30:46.044 --rc genhtml_legend=1 00:30:46.044 --rc geninfo_all_blocks=1 00:30:46.044 --rc geninfo_unexecuted_blocks=1 00:30:46.044 00:30:46.044 ' 00:30:46.044 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:46.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:46.044 --rc genhtml_branch_coverage=1 00:30:46.044 --rc genhtml_function_coverage=1 00:30:46.044 --rc genhtml_legend=1 00:30:46.044 --rc geninfo_all_blocks=1 00:30:46.044 --rc geninfo_unexecuted_blocks=1 00:30:46.044 00:30:46.044 ' 00:30:46.044 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:46.044 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:30:46.044 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:46.044 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:46.044 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:46.044 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:46.044 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:46.044 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:46.044 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:30:46.044 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:46.044 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:46.044 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:46.044 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:46.044 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:46.044 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:46.044 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:46.044 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:46.044 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:46.044 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:46.044 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:30:46.044 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:46.044 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:46.044 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:46.044 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:46.044 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:46.044 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:46.044 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export 
PATH 00:30:46.044 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:46.044 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:30:46.044 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:46.044 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:46.044 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:46.044 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:46.044 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:46.044 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:46.044 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:46.044 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:46.044 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:46.044 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:46.044 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:30:46.044 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:30:46.045 02:50:54 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:46.045 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:30:46.045 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:46.045 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:46.045 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:46.045 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:46.045 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:46.045 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:46.045 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:46.045 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:46.045 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:46.045 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:46.045 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:30:46.045 02:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:48.575 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:48.575 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:30:48.575 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:48.575 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:48.575 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:48.575 02:50:56 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:48.575 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:48.575 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:30:48.575 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:48.575 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:30:48.575 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:30:48.575 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:30:48.575 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:30:48.575 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:30:48.575 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:30:48.575 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:48.575 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:48.575 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:48.575 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:48.575 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:48.575 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:48.575 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:48.575 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:48.575 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:48.575 
02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:48.575 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:48.575 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:48.575 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:48.575 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:48.575 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:48.575 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:48.575 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:48.575 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:48.575 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:48.575 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:48.575 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:48.575 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:48.575 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:48.575 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:48.575 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:48.575 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:48.575 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:48.576 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:48.576 Found 0000:0a:00.1 (0x8086 - 
0x159b) 00:30:48.576 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:48.576 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:48.576 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:48.576 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:48.576 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:48.576 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:48.576 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:48.576 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:48.576 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:48.576 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:48.576 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:48.576 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:48.576 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:48.576 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:48.576 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:48.576 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:48.576 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:48.576 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:48.576 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:48.576 02:50:56 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:48.576 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:48.576 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:48.576 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:48.576 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:48.576 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:48.576 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:48.576 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:48.576 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:48.576 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:48.576 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:30:48.576 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:48.576 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:48.576 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:48.576 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:48.576 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:48.576 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:48.576 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:48.576 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:48.576 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:30:48.576 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:48.576 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:48.576 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:48.576 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:48.576 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:48.576 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:48.576 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:48.576 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:48.576 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:48.576 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:48.576 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:48.576 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:48.576 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:48.576 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:48.576 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:48.576 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT' 00:30:48.576 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:48.576 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:48.576 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:30:48.576 00:30:48.576 --- 10.0.0.2 ping statistics --- 00:30:48.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:48.576 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:30:48.576 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:48.576 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:48.576 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:30:48.576 00:30:48.576 --- 10.0.0.1 ping statistics --- 00:30:48.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:48.576 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:30:48.576 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:48.576 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:30:48.576 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:48.576 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:48.576 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:48.576 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:48.576 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:48.576 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:48.576 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:48.576 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:30:48.576 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:30:48.576 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:48.576 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:48.576 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=3074422 00:30:48.576 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:48.576 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 3074422 00:30:48.576 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 3074422 ']' 00:30:48.576 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:48.576 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:48.576 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:48.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:48.576 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:48.576 02:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:48.576 [2024-11-17 02:50:56.740118] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:30:48.576 [2024-11-17 02:50:56.740265] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:48.576 [2024-11-17 02:50:56.897147] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:48.834 [2024-11-17 02:50:57.039086] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:48.834 [2024-11-17 02:50:57.039174] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:48.834 [2024-11-17 02:50:57.039201] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:48.834 [2024-11-17 02:50:57.039225] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:48.834 [2024-11-17 02:50:57.039245] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:48.834 [2024-11-17 02:50:57.042042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:48.834 [2024-11-17 02:50:57.042121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:48.834 [2024-11-17 02:50:57.042202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:48.834 [2024-11-17 02:50:57.042206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:49.400 02:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:49.400 02:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:30:49.400 02:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:49.400 02:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:49.400 02:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:49.400 02:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:49.400 02:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:49.400 02:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:30:52.678 02:51:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:30:52.678 02:51:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:30:52.936 02:51:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:30:52.936 02:51:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:53.195 02:51:01 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:30:53.195 02:51:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:30:53.195 02:51:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:30:53.195 02:51:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:30:53.195 02:51:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:30:53.452 [2024-11-17 02:51:01.786986] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:53.452 02:51:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:53.709 02:51:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:30:53.709 02:51:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:53.967 02:51:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:30:53.967 02:51:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:30:54.225 02:51:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:54.483 [2024-11-17 02:51:02.913636] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:54.483 02:51:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:30:55.049 02:51:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:30:55.049 02:51:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:30:55.049 02:51:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:30:55.049 02:51:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:30:56.423 Initializing NVMe Controllers 00:30:56.423 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:30:56.423 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:30:56.423 Initialization complete. Launching workers. 00:30:56.423 ======================================================== 00:30:56.423 Latency(us) 00:30:56.423 Device Information : IOPS MiB/s Average min max 00:30:56.423 PCIE (0000:88:00.0) NSID 1 from core 0: 74555.53 291.23 428.55 49.01 4405.16 00:30:56.423 ======================================================== 00:30:56.423 Total : 74555.53 291.23 428.55 49.01 4405.16 00:30:56.423 00:30:56.423 02:51:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:57.796 Initializing NVMe Controllers 00:30:57.796 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:57.796 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:57.796 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:57.796 Initialization complete. Launching workers. 
00:30:57.796 ======================================================== 00:30:57.796 Latency(us) 00:30:57.796 Device Information : IOPS MiB/s Average min max 00:30:57.796 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 92.00 0.36 11265.06 192.11 44971.53 00:30:57.796 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 52.00 0.20 19469.86 4164.21 51836.13 00:30:57.796 ======================================================== 00:30:57.796 Total : 144.00 0.56 14227.90 192.11 51836.13 00:30:57.796 00:30:57.796 02:51:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:59.169 Initializing NVMe Controllers 00:30:59.169 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:59.170 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:59.170 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:59.170 Initialization complete. Launching workers. 
00:30:59.170 ======================================================== 00:30:59.170 Latency(us) 00:30:59.170 Device Information : IOPS MiB/s Average min max 00:30:59.170 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5624.12 21.97 5709.07 892.96 12291.59 00:30:59.170 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3740.77 14.61 8582.60 5244.58 20004.07 00:30:59.170 ======================================================== 00:30:59.170 Total : 9364.90 36.58 6856.89 892.96 20004.07 00:30:59.170 00:30:59.170 02:51:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:30:59.170 02:51:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:30:59.170 02:51:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:02.451 Initializing NVMe Controllers 00:31:02.451 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:02.451 Controller IO queue size 128, less than required. 00:31:02.451 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:02.451 Controller IO queue size 128, less than required. 00:31:02.451 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:02.451 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:02.451 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:02.451 Initialization complete. Launching workers. 
00:31:02.451 ======================================================== 00:31:02.451 Latency(us) 00:31:02.451 Device Information : IOPS MiB/s Average min max 00:31:02.451 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1332.32 333.08 101219.86 72263.47 342043.67 00:31:02.451 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 538.52 134.63 250108.57 136842.71 474002.27 00:31:02.451 ======================================================== 00:31:02.451 Total : 1870.85 467.71 144077.55 72263.47 474002.27 00:31:02.451 00:31:02.451 02:51:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:31:02.451 No valid NVMe controllers or AIO or URING devices found 00:31:02.451 Initializing NVMe Controllers 00:31:02.451 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:02.451 Controller IO queue size 128, less than required. 00:31:02.452 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:02.452 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:31:02.452 Controller IO queue size 128, less than required. 00:31:02.452 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:02.452 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:31:02.452 WARNING: Some requested NVMe devices were skipped 00:31:02.452 02:51:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:31:05.732 Initializing NVMe Controllers 00:31:05.732 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:05.732 Controller IO queue size 128, less than required. 00:31:05.732 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:05.732 Controller IO queue size 128, less than required. 00:31:05.732 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:05.732 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:05.732 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:05.732 Initialization complete. Launching workers. 
00:31:05.732 00:31:05.732 ==================== 00:31:05.732 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:31:05.732 TCP transport: 00:31:05.732 polls: 5693 00:31:05.732 idle_polls: 3144 00:31:05.732 sock_completions: 2549 00:31:05.732 nvme_completions: 4853 00:31:05.732 submitted_requests: 7192 00:31:05.732 queued_requests: 1 00:31:05.732 00:31:05.732 ==================== 00:31:05.732 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:31:05.732 TCP transport: 00:31:05.732 polls: 6313 00:31:05.732 idle_polls: 3837 00:31:05.732 sock_completions: 2476 00:31:05.732 nvme_completions: 4969 00:31:05.732 submitted_requests: 7498 00:31:05.732 queued_requests: 1 00:31:05.732 ======================================================== 00:31:05.733 Latency(us) 00:31:05.733 Device Information : IOPS MiB/s Average min max 00:31:05.733 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1211.56 302.89 114469.26 73580.68 423346.72 00:31:05.733 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1240.52 310.13 104167.49 55997.28 304474.89 00:31:05.733 ======================================================== 00:31:05.733 Total : 2452.08 613.02 109257.53 55997.28 423346.72 00:31:05.733 00:31:05.733 02:51:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:31:05.733 02:51:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:05.733 02:51:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:31:05.733 02:51:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']' 00:31:05.733 02:51:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:31:09.013 02:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- 
host/perf.sh@72 -- # ls_guid=7efb8f35-3bcc-4409-aab3-c22b88ef4930 00:31:09.013 02:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 7efb8f35-3bcc-4409-aab3-c22b88ef4930 00:31:09.013 02:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=7efb8f35-3bcc-4409-aab3-c22b88ef4930 00:31:09.013 02:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:31:09.013 02:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:31:09.013 02:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:31:09.013 02:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:09.271 02:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:31:09.271 { 00:31:09.271 "uuid": "7efb8f35-3bcc-4409-aab3-c22b88ef4930", 00:31:09.271 "name": "lvs_0", 00:31:09.271 "base_bdev": "Nvme0n1", 00:31:09.271 "total_data_clusters": 238234, 00:31:09.271 "free_clusters": 238234, 00:31:09.271 "block_size": 512, 00:31:09.271 "cluster_size": 4194304 00:31:09.271 } 00:31:09.271 ]' 00:31:09.271 02:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="7efb8f35-3bcc-4409-aab3-c22b88ef4930") .free_clusters' 00:31:09.271 02:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=238234 00:31:09.271 02:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="7efb8f35-3bcc-4409-aab3-c22b88ef4930") .cluster_size' 00:31:09.271 02:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:31:09.271 02:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=952936 00:31:09.271 02:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 952936 
00:31:09.271 952936 00:31:09.271 02:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:31:09.271 02:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:31:09.271 02:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 7efb8f35-3bcc-4409-aab3-c22b88ef4930 lbd_0 20480 00:31:09.836 02:51:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=51244659-1c9f-4c5d-b318-2631b8ba5f2b 00:31:09.836 02:51:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 51244659-1c9f-4c5d-b318-2631b8ba5f2b lvs_n_0 00:31:10.769 02:51:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=94a40708-535d-4a04-b71d-d197e0a835b3 00:31:10.769 02:51:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 94a40708-535d-4a04-b71d-d197e0a835b3 00:31:10.769 02:51:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=94a40708-535d-4a04-b71d-d197e0a835b3 00:31:10.769 02:51:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:31:10.769 02:51:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:31:10.769 02:51:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:31:10.769 02:51:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:11.026 02:51:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:31:11.026 { 00:31:11.026 "uuid": "7efb8f35-3bcc-4409-aab3-c22b88ef4930", 00:31:11.026 "name": "lvs_0", 00:31:11.026 "base_bdev": "Nvme0n1", 00:31:11.026 "total_data_clusters": 238234, 00:31:11.026 "free_clusters": 233114, 00:31:11.026 "block_size": 512, 00:31:11.026 
"cluster_size": 4194304 00:31:11.026 }, 00:31:11.026 { 00:31:11.026 "uuid": "94a40708-535d-4a04-b71d-d197e0a835b3", 00:31:11.026 "name": "lvs_n_0", 00:31:11.026 "base_bdev": "51244659-1c9f-4c5d-b318-2631b8ba5f2b", 00:31:11.026 "total_data_clusters": 5114, 00:31:11.026 "free_clusters": 5114, 00:31:11.026 "block_size": 512, 00:31:11.026 "cluster_size": 4194304 00:31:11.026 } 00:31:11.026 ]' 00:31:11.026 02:51:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="94a40708-535d-4a04-b71d-d197e0a835b3") .free_clusters' 00:31:11.026 02:51:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=5114 00:31:11.026 02:51:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="94a40708-535d-4a04-b71d-d197e0a835b3") .cluster_size' 00:31:11.026 02:51:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:31:11.026 02:51:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=20456 00:31:11.026 02:51:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 20456 00:31:11.026 20456 00:31:11.026 02:51:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:31:11.026 02:51:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 94a40708-535d-4a04-b71d-d197e0a835b3 lbd_nest_0 20456 00:31:11.283 02:51:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=0b1577f2-516b-44fb-b3e2-69d7f7976e22 00:31:11.283 02:51:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:11.541 02:51:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:31:11.541 02:51:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 0b1577f2-516b-44fb-b3e2-69d7f7976e22 00:31:11.798 02:51:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:12.056 02:51:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:31:12.056 02:51:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:31:12.056 02:51:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:31:12.056 02:51:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:12.056 02:51:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:24.245 Initializing NVMe Controllers 00:31:24.245 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:24.245 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:24.245 Initialization complete. Launching workers. 
00:31:24.245 ======================================================== 00:31:24.245 Latency(us) 00:31:24.245 Device Information : IOPS MiB/s Average min max 00:31:24.245 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 49.09 0.02 20405.66 234.21 45771.58 00:31:24.245 ======================================================== 00:31:24.245 Total : 49.09 0.02 20405.66 234.21 45771.58 00:31:24.245 00:31:24.245 02:51:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:24.245 02:51:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:34.263 Initializing NVMe Controllers 00:31:34.263 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:34.263 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:34.263 Initialization complete. Launching workers. 
00:31:34.263 ======================================================== 00:31:34.263 Latency(us) 00:31:34.263 Device Information : IOPS MiB/s Average min max 00:31:34.263 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 82.27 10.28 12153.99 5985.07 47888.62 00:31:34.263 ======================================================== 00:31:34.263 Total : 82.27 10.28 12153.99 5985.07 47888.62 00:31:34.263 00:31:34.263 02:51:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:31:34.263 02:51:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:34.263 02:51:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:44.239 Initializing NVMe Controllers 00:31:44.239 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:44.239 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:44.239 Initialization complete. Launching workers. 
00:31:44.239 ======================================================== 00:31:44.239 Latency(us) 00:31:44.239 Device Information : IOPS MiB/s Average min max 00:31:44.239 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4758.10 2.32 6726.66 658.18 16201.71 00:31:44.239 ======================================================== 00:31:44.239 Total : 4758.10 2.32 6726.66 658.18 16201.71 00:31:44.239 00:31:44.239 02:51:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:44.239 02:51:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:54.202 Initializing NVMe Controllers 00:31:54.202 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:54.202 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:54.202 Initialization complete. Launching workers. 
00:31:54.202 ======================================================== 00:31:54.202 Latency(us) 00:31:54.202 Device Information : IOPS MiB/s Average min max 00:31:54.202 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3606.78 450.85 8875.39 1576.79 20048.05 00:31:54.202 ======================================================== 00:31:54.202 Total : 3606.78 450.85 8875.39 1576.79 20048.05 00:31:54.202 00:31:54.202 02:52:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:31:54.202 02:52:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:54.202 02:52:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:06.400 Initializing NVMe Controllers 00:32:06.400 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:06.400 Controller IO queue size 128, less than required. 00:32:06.400 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:06.400 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:06.400 Initialization complete. Launching workers. 
00:32:06.400 ======================================================== 00:32:06.400 Latency(us) 00:32:06.400 Device Information : IOPS MiB/s Average min max 00:32:06.400 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8444.30 4.12 15166.57 1908.43 54654.34 00:32:06.400 ======================================================== 00:32:06.400 Total : 8444.30 4.12 15166.57 1908.43 54654.34 00:32:06.400 00:32:06.400 02:52:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:06.400 02:52:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:16.368 Initializing NVMe Controllers 00:32:16.368 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:16.368 Controller IO queue size 128, less than required. 00:32:16.368 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:16.368 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:16.368 Initialization complete. Launching workers. 
00:32:16.368 ======================================================== 00:32:16.368 Latency(us) 00:32:16.368 Device Information : IOPS MiB/s Average min max 00:32:16.368 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1144.92 143.12 112158.24 15921.30 239792.23 00:32:16.368 ======================================================== 00:32:16.368 Total : 1144.92 143.12 112158.24 15921.30 239792.23 00:32:16.368 00:32:16.368 02:52:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:16.368 02:52:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 0b1577f2-516b-44fb-b3e2-69d7f7976e22 00:32:16.368 02:52:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:32:16.625 02:52:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 51244659-1c9f-4c5d-b318-2631b8ba5f2b 00:32:16.883 02:52:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:32:17.448 02:52:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:32:17.448 02:52:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:32:17.448 02:52:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:17.448 02:52:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:32:17.448 02:52:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:17.448 02:52:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:32:17.448 02:52:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i 
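The four spdk_nvme_perf runs above are driven by nested host/perf.sh loops over queue depth and IO size. A dry-run sketch of that sweep: the io_size values and the two visible queue depths come from the log, but the full qd_depth array may hold more entries, and the commands are only recorded, not executed, since spdk_nvme_perf and the target are not assumed present.

```shell
# Dry-run sketch of the host/perf.sh qd/io_size sweep traced above.
# Array contents are taken from the visible log lines (assumption:
# the real qd_depth list may be longer).
qd_depth=(32 128)
io_size=(512 131072)
traddr='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'

perf_cmds=()
for qd in "${qd_depth[@]}"; do
    for o in "${io_size[@]}"; do
        # The real script executes spdk_nvme_perf here; we only record it.
        perf_cmds+=("spdk_nvme_perf -q $qd -o $o -w randrw -M 50 -t 10 -r '$traddr'")
    done
done
printf '%s\n' "${perf_cmds[@]}"
```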
in {1..20} 00:32:17.448 02:52:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:17.448 rmmod nvme_tcp 00:32:17.448 rmmod nvme_fabrics 00:32:17.448 rmmod nvme_keyring 00:32:17.448 02:52:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:17.448 02:52:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:32:17.448 02:52:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:32:17.448 02:52:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 3074422 ']' 00:32:17.448 02:52:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 3074422 00:32:17.448 02:52:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 3074422 ']' 00:32:17.448 02:52:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 3074422 00:32:17.448 02:52:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:32:17.448 02:52:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:17.448 02:52:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3074422 00:32:17.448 02:52:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:17.448 02:52:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:17.448 02:52:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3074422' 00:32:17.448 killing process with pid 3074422 00:32:17.448 02:52:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 3074422 00:32:17.448 02:52:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 3074422 00:32:19.977 02:52:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:19.977 02:52:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # 
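The kill sequence traced above (pid check, `ps --no-headers -o comm=` lookup, kill, wait) follows autotest_common.sh's killprocess helper. A simplified, runnable sketch of that pattern, exercised against a throwaway sleep process rather than the real nvmf target; killprocess_sketch is a hypothetical name:

```shell
# Simplified sketch of the killprocess pattern traced above:
# verify the pid is alive, check its command name, then kill and reap.
killprocess_sketch() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 1        # still running?
    local name
    name=$(ps --no-headers -o comm= "$pid")
    [ "$name" != "sudo" ] || return 1             # refuse to kill a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true               # reap; ignore the TERM status
}

sleep 300 &
victim=$!
killprocess_sketch "$victim"
```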
[[ tcp == \t\c\p ]] 00:32:19.977 02:52:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:19.977 02:52:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:32:19.977 02:52:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:32:19.977 02:52:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:32:19.977 02:52:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:19.977 02:52:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:19.977 02:52:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:19.977 02:52:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:19.977 02:52:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:19.977 02:52:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:21.883 02:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:21.883 00:32:21.883 real 1m35.850s 00:32:21.883 user 5m55.638s 00:32:21.883 sys 0m15.192s 00:32:21.883 02:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:21.883 02:52:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:32:21.883 ************************************ 00:32:21.883 END TEST nvmf_perf 00:32:21.883 ************************************ 00:32:21.883 02:52:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:32:21.883 02:52:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:21.883 02:52:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:21.883 02:52:30 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:21.883 ************************************ 00:32:21.883 START TEST nvmf_fio_host 00:32:21.883 ************************************ 00:32:21.883 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:32:21.883 * Looking for test storage... 00:32:21.883 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:21.883 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:21.883 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:32:21.883 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:21.883 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:21.883 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:21.883 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:21.883 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:21.883 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:32:21.883 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:32:21.883 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:32:21.883 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:32:21.883 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:32:21.883 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:32:21.883 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:32:21.883 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:21.883 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:32:21.883 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:32:21.883 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:21.883 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:21.883 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:32:21.883 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:32:21.884 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:21.884 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:32:21.884 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:32:21.884 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:32:21.884 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:32:21.884 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:21.884 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:32:21.884 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:32:21.884 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:21.884 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:21.884 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:32:21.884 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:21.884 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- 
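The scripts/common.sh trace above implements a field-by-field version comparison (`lt 1.15 2`): both versions are split on the separator, then compared numerically component by component. A condensed sketch of the same idea; ver_lt is a hypothetical name, not the real cmp_versions helper, and it splits only on `.` rather than the full `.-:` set.

```shell
# Condensed sketch of the version "less than" test traced above:
# split on '.', compare numerically field by field, pad with zeros.
ver_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for ((i = 0; i < n; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # strictly smaller field: lt
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1   # strictly larger field: not lt
    done
    return 1    # equal versions are not less-than
}
```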
# export 'LCOV_OPTS= 00:32:21.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:21.884 --rc genhtml_branch_coverage=1 00:32:21.884 --rc genhtml_function_coverage=1 00:32:21.884 --rc genhtml_legend=1 00:32:21.884 --rc geninfo_all_blocks=1 00:32:21.884 --rc geninfo_unexecuted_blocks=1 00:32:21.884 00:32:21.884 ' 00:32:21.884 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:21.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:21.884 --rc genhtml_branch_coverage=1 00:32:21.884 --rc genhtml_function_coverage=1 00:32:21.884 --rc genhtml_legend=1 00:32:21.884 --rc geninfo_all_blocks=1 00:32:21.884 --rc geninfo_unexecuted_blocks=1 00:32:21.884 00:32:21.884 ' 00:32:21.884 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:21.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:21.884 --rc genhtml_branch_coverage=1 00:32:21.884 --rc genhtml_function_coverage=1 00:32:21.884 --rc genhtml_legend=1 00:32:21.884 --rc geninfo_all_blocks=1 00:32:21.884 --rc geninfo_unexecuted_blocks=1 00:32:21.884 00:32:21.884 ' 00:32:21.884 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:21.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:21.884 --rc genhtml_branch_coverage=1 00:32:21.884 --rc genhtml_function_coverage=1 00:32:21.884 --rc genhtml_legend=1 00:32:21.884 --rc geninfo_all_blocks=1 00:32:21.884 --rc geninfo_unexecuted_blocks=1 00:32:21.884 00:32:21.884 ' 00:32:21.884 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:21.884 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:32:21.884 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:21.884 02:52:30 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:21.884 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:21.884 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:21.884 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:21.884 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:21.884 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:32:21.884 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:21.884 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:22.142 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:32:22.142 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:22.142 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:22.142 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:22.142 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:22.142 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:22.142 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:22.142 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:22.143 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:22.143 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:22.143 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:22.143 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:22.143 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:22.143 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:22.143 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:22.143 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:22.143 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:22.143 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:22.143 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:32:22.143 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:22.143 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:22.143 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:22.143 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:22.143 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:22.143 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:22.143 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:32:22.143 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:22.143 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:32:22.143 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:22.143 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:22.143 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:22.143 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:22.143 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:22.143 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:22.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:22.143 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:22.143 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:22.143 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:22.143 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:22.143 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:32:22.143 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:22.143 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:22.143 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:22.143 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:22.143 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:22.143 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:22.143 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:22.143 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:22.143 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:22.143 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:22.143 02:52:30 
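The captured error above ("[: : integer expression expected", common.sh line 33) comes from running an integer test against an empty string. A sketch contrasting the failing form with a guarded one that defaults empty to 0; the exit codes checked below assume bash's builtin `[`:

```shell
# The failing pattern behind the "[: : integer expression expected"
# error captured above, next to a guarded alternative.
val=''

rc_bad=0
[ "$val" -eq 1 ] 2>/dev/null || rc_bad=$?    # errors out: '' is not an integer

rc_ok=0
[ "${val:-0}" -eq 1 ] || rc_ok=$?            # clean numeric comparison, just false
```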
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:32:22.143 02:52:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.045 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:24.045 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:32:24.045 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:24.045 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:24.045 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:24.045 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:24.045 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:24.045 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:32:24.045 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:24.045 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:32:24.045 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:32:24.045 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:32:24.045 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:32:24.045 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:32:24.045 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:32:24.045 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:24.045 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:24.045 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:24.045 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:24.045 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:24.045 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:24.045 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:24.045 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:24.045 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:24.045 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:24.045 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:24.045 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:24.045 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:24.045 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:24.045 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:24.045 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:24.045 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:24.045 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:24.045 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:24.045 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:0a:00.0 (0x8086 - 0x159b)' 00:32:24.045 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:24.045 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:24.045 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:24.045 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:24.045 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:24.045 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:24.045 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:24.045 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:24.045 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:24.045 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:24.305 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:24.305 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:24.305 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:24.305 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:24.305 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:24.305 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:24.305 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:24.305 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:24.305 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:24.305 02:52:32 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:24.305 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:24.305 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:24.305 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:24.305 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:24.305 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:24.305 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:24.305 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:24.305 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:24.305 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:24.305 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:24.305 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:24.305 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:24.305 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:24.305 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:24.305 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:24.305 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:24.305 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:24.305 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
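The "Found net devices under ..." lines above come from globbing each NIC's interface names out of sysfs (`/sys/bus/pci/devices/<pci>/net/`) and stripping them to basenames. A sketch of that lookup, emulating the sysfs layout in a temp directory so it runs without the cvl_* hardware:

```shell
# Sketch of the sysfs net-device discovery traced above, against a
# fake sysfs tree (the real code walks /sys/bus/pci/devices).
sysfs=$(mktemp -d)
mkdir -p "$sysfs/0000:0a:00.0/net/cvl_0_0" "$sysfs/0000:0a:00.1/net/cvl_0_1"

net_devs=()
for pci in "$sysfs"/*; do
    pci_net_devs=("$pci"/net/*)              # glob the interface dirs
    pci_net_devs=("${pci_net_devs[@]##*/}")  # keep basenames, as common.sh does
    echo "Found net devices under ${pci##*/}: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done
rm -rf "$sysfs"
```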
00:32:24.305 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:32:24.305 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:24.305 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:24.305 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:24.305 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:24.305 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:24.305 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:24.305 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:24.305 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:24.305 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:24.305 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:24.305 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:24.305 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:24.305 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:24.305 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:24.305 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:24.305 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:24.305 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:24.305 02:52:32 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:24.305 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:24.305 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:24.305 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:24.305 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:24.305 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:24.305 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:24.305 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:24.305 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:24.305 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:24.305 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.296 ms 00:32:24.305 00:32:24.305 --- 10.0.0.2 ping statistics --- 00:32:24.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:24.305 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:32:24.305 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:24.305 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:24.305 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:32:24.305 00:32:24.305 --- 10.0.0.1 ping statistics --- 00:32:24.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:24.305 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:32:24.305 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:24.305 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:32:24.305 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:24.305 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:24.305 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:24.305 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:24.305 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:24.305 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:24.305 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:24.305 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:32:24.305 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:32:24.305 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:24.305 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.305 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3087649 00:32:24.306 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:32:24.306 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # 
trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:24.306 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3087649 00:32:24.306 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 3087649 ']' 00:32:24.306 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:24.306 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:24.306 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:24.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:24.306 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:24.306 02:52:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.564 [2024-11-17 02:52:32.767263] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:32:24.565 [2024-11-17 02:52:32.767413] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:24.565 [2024-11-17 02:52:32.924761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:24.823 [2024-11-17 02:52:33.068003] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:24.823 [2024-11-17 02:52:33.068103] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:32:24.823 [2024-11-17 02:52:33.068132] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:24.823 [2024-11-17 02:52:33.068158] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:24.823 [2024-11-17 02:52:33.068179] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:24.823 [2024-11-17 02:52:33.071055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:24.823 [2024-11-17 02:52:33.071140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:24.823 [2024-11-17 02:52:33.071226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:24.823 [2024-11-17 02:52:33.071231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:25.388 02:52:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:25.388 02:52:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:32:25.388 02:52:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:25.646 [2024-11-17 02:52:34.025291] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:25.646 02:52:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:32:25.646 02:52:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:25.646 02:52:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.646 02:52:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:32:26.212 Malloc1 00:32:26.212 02:52:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:26.470 02:52:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:32:26.728 02:52:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:26.986 [2024-11-17 02:52:35.325071] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:26.986 02:52:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:27.244 02:52:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:32:27.244 02:52:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:27.244 02:52:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:27.244 02:52:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:27.244 02:52:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:27.244 02:52:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:27.244 02:52:35 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:27.244 02:52:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:32:27.244 02:52:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:27.244 02:52:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:27.244 02:52:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:27.244 02:52:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:32:27.244 02:52:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:27.244 02:52:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:27.244 02:52:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:27.244 02:52:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:32:27.244 02:52:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:27.244 02:52:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:27.501 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:27.501 fio-3.35 00:32:27.501 Starting 1 thread 00:32:30.027 00:32:30.027 test: (groupid=0, jobs=1): err= 0: pid=3088137: Sun Nov 17 02:52:38 2024 00:32:30.027 read: 
IOPS=6292, BW=24.6MiB/s (25.8MB/s)(49.4MiB/2010msec) 00:32:30.027 slat (usec): min=3, max=126, avg= 3.82, stdev= 1.95 00:32:30.027 clat (usec): min=3302, max=19340, avg=11059.70, stdev=971.58 00:32:30.027 lat (usec): min=3347, max=19343, avg=11063.52, stdev=971.49 00:32:30.027 clat percentiles (usec): 00:32:30.027 | 1.00th=[ 8979], 5.00th=[ 9634], 10.00th=[ 9896], 20.00th=[10290], 00:32:30.027 | 30.00th=[10552], 40.00th=[10814], 50.00th=[11076], 60.00th=[11338], 00:32:30.027 | 70.00th=[11469], 80.00th=[11863], 90.00th=[12256], 95.00th=[12518], 00:32:30.027 | 99.00th=[13173], 99.50th=[13435], 99.90th=[16909], 99.95th=[19006], 00:32:30.027 | 99.99th=[19268] 00:32:30.027 bw ( KiB/s): min=24360, max=25752, per=100.00%, avg=25172.00, stdev=629.43, samples=4 00:32:30.028 iops : min= 6090, max= 6438, avg=6293.00, stdev=157.36, samples=4 00:32:30.028 write: IOPS=6286, BW=24.6MiB/s (25.7MB/s)(49.4MiB/2010msec); 0 zone resets 00:32:30.028 slat (usec): min=3, max=101, avg= 3.91, stdev= 1.53 00:32:30.028 clat (usec): min=1227, max=18092, avg=9207.21, stdev=809.31 00:32:30.028 lat (usec): min=1239, max=18096, avg=9211.12, stdev=809.33 00:32:30.028 clat percentiles (usec): 00:32:30.028 | 1.00th=[ 7504], 5.00th=[ 8029], 10.00th=[ 8291], 20.00th=[ 8586], 00:32:30.028 | 30.00th=[ 8848], 40.00th=[ 8979], 50.00th=[ 9241], 60.00th=[ 9372], 00:32:30.028 | 70.00th=[ 9503], 80.00th=[ 9765], 90.00th=[10159], 95.00th=[10421], 00:32:30.028 | 99.00th=[10945], 99.50th=[11338], 99.90th=[14615], 99.95th=[16909], 00:32:30.028 | 99.99th=[17957] 00:32:30.028 bw ( KiB/s): min=24832, max=25456, per=99.98%, avg=25138.00, stdev=331.68, samples=4 00:32:30.028 iops : min= 6208, max= 6364, avg=6284.50, stdev=82.92, samples=4 00:32:30.028 lat (msec) : 2=0.01%, 4=0.11%, 10=49.20%, 20=50.69% 00:32:30.028 cpu : usr=67.40%, sys=31.11%, ctx=69, majf=0, minf=1547 00:32:30.028 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:32:30.028 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:32:30.028 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:30.028 issued rwts: total=12648,12635,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:30.028 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:30.028 00:32:30.028 Run status group 0 (all jobs): 00:32:30.028 READ: bw=24.6MiB/s (25.8MB/s), 24.6MiB/s-24.6MiB/s (25.8MB/s-25.8MB/s), io=49.4MiB (51.8MB), run=2010-2010msec 00:32:30.028 WRITE: bw=24.6MiB/s (25.7MB/s), 24.6MiB/s-24.6MiB/s (25.7MB/s-25.7MB/s), io=49.4MiB (51.8MB), run=2010-2010msec 00:32:30.594 ----------------------------------------------------- 00:32:30.594 Suppressions used: 00:32:30.594 count bytes template 00:32:30.594 1 57 /usr/src/fio/parse.c 00:32:30.594 1 8 libtcmalloc_minimal.so 00:32:30.594 ----------------------------------------------------- 00:32:30.594 00:32:30.594 02:52:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:32:30.594 02:52:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:32:30.594 02:52:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:30.594 02:52:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:30.594 02:52:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:30.594 02:52:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:30.594 02:52:38 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:32:30.594 02:52:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:30.594 02:52:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:30.594 02:52:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:30.594 02:52:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:32:30.594 02:52:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:30.594 02:52:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:30.594 02:52:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:30.594 02:52:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:32:30.594 02:52:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:30.594 02:52:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:32:30.852 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:32:30.852 fio-3.35 00:32:30.852 Starting 1 thread 00:32:33.412 00:32:33.412 test: (groupid=0, jobs=1): err= 0: pid=3088584: Sun Nov 17 02:52:41 2024 00:32:33.412 read: IOPS=6032, BW=94.3MiB/s (98.8MB/s)(189MiB/2010msec) 00:32:33.412 slat (usec): min=4, max=133, avg= 5.81, stdev= 2.30 00:32:33.412 clat (usec): min=2521, max=24579, 
avg=12196.85, stdev=2865.37 00:32:33.412 lat (usec): min=2527, max=24584, avg=12202.67, stdev=2865.46 00:32:33.412 clat percentiles (usec): 00:32:33.412 | 1.00th=[ 6587], 5.00th=[ 7832], 10.00th=[ 8586], 20.00th=[ 9765], 00:32:33.412 | 30.00th=[10552], 40.00th=[11469], 50.00th=[11994], 60.00th=[12780], 00:32:33.412 | 70.00th=[13435], 80.00th=[14353], 90.00th=[15926], 95.00th=[17171], 00:32:33.412 | 99.00th=[19792], 99.50th=[20841], 99.90th=[23987], 99.95th=[24249], 00:32:33.412 | 99.99th=[24511] 00:32:33.412 bw ( KiB/s): min=41824, max=55168, per=49.61%, avg=47880.00, stdev=6935.34, samples=4 00:32:33.412 iops : min= 2614, max= 3448, avg=2992.50, stdev=433.46, samples=4 00:32:33.412 write: IOPS=3452, BW=53.9MiB/s (56.6MB/s)(97.7MiB/1811msec); 0 zone resets 00:32:33.412 slat (usec): min=32, max=217, avg=40.27, stdev= 6.87 00:32:33.412 clat (usec): min=5745, max=28680, avg=15998.00, stdev=2802.20 00:32:33.412 lat (usec): min=5783, max=28720, avg=16038.28, stdev=2802.38 00:32:33.412 clat percentiles (usec): 00:32:33.412 | 1.00th=[10814], 5.00th=[11863], 10.00th=[12649], 20.00th=[13698], 00:32:33.412 | 30.00th=[14484], 40.00th=[15139], 50.00th=[15795], 60.00th=[16581], 00:32:33.412 | 70.00th=[17171], 80.00th=[18220], 90.00th=[19530], 95.00th=[20841], 00:32:33.412 | 99.00th=[23725], 99.50th=[25560], 99.90th=[28181], 99.95th=[28443], 00:32:33.412 | 99.99th=[28705] 00:32:33.412 bw ( KiB/s): min=41664, max=58208, per=89.35%, avg=49360.00, stdev=8146.98, samples=4 00:32:33.412 iops : min= 2604, max= 3638, avg=3085.00, stdev=509.19, samples=4 00:32:33.412 lat (msec) : 4=0.07%, 10=14.62%, 20=81.88%, 50=3.44% 00:32:33.412 cpu : usr=83.63%, sys=15.37%, ctx=32, majf=0, minf=2105 00:32:33.412 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:32:33.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:33.412 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:33.412 issued rwts: total=12125,6253,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:32:33.412 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:33.412 00:32:33.412 Run status group 0 (all jobs): 00:32:33.412 READ: bw=94.3MiB/s (98.8MB/s), 94.3MiB/s-94.3MiB/s (98.8MB/s-98.8MB/s), io=189MiB (199MB), run=2010-2010msec 00:32:33.412 WRITE: bw=53.9MiB/s (56.6MB/s), 53.9MiB/s-53.9MiB/s (56.6MB/s-56.6MB/s), io=97.7MiB (102MB), run=1811-1811msec 00:32:33.669 ----------------------------------------------------- 00:32:33.669 Suppressions used: 00:32:33.669 count bytes template 00:32:33.669 1 57 /usr/src/fio/parse.c 00:32:33.669 157 15072 /usr/src/fio/iolog.c 00:32:33.669 1 8 libtcmalloc_minimal.so 00:32:33.669 ----------------------------------------------------- 00:32:33.669 00:32:33.669 02:52:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:33.927 02:52:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:32:33.927 02:52:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:32:33.927 02:52:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:32:33.927 02:52:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # bdfs=() 00:32:33.927 02:52:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # local bdfs 00:32:33.927 02:52:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:32:33.927 02:52:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:32:33.927 02:52:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:32:33.927 02:52:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 
00:32:33.927 02:52:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:32:33.927 02:52:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:32:37.206 Nvme0n1 00:32:37.206 02:52:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:32:40.484 02:52:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=cb19334f-4d1e-46f1-9cdd-7f90fdd37724 00:32:40.484 02:52:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb cb19334f-4d1e-46f1-9cdd-7f90fdd37724 00:32:40.484 02:52:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=cb19334f-4d1e-46f1-9cdd-7f90fdd37724 00:32:40.484 02:52:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:32:40.484 02:52:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:32:40.484 02:52:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:32:40.484 02:52:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:40.484 02:52:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:32:40.484 { 00:32:40.484 "uuid": "cb19334f-4d1e-46f1-9cdd-7f90fdd37724", 00:32:40.484 "name": "lvs_0", 00:32:40.484 "base_bdev": "Nvme0n1", 00:32:40.484 "total_data_clusters": 930, 00:32:40.484 "free_clusters": 930, 00:32:40.484 "block_size": 512, 00:32:40.484 "cluster_size": 1073741824 00:32:40.484 } 00:32:40.484 ]' 00:32:40.484 02:52:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | 
select(.uuid=="cb19334f-4d1e-46f1-9cdd-7f90fdd37724") .free_clusters' 00:32:40.484 02:52:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=930 00:32:40.484 02:52:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="cb19334f-4d1e-46f1-9cdd-7f90fdd37724") .cluster_size' 00:32:40.484 02:52:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=1073741824 00:32:40.484 02:52:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=952320 00:32:40.484 02:52:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 952320 00:32:40.484 952320 00:32:40.484 02:52:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:32:40.742 700d3678-5ce6-4f36-9833-8e4f964b2064 00:32:40.742 02:52:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:32:40.998 02:52:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:32:41.563 02:52:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:32:41.563 02:52:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:41.563 02:52:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:41.563 02:52:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:41.563 02:52:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:41.563 02:52:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:41.563 02:52:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:41.563 02:52:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:32:41.563 02:52:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:41.563 02:52:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:41.563 02:52:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:41.563 02:52:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:32:41.563 02:52:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:41.563 02:52:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:41.563 02:52:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:41.563 02:52:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:32:41.563 02:52:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 
00:32:41.563 02:52:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:41.821 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:41.821 fio-3.35 00:32:41.821 Starting 1 thread 00:32:45.102 00:32:45.102 test: (groupid=0, jobs=1): err= 0: pid=3089982: Sun Nov 17 02:52:52 2024 00:32:45.102 read: IOPS=4131, BW=16.1MiB/s (16.9MB/s)(33.1MiB/2050msec) 00:32:45.102 slat (usec): min=3, max=149, avg= 3.99, stdev= 2.84 00:32:45.102 clat (usec): min=1471, max=173142, avg=16703.25, stdev=13930.67 00:32:45.102 lat (usec): min=1475, max=173205, avg=16707.23, stdev=13931.06 00:32:45.102 clat percentiles (msec): 00:32:45.102 | 1.00th=[ 12], 5.00th=[ 14], 10.00th=[ 14], 20.00th=[ 15], 00:32:45.102 | 30.00th=[ 15], 40.00th=[ 15], 50.00th=[ 16], 60.00th=[ 16], 00:32:45.102 | 70.00th=[ 16], 80.00th=[ 17], 90.00th=[ 17], 95.00th=[ 18], 00:32:45.102 | 99.00th=[ 64], 99.50th=[ 157], 99.90th=[ 174], 99.95th=[ 174], 00:32:45.102 | 99.99th=[ 174] 00:32:45.102 bw ( KiB/s): min=11984, max=18568, per=100.00%, avg=16804.00, stdev=3217.06, samples=4 00:32:45.102 iops : min= 2996, max= 4642, avg=4201.00, stdev=804.27, samples=4 00:32:45.102 write: IOPS=4144, BW=16.2MiB/s (17.0MB/s)(33.2MiB/2050msec); 0 zone resets 00:32:45.102 slat (usec): min=3, max=125, avg= 4.15, stdev= 2.31 00:32:45.102 clat (usec): min=384, max=170509, avg=13977.32, stdev=13094.03 00:32:45.102 lat (usec): min=389, max=170516, avg=13981.47, stdev=13094.41 00:32:45.102 clat percentiles (msec): 00:32:45.102 | 1.00th=[ 10], 5.00th=[ 11], 10.00th=[ 12], 20.00th=[ 12], 00:32:45.102 | 30.00th=[ 13], 40.00th=[ 13], 50.00th=[ 13], 60.00th=[ 14], 00:32:45.102 | 70.00th=[ 14], 80.00th=[ 14], 90.00th=[ 15], 95.00th=[ 15], 00:32:45.102 | 99.00th=[ 57], 99.50th=[ 157], 
99.90th=[ 171], 99.95th=[ 171], 00:32:45.102 | 99.99th=[ 171] 00:32:45.102 bw ( KiB/s): min=12712, max=18432, per=100.00%, avg=16874.00, stdev=2777.29, samples=4 00:32:45.102 iops : min= 3178, max= 4608, avg=4218.50, stdev=694.32, samples=4 00:32:45.102 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:32:45.102 lat (msec) : 2=0.02%, 4=0.05%, 10=1.02%, 20=97.38%, 50=0.01% 00:32:45.102 lat (msec) : 100=0.74%, 250=0.75% 00:32:45.102 cpu : usr=60.91%, sys=37.77%, ctx=64, majf=0, minf=1544 00:32:45.102 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:32:45.102 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:45.102 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:45.102 issued rwts: total=8470,8496,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:45.102 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:45.102 00:32:45.102 Run status group 0 (all jobs): 00:32:45.102 READ: bw=16.1MiB/s (16.9MB/s), 16.1MiB/s-16.1MiB/s (16.9MB/s-16.9MB/s), io=33.1MiB (34.7MB), run=2050-2050msec 00:32:45.102 WRITE: bw=16.2MiB/s (17.0MB/s), 16.2MiB/s-16.2MiB/s (17.0MB/s-17.0MB/s), io=33.2MiB (34.8MB), run=2050-2050msec 00:32:45.102 ----------------------------------------------------- 00:32:45.102 Suppressions used: 00:32:45.102 count bytes template 00:32:45.102 1 58 /usr/src/fio/parse.c 00:32:45.102 1 8 libtcmalloc_minimal.so 00:32:45.102 ----------------------------------------------------- 00:32:45.102 00:32:45.102 02:52:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:32:45.102 02:52:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:32:46.474 02:52:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # 
ls_nested_guid=d0edef90-bf56-4d3a-9910-c004a6a2c423 00:32:46.474 02:52:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb d0edef90-bf56-4d3a-9910-c004a6a2c423 00:32:46.474 02:52:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=d0edef90-bf56-4d3a-9910-c004a6a2c423 00:32:46.474 02:52:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:32:46.474 02:52:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:32:46.474 02:52:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:32:46.474 02:52:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:46.474 02:52:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:32:46.474 { 00:32:46.474 "uuid": "cb19334f-4d1e-46f1-9cdd-7f90fdd37724", 00:32:46.474 "name": "lvs_0", 00:32:46.474 "base_bdev": "Nvme0n1", 00:32:46.474 "total_data_clusters": 930, 00:32:46.474 "free_clusters": 0, 00:32:46.474 "block_size": 512, 00:32:46.474 "cluster_size": 1073741824 00:32:46.474 }, 00:32:46.474 { 00:32:46.474 "uuid": "d0edef90-bf56-4d3a-9910-c004a6a2c423", 00:32:46.474 "name": "lvs_n_0", 00:32:46.474 "base_bdev": "700d3678-5ce6-4f36-9833-8e4f964b2064", 00:32:46.474 "total_data_clusters": 237847, 00:32:46.474 "free_clusters": 237847, 00:32:46.474 "block_size": 512, 00:32:46.474 "cluster_size": 4194304 00:32:46.474 } 00:32:46.474 ]' 00:32:46.474 02:52:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="d0edef90-bf56-4d3a-9910-c004a6a2c423") .free_clusters' 00:32:46.474 02:52:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=237847 00:32:46.474 02:52:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | 
select(.uuid=="d0edef90-bf56-4d3a-9910-c004a6a2c423") .cluster_size' 00:32:46.474 02:52:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=4194304 00:32:46.474 02:52:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=951388 00:32:46.474 02:52:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 951388 00:32:46.474 951388 00:32:46.474 02:52:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:32:47.845 4b78478e-efdb-4a67-9292-0b21b9345b12 00:32:47.845 02:52:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:32:48.102 02:52:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:32:48.360 02:52:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:32:48.618 02:52:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:48.618 02:52:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:48.618 02:52:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local 
fio_dir=/usr/src/fio 00:32:48.618 02:52:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:48.618 02:52:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:48.618 02:52:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:48.618 02:52:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:32:48.618 02:52:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:48.618 02:52:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:48.618 02:52:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:48.618 02:52:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:32:48.618 02:52:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:48.618 02:52:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:48.618 02:52:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:48.618 02:52:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:32:48.618 02:52:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:48.618 02:52:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 
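[Editor's note] The `get_lvs_free_mb` trace above reports `fc=237847` free clusters and `cs=4194304` bytes per cluster for `lvs_n_0`, then echoes `free_mb=951388`, which is passed to `bdev_lvol_create`. A minimal sketch (not SPDK code; variable names are illustrative) reproducing that arithmetic:

```python
# Reproduce the get_lvs_free_mb calculation from the trace above.
# fc and cs are the values bdev_lvol_get_lvstores reported for lvs_n_0.
fc = 237847                          # free_clusters
cs = 4194304                         # cluster_size in bytes (4 MiB)
free_mb = fc * cs // (1024 * 1024)   # convert free capacity to MiB
print(free_mb)                       # 951388, matching the echoed value
```

This is why the subsequent `bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388` sizes the nested lvol to the store's entire free capacity.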
00:32:48.876 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:48.876 fio-3.35 00:32:48.876 Starting 1 thread 00:32:51.399 00:32:51.399 test: (groupid=0, jobs=1): err= 0: pid=3090841: Sun Nov 17 02:52:59 2024 00:32:51.399 read: IOPS=4380, BW=17.1MiB/s (17.9MB/s)(34.4MiB/2012msec) 00:32:51.399 slat (usec): min=3, max=206, avg= 3.75, stdev= 3.39 00:32:51.399 clat (usec): min=6108, max=26878, avg=15812.98, stdev=1529.75 00:32:51.399 lat (usec): min=6117, max=26882, avg=15816.72, stdev=1529.63 00:32:51.399 clat percentiles (usec): 00:32:51.399 | 1.00th=[12387], 5.00th=[13566], 10.00th=[13960], 20.00th=[14615], 00:32:51.399 | 30.00th=[15139], 40.00th=[15401], 50.00th=[15795], 60.00th=[16188], 00:32:51.399 | 70.00th=[16581], 80.00th=[16909], 90.00th=[17695], 95.00th=[18220], 00:32:51.399 | 99.00th=[19268], 99.50th=[19530], 99.90th=[26346], 99.95th=[26608], 00:32:51.399 | 99.99th=[26870] 00:32:51.399 bw ( KiB/s): min=16216, max=18072, per=99.82%, avg=17490.00, stdev=859.88, samples=4 00:32:51.399 iops : min= 4054, max= 4518, avg=4372.50, stdev=214.97, samples=4 00:32:51.399 write: IOPS=4380, BW=17.1MiB/s (17.9MB/s)(34.4MiB/2012msec); 0 zone resets 00:32:51.399 slat (usec): min=3, max=154, avg= 3.84, stdev= 2.25 00:32:51.399 clat (usec): min=2940, max=23302, avg=13096.84, stdev=1245.00 00:32:51.399 lat (usec): min=2950, max=23306, avg=13100.68, stdev=1244.95 00:32:51.399 clat percentiles (usec): 00:32:51.399 | 1.00th=[10159], 5.00th=[11207], 10.00th=[11731], 20.00th=[12256], 00:32:51.399 | 30.00th=[12518], 40.00th=[12780], 50.00th=[13042], 60.00th=[13435], 00:32:51.399 | 70.00th=[13698], 80.00th=[13960], 90.00th=[14484], 95.00th=[15008], 00:32:51.399 | 99.00th=[15795], 99.50th=[16450], 99.90th=[20841], 99.95th=[21103], 00:32:51.399 | 99.99th=[23200] 00:32:51.399 bw ( KiB/s): min=17192, max=17704, per=99.93%, avg=17508.00, stdev=221.75, samples=4 00:32:51.399 iops : min= 4298, max= 4426, avg=4377.00, stdev=55.44, 
samples=4 00:32:51.399 lat (msec) : 4=0.01%, 10=0.50%, 20=99.23%, 50=0.26% 00:32:51.399 cpu : usr=66.78%, sys=31.87%, ctx=86, majf=0, minf=1543 00:32:51.399 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:32:51.399 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:51.399 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:51.399 issued rwts: total=8813,8813,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:51.399 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:51.399 00:32:51.399 Run status group 0 (all jobs): 00:32:51.399 READ: bw=17.1MiB/s (17.9MB/s), 17.1MiB/s-17.1MiB/s (17.9MB/s-17.9MB/s), io=34.4MiB (36.1MB), run=2012-2012msec 00:32:51.399 WRITE: bw=17.1MiB/s (17.9MB/s), 17.1MiB/s-17.1MiB/s (17.9MB/s-17.9MB/s), io=34.4MiB (36.1MB), run=2012-2012msec 00:32:51.657 ----------------------------------------------------- 00:32:51.657 Suppressions used: 00:32:51.657 count bytes template 00:32:51.657 1 58 /usr/src/fio/parse.c 00:32:51.657 1 8 libtcmalloc_minimal.so 00:32:51.657 ----------------------------------------------------- 00:32:51.657 00:32:51.657 02:52:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:32:51.915 02:53:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:32:51.915 02:53:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:32:57.174 02:53:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:32:57.174 02:53:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:32:59.697 02:53:07 
nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:32:59.697 02:53:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:33:01.595 02:53:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:33:01.595 02:53:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:33:01.595 02:53:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:33:01.595 02:53:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:01.595 02:53:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:33:01.595 02:53:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:01.595 02:53:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:33:01.595 02:53:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:01.595 02:53:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:01.595 rmmod nvme_tcp 00:33:01.595 rmmod nvme_fabrics 00:33:01.595 rmmod nvme_keyring 00:33:01.852 02:53:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:01.852 02:53:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:33:01.852 02:53:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:33:01.852 02:53:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 3087649 ']' 00:33:01.852 02:53:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 3087649 00:33:01.852 02:53:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 3087649 ']' 00:33:01.852 02:53:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@958 -- # kill -0 3087649 00:33:01.853 02:53:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:33:01.853 02:53:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:01.853 02:53:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3087649 00:33:01.853 02:53:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:01.853 02:53:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:01.853 02:53:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3087649' 00:33:01.853 killing process with pid 3087649 00:33:01.853 02:53:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 3087649 00:33:01.853 02:53:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 3087649 00:33:03.227 02:53:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:03.227 02:53:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:03.227 02:53:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:03.227 02:53:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:33:03.227 02:53:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:33:03.227 02:53:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:03.228 02:53:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:33:03.228 02:53:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:03.228 02:53:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:03.228 02:53:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:03.228 02:53:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:03.228 02:53:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:05.129 02:53:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:05.129 00:33:05.129 real 0m43.251s 00:33:05.129 user 2m44.244s 00:33:05.129 sys 0m9.007s 00:33:05.129 02:53:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:05.129 02:53:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.129 ************************************ 00:33:05.129 END TEST nvmf_fio_host 00:33:05.129 ************************************ 00:33:05.129 02:53:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:33:05.129 02:53:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:05.129 02:53:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:05.129 02:53:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.129 ************************************ 00:33:05.129 START TEST nvmf_failover 00:33:05.129 ************************************ 00:33:05.129 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:33:05.129 * Looking for test storage... 
00:33:05.129 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:05.129 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:05.129 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:33:05.129 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:05.388 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:05.389 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:05.389 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:05.389 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:05.389 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:33:05.389 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:33:05.389 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:33:05.389 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:33:05.389 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:33:05.389 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:33:05.389 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:33:05.389 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:05.389 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:33:05.389 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:33:05.389 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:05.389 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:05.389 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:33:05.389 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:33:05.389 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:05.389 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:33:05.389 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:33:05.389 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:33:05.389 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:33:05.389 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:05.389 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:33:05.389 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:33:05.389 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:05.389 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:05.389 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:33:05.389 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:05.389 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:05.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:05.389 --rc genhtml_branch_coverage=1 00:33:05.389 --rc genhtml_function_coverage=1 00:33:05.389 --rc genhtml_legend=1 00:33:05.389 --rc geninfo_all_blocks=1 00:33:05.389 --rc geninfo_unexecuted_blocks=1 00:33:05.389 00:33:05.389 ' 00:33:05.389 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- 
# LCOV_OPTS=' 00:33:05.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:05.389 --rc genhtml_branch_coverage=1 00:33:05.389 --rc genhtml_function_coverage=1 00:33:05.389 --rc genhtml_legend=1 00:33:05.389 --rc geninfo_all_blocks=1 00:33:05.389 --rc geninfo_unexecuted_blocks=1 00:33:05.389 00:33:05.389 ' 00:33:05.389 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:05.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:05.389 --rc genhtml_branch_coverage=1 00:33:05.389 --rc genhtml_function_coverage=1 00:33:05.389 --rc genhtml_legend=1 00:33:05.389 --rc geninfo_all_blocks=1 00:33:05.389 --rc geninfo_unexecuted_blocks=1 00:33:05.389 00:33:05.389 ' 00:33:05.389 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:05.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:05.389 --rc genhtml_branch_coverage=1 00:33:05.389 --rc genhtml_function_coverage=1 00:33:05.389 --rc genhtml_legend=1 00:33:05.389 --rc geninfo_all_blocks=1 00:33:05.389 --rc geninfo_unexecuted_blocks=1 00:33:05.389 00:33:05.389 ' 00:33:05.389 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:05.389 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:33:05.389 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:05.389 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:05.389 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:05.389 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:05.389 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:05.389 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:33:05.389 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:05.389 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:05.389 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:05.389 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:05.389 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:05.389 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:05.389 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:05.389 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:05.389 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:05.389 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:05.389 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:05.389 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:33:05.389 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:05.389 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:05.389 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:05.389 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:05.389 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:05.389 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:05.389 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 
-- # export PATH 00:33:05.389 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:05.389 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:33:05.389 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:05.389 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:05.389 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:05.389 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:05.389 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:05.389 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:05.390 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:05.390 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:05.390 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:05.390 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:05.390 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:05.390 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:05.390 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:05.390 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:05.390 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:33:05.390 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:05.390 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:05.390 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:05.390 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:05.390 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:05.390 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:05.390 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:05.390 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:05.390 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:05.390 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:05.390 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:33:05.390 02:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:07.290 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:07.290 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:33:07.290 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:33:07.290 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:07.290 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:07.290 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:07.290 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:07.290 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:33:07.290 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:07.290 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:33:07.290 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:33:07.290 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:33:07.290 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:33:07.290 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:33:07.290 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:33:07.290 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:07.290 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:07.290 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:07.290 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:07.290 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:07.290 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:07.290 02:53:15 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:07.290 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:07.290 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:07.290 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:07.290 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:07.290 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:07.290 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:07.290 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:07.290 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:07.290 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:07.290 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:07.290 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:07.290 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:07.290 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:07.290 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:07.290 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:07.290 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:07.290 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:07.290 02:53:15 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:07.290 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:07.290 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:07.290 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:07.290 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:07.290 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:07.290 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:07.290 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:07.290 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:07.290 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:07.290 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:07.290 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:07.290 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:07.290 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:07.290 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:07.290 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:07.290 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:07.290 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:07.290 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:07.290 02:53:15 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:07.290 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:07.290 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:07.290 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:07.290 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:07.290 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:07.290 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:07.290 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:07.290 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:07.290 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:07.290 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:07.290 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:07.290 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:07.290 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:07.290 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:07.290 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:33:07.290 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:07.290 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:07.290 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:07.290 02:53:15 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:07.290 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:07.290 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:07.290 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:07.290 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:07.290 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:07.290 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:07.290 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:07.290 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:07.290 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:07.290 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:07.290 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:07.290 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:07.290 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:07.290 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:07.549 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:07.549 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:07.549 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:07.549 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:07.549 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:07.549 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:07.549 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:07.549 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:07.549 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:07.549 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.321 ms 00:33:07.549 00:33:07.549 --- 10.0.0.2 ping statistics --- 00:33:07.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:07.549 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:33:07.549 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:07.549 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:07.549 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.071 ms 00:33:07.549 00:33:07.549 --- 10.0.0.1 ping statistics --- 00:33:07.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:07.549 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:33:07.549 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:07.549 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:33:07.549 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:07.549 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:07.549 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:07.549 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:07.549 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:07.549 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:07.549 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:07.549 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:33:07.549 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:07.549 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:07.549 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:07.549 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=3094349 00:33:07.549 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:07.549 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/common.sh@510 -- # waitforlisten 3094349 00:33:07.549 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3094349 ']' 00:33:07.550 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:07.550 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:07.550 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:07.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:07.550 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:07.550 02:53:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:07.550 [2024-11-17 02:53:15.958012] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:33:07.550 [2024-11-17 02:53:15.958178] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:07.808 [2024-11-17 02:53:16.112935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:07.808 [2024-11-17 02:53:16.255346] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:07.808 [2024-11-17 02:53:16.255414] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:07.808 [2024-11-17 02:53:16.255440] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:07.808 [2024-11-17 02:53:16.255471] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:33:07.808 [2024-11-17 02:53:16.255491] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:07.808 [2024-11-17 02:53:16.258109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:07.808 [2024-11-17 02:53:16.258201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:07.808 [2024-11-17 02:53:16.258205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:08.739 02:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:08.739 02:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:33:08.739 02:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:08.739 02:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:08.739 02:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:08.739 02:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:08.739 02:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:08.996 [2024-11-17 02:53:17.202250] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:08.996 02:53:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:33:09.252 Malloc0 00:33:09.252 02:53:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:09.510 02:53:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:09.768 02:53:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:10.025 [2024-11-17 02:53:18.371323] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:10.025 02:53:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:10.282 [2024-11-17 02:53:18.632077] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:10.282 02:53:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:33:10.541 [2024-11-17 02:53:18.897163] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:33:10.541 02:53:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3094664 00:33:10.541 02:53:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:33:10.541 02:53:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:10.541 02:53:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3094664 /var/tmp/bdevperf.sock 00:33:10.541 02:53:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 
-- # '[' -z 3094664 ']' 00:33:10.541 02:53:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:10.541 02:53:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:10.541 02:53:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:10.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:10.541 02:53:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:10.541 02:53:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:11.916 02:53:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:11.916 02:53:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:33:11.916 02:53:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:33:12.175 NVMe0n1 00:33:12.175 02:53:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:33:12.475 00:33:12.475 02:53:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3094915 00:33:12.475 02:53:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:33:12.475 02:53:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 
00:33:13.457 02:53:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:13.715 [2024-11-17 02:53:22.134108] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:13.715 [2024-11-17 02:53:22.134200] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:13.715 [2024-11-17 02:53:22.134230] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:13.715 [2024-11-17 02:53:22.134249] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:13.715 [2024-11-17 02:53:22.134277] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:13.715 [2024-11-17 02:53:22.134296] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:13.715 [2024-11-17 02:53:22.134313] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:13.716 [2024-11-17 02:53:22.134330] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:13.716 [2024-11-17 02:53:22.134348] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:13.716 [2024-11-17 02:53:22.134366] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:13.716 [2024-11-17 02:53:22.134383] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:13.716 [2024-11-17 02:53:22.134415] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:13.716 [2024-11-17 02:53:22.134431] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:13.716 [2024-11-17 02:53:22.134448] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:13.716 [2024-11-17 02:53:22.134464] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:13.716 [2024-11-17 02:53:22.134480] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:13.716 [2024-11-17 02:53:22.134496] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:13.716 [2024-11-17 02:53:22.134513] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:13.716 [2024-11-17 02:53:22.134529] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:13.716 [2024-11-17 02:53:22.134545] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:13.716 [2024-11-17 02:53:22.134562] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:13.716 [2024-11-17 02:53:22.134577] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:13.716 
[2024-11-17 02:53:22.134594] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:13.716 [2024-11-17 02:53:22.134610] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:13.716 [2024-11-17 02:53:22.134626] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:13.716 [2024-11-17 02:53:22.134642] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:13.716 [2024-11-17 02:53:22.134659] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:13.716 [2024-11-17 02:53:22.134674] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:13.716 [2024-11-17 02:53:22.134691] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:13.716 [2024-11-17 02:53:22.134707] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:13.716 [2024-11-17 02:53:22.134728] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:13.716 [2024-11-17 02:53:22.134745] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:13.716 [2024-11-17 02:53:22.134762] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:13.716 [2024-11-17 02:53:22.134779] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the 
state(6) to be set 00:33:13.716 [2024-11-17 02:53:22.134796] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:13.716 [2024-11-17 02:53:22.134813] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:13.716 [2024-11-17 02:53:22.134829] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:13.716 [2024-11-17 02:53:22.134845] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:13.716 [2024-11-17 02:53:22.134862] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:13.716 [2024-11-17 02:53:22.134879] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:13.716 [2024-11-17 02:53:22.134896] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:13.716 [2024-11-17 02:53:22.134913] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:13.716 [2024-11-17 02:53:22.134930] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:13.716 [2024-11-17 02:53:22.134946] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:13.716 [2024-11-17 02:53:22.134963] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:13.716 [2024-11-17 02:53:22.134979] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000003080 is same with the state(6) to be set 00:33:13.716 [2024-11-17 02:53:22.134996] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:13.716 [2024-11-17 02:53:22.135012] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:13.716 [2024-11-17 02:53:22.135028] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:13.716 [2024-11-17 02:53:22.135045] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:13.716 [2024-11-17 02:53:22.135061] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:13.716 [2024-11-17 02:53:22.135093] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:13.716 [2024-11-17 02:53:22.135121] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:13.716 [2024-11-17 02:53:22.135139] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:13.716 [2024-11-17 02:53:22.135155] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:13.716 [2024-11-17 02:53:22.135172] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:13.716 [2024-11-17 02:53:22.135190] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:13.716 [2024-11-17 02:53:22.135211] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:13.716 [2024-11-17 02:53:22.135229] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:13.716 [2024-11-17 02:53:22.135245] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:13.716 [2024-11-17 02:53:22.135262] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:13.716 [2024-11-17 02:53:22.135279] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:13.716 [2024-11-17 02:53:22.135297] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:13.716 [2024-11-17 02:53:22.135314] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:13.716 [2024-11-17 02:53:22.135333] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:13.716 [2024-11-17 02:53:22.135350] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:13.716 [2024-11-17 02:53:22.135368] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:13.716 [2024-11-17 02:53:22.135399] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:13.716 [2024-11-17 02:53:22.135416] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:13.716 
[2024-11-17 02:53:22.135432] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:13.716 [2024-11-17 02:53:22.135449] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:13.716 [2024-11-17 02:53:22.135465] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:13.716 [2024-11-17 02:53:22.135482] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:13.716 [2024-11-17 02:53:22.135498] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:13.716 [2024-11-17 02:53:22.135515] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:13.716 [2024-11-17 02:53:22.135532] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:13.716 [2024-11-17 02:53:22.135550] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:13.716 02:53:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:33:16.996 02:53:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:33:17.254 00:33:17.254 02:53:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:17.513 [2024-11-17 
02:53:25.838360] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set
00:33:17.513 [... same *ERROR* message repeated 22 more times for tqpair=0x618000003880 between 02:53:25.838472 and 02:53:25.838865, entries elided ...]
00:33:17.513 02:53:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:33:20.795 02:53:28
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:33:20.795 [2024-11-17 02:53:29.164966] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:33:20.795 02:53:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:33:21.727 02:53:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:33:22.292 [2024-11-17 02:53:30.466223] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set
00:33:22.292 [... same *ERROR* message repeated 69 more times for tqpair=0x618000004480 between 02:53:30.466308 and 02:53:30.467690, entries elided ...]
00:33:22.293 02:53:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 3094915
00:33:28.856 {
00:33:28.856 "results": [
00:33:28.856 {
00:33:28.856 "job": "NVMe0n1",
00:33:28.856 "core_mask": "0x1",
00:33:28.856 "workload": "verify",
00:33:28.856 "status": "finished",
00:33:28.856 "verify_range": {
00:33:28.856 "start": 0,
00:33:28.856 "length": 16384
00:33:28.856 },
00:33:28.856 "queue_depth": 128,
00:33:28.856 "io_size": 4096,
00:33:28.856 "runtime": 15.006344,
00:33:28.856 "iops": 6008.925291863228,
00:33:28.856 "mibps": 23.472364421340735,
00:33:28.856 "io_failed": 12621,
00:33:28.856 "io_timeout": 0,
00:33:28.856 "avg_latency_us": 18652.708323372648,
00:33:28.856 "min_latency_us": 1080.1303703703704,
00:33:28.856 "max_latency_us": 22427.875555555554
00:33:28.856 }
00:33:28.856 ],
00:33:28.856 "core_count": 1
00:33:28.856 }
00:33:28.856 02:53:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 3094664
00:33:28.856 02:53:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3094664 ']'
00:33:28.856 02:53:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3094664
00:33:28.856 02:53:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:33:28.856 02:53:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:33:28.856 02:53:36
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3094664
00:33:28.856 02:53:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:33:28.856 02:53:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:33:28.856 02:53:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3094664'
00:33:28.856 killing process with pid 3094664
00:33:28.856 02:53:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3094664
00:33:28.856 02:53:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3094664
00:33:28.856 02:53:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:33:28.856 [2024-11-17 02:53:19.009340] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization...
00:33:28.856 [2024-11-17 02:53:19.009536] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3094664 ]
00:33:28.856 [2024-11-17 02:53:19.150022] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:28.856 [2024-11-17 02:53:19.278281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:33:28.856 Running I/O for 15 seconds...
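Annotation (not part of the original log): the bdevperf results JSON printed above reports both "iops" and "mibps"; as a quick consistency check, "mibps" follows directly from "iops" and "io_size":

```python
# Values copied from the "results" block in the log above ("io_size" is in bytes).
iops = 6008.925291863228
io_size = 4096

# MiB/s = (I/Os per second * bytes per I/O) / 2^20
mibps = iops * io_size / (1024 * 1024)
print(round(mibps, 6))  # -> 23.472364, matching the reported "mibps"
```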
00:33:28.856 6176.00 IOPS, 24.12 MiB/s [2024-11-17T01:53:37.316Z]
[2024-11-17 02:53:22.137300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:57088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:28.856 [2024-11-17 02:53:22.137358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:28.857 [... identical READ / "ABORTED - SQ DELETION (00/08)" command/completion pairs repeated for lba:57096 through lba:57528 (len:8 each), entries elided ...]
00:33:28.858 [2024-11-17 02:53:22.140059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:57536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.858
[2024-11-17 02:53:22.140080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.858 [2024-11-17 02:53:22.140128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:57544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.858 [2024-11-17 02:53:22.140153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.858 [2024-11-17 02:53:22.140178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:57552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.858 [2024-11-17 02:53:22.140200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.858 [2024-11-17 02:53:22.140224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:57560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.858 [2024-11-17 02:53:22.140246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.858 [2024-11-17 02:53:22.140271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:57568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.858 [2024-11-17 02:53:22.140293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.858 [2024-11-17 02:53:22.140316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:57576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.858 [2024-11-17 02:53:22.140343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.858 [2024-11-17 02:53:22.140368] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:57584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.858 [2024-11-17 02:53:22.140390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.858 [2024-11-17 02:53:22.140430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:57592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.858 [2024-11-17 02:53:22.140451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.858 [2024-11-17 02:53:22.140476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:57600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.858 [2024-11-17 02:53:22.140498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.858 [2024-11-17 02:53:22.140522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:57608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.858 [2024-11-17 02:53:22.140543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.858 [2024-11-17 02:53:22.140566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:57616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.858 [2024-11-17 02:53:22.140588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.858 [2024-11-17 02:53:22.140611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:57624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.858 [2024-11-17 02:53:22.140632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.858 [2024-11-17 02:53:22.140654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:57632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.858 [2024-11-17 02:53:22.140676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.858 [2024-11-17 02:53:22.140700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:57640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.858 [2024-11-17 02:53:22.140721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.858 [2024-11-17 02:53:22.140744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:57648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.858 [2024-11-17 02:53:22.140765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.858 [2024-11-17 02:53:22.140788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:57656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.858 [2024-11-17 02:53:22.140810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.858 [2024-11-17 02:53:22.140833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:57664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.858 [2024-11-17 02:53:22.140854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.858 [2024-11-17 02:53:22.140877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:57672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.858 
[2024-11-17 02:53:22.140899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.858 [2024-11-17 02:53:22.140927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:57680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.858 [2024-11-17 02:53:22.140949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.858 [2024-11-17 02:53:22.140973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:57688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.858 [2024-11-17 02:53:22.140995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.858 [2024-11-17 02:53:22.141033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:57696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.858 [2024-11-17 02:53:22.141055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.858 [2024-11-17 02:53:22.141079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:57704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.858 [2024-11-17 02:53:22.141124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.858 [2024-11-17 02:53:22.141152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:57712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.858 [2024-11-17 02:53:22.141175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.859 [2024-11-17 02:53:22.141199] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:57720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.859 [2024-11-17 02:53:22.141222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.859 [2024-11-17 02:53:22.141247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:57728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.859 [2024-11-17 02:53:22.141270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.859 [2024-11-17 02:53:22.141294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:57736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.859 [2024-11-17 02:53:22.141317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.859 [2024-11-17 02:53:22.141342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:57744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.859 [2024-11-17 02:53:22.141364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.859 [2024-11-17 02:53:22.141388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:57752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.859 [2024-11-17 02:53:22.141426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.859 [2024-11-17 02:53:22.141450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:57760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.859 [2024-11-17 02:53:22.141472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:33:28.859 [2024-11-17 02:53:22.141496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:57768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.859 [2024-11-17 02:53:22.141518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.859 [2024-11-17 02:53:22.141541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:57776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.859 [2024-11-17 02:53:22.141567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.859 [2024-11-17 02:53:22.141592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:57784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.859 [2024-11-17 02:53:22.141614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.859 [2024-11-17 02:53:22.141638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:57792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.859 [2024-11-17 02:53:22.141660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.859 [2024-11-17 02:53:22.141683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:57800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.859 [2024-11-17 02:53:22.141704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.859 [2024-11-17 02:53:22.141728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:57808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.859 [2024-11-17 02:53:22.141750] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.859 [2024-11-17 02:53:22.141774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:57816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.859 [2024-11-17 02:53:22.141795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.859 [2024-11-17 02:53:22.141819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:57824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.859 [2024-11-17 02:53:22.141841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.859 [2024-11-17 02:53:22.141864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:57832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.859 [2024-11-17 02:53:22.141886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.859 [2024-11-17 02:53:22.141909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:57840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.859 [2024-11-17 02:53:22.141930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.859 [2024-11-17 02:53:22.141954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:57848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.859 [2024-11-17 02:53:22.141976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.859 [2024-11-17 02:53:22.142000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 
lba:57856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.859 [2024-11-17 02:53:22.142021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.859 [2024-11-17 02:53:22.142044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:57864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.859 [2024-11-17 02:53:22.142066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.859 [2024-11-17 02:53:22.142122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:57872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.859 [2024-11-17 02:53:22.142147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.859 [2024-11-17 02:53:22.142176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:57880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.859 [2024-11-17 02:53:22.142199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.859 [2024-11-17 02:53:22.142223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:57888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.859 [2024-11-17 02:53:22.142246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.859 [2024-11-17 02:53:22.142270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:57896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.859 [2024-11-17 02:53:22.142292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.859 
[2024-11-17 02:53:22.142316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:57904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.859 [2024-11-17 02:53:22.142339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.859 [2024-11-17 02:53:22.142363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:57912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.859 [2024-11-17 02:53:22.142385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.859 [2024-11-17 02:53:22.142426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:57920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.859 [2024-11-17 02:53:22.142447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.859 [2024-11-17 02:53:22.142471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:57928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.859 [2024-11-17 02:53:22.142493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.859 [2024-11-17 02:53:22.142517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:57936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.859 [2024-11-17 02:53:22.142539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.859 [2024-11-17 02:53:22.142563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:57944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.859 [2024-11-17 02:53:22.142584] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.859 [2024-11-17 02:53:22.142608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:57952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.859 [2024-11-17 02:53:22.142630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.859 [2024-11-17 02:53:22.142654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:57960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.859 [2024-11-17 02:53:22.142676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.859 [2024-11-17 02:53:22.142699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:57968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.859 [2024-11-17 02:53:22.142720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.859 [2024-11-17 02:53:22.142744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:57976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.859 [2024-11-17 02:53:22.142766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.859 [2024-11-17 02:53:22.142794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:57984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.859 [2024-11-17 02:53:22.142816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.859 [2024-11-17 02:53:22.142840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 
lba:57992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.859 [2024-11-17 02:53:22.142863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.859 [2024-11-17 02:53:22.142887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:58000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.859 [2024-11-17 02:53:22.142909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.859 [2024-11-17 02:53:22.142932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:58008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.859 [2024-11-17 02:53:22.142954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.859 [2024-11-17 02:53:22.142977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:58016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.859 [2024-11-17 02:53:22.143000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.859 [2024-11-17 02:53:22.143023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:58024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.859 [2024-11-17 02:53:22.143044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.860 [2024-11-17 02:53:22.143068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:58032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.860 [2024-11-17 02:53:22.143115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.860 [2024-11-17 
02:53:22.143143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:58040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.860 [2024-11-17 02:53:22.143166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.860 [2024-11-17 02:53:22.143190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:58048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.860 [2024-11-17 02:53:22.143213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.860 [2024-11-17 02:53:22.143237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:58056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.860 [2024-11-17 02:53:22.143259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.860 [2024-11-17 02:53:22.143284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:58064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.860 [2024-11-17 02:53:22.143307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.860 [2024-11-17 02:53:22.143331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:58072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.860 [2024-11-17 02:53:22.143354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.860 [2024-11-17 02:53:22.143378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:58080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.860 [2024-11-17 02:53:22.143405] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.860 [2024-11-17 02:53:22.143447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:58088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.860 [2024-11-17 02:53:22.143469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.860 [2024-11-17 02:53:22.143493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:58096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.860 [2024-11-17 02:53:22.143515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.860 [2024-11-17 02:53:22.143559] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.860 [2024-11-17 02:53:22.143583] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.860 [2024-11-17 02:53:22.143602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58104 len:8 PRP1 0x0 PRP2 0x0 00:33:28.860 [2024-11-17 02:53:22.143623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.860 [2024-11-17 02:53:22.143904] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:33:28.860 [2024-11-17 02:53:22.143979] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:28.860 [2024-11-17 02:53:22.144014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.860 [2024-11-17 02:53:22.144038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC 
EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:28.860 [2024-11-17 02:53:22.144059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.860 [2024-11-17 02:53:22.144081] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:28.860 [2024-11-17 02:53:22.144112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.860 [2024-11-17 02:53:22.144136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:28.860 [2024-11-17 02:53:22.144157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.860 [2024-11-17 02:53:22.144178] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:33:28.860 [2024-11-17 02:53:22.144272] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2000 (9): Bad file descriptor 00:33:28.860 [2024-11-17 02:53:22.147984] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:33:28.860 [2024-11-17 02:53:22.187558] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
00:33:28.860 6029.50 IOPS, 23.55 MiB/s [2024-11-17T01:53:37.320Z] 6100.00 IOPS, 23.83 MiB/s [2024-11-17T01:53:37.320Z] 6125.25 IOPS, 23.93 MiB/s [2024-11-17T01:53:37.320Z] [2024-11-17 02:53:25.839130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:118944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.860 [2024-11-17 02:53:25.839189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.860 [2024-11-17 02:53:25.839231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:118952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.860 [2024-11-17 02:53:25.839273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.860 [2024-11-17 02:53:25.839307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:118960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.860 [2024-11-17 02:53:25.839330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.860 [2024-11-17 02:53:25.839354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:118968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.860 [2024-11-17 02:53:25.839376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.860 [2024-11-17 02:53:25.839421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:118976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.860 [2024-11-17 02:53:25.839457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.860 [2024-11-17 02:53:25.839480] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:118984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.860 [2024-11-17 02:53:25.839502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.860 [2024-11-17 02:53:25.839525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:118992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.860 [2024-11-17 02:53:25.839546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.860 [2024-11-17 02:53:25.839568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:119000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.860 [2024-11-17 02:53:25.839588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.860 [2024-11-17 02:53:25.839610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:119008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.860 [2024-11-17 02:53:25.839631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.860 [2024-11-17 02:53:25.839653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:119016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.860 [2024-11-17 02:53:25.839674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.860 [2024-11-17 02:53:25.839696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:119024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.860 [2024-11-17 02:53:25.839716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.860 [2024-11-17 02:53:25.839738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:119032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.860 [2024-11-17 02:53:25.839760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.860 [2024-11-17 02:53:25.839783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:119040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.860 [2024-11-17 02:53:25.839804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.860 [2024-11-17 02:53:25.839826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:119048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.860 [2024-11-17 02:53:25.839847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.860 [2024-11-17 02:53:25.839869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:119056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.860 [2024-11-17 02:53:25.839894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.860 [2024-11-17 02:53:25.839918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:119064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.860 [2024-11-17 02:53:25.839939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.860 [2024-11-17 02:53:25.839962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:119072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:28.860 [2024-11-17 02:53:25.839982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.860 [2024-11-17 02:53:25.840006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:119080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.860 [2024-11-17 02:53:25.840026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.860 [2024-11-17 02:53:25.840048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:119088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.860 [2024-11-17 02:53:25.840069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.860 [2024-11-17 02:53:25.840126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:119096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.860 [2024-11-17 02:53:25.840150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.860 [2024-11-17 02:53:25.840175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:119104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.860 [2024-11-17 02:53:25.840197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.860 [2024-11-17 02:53:25.840222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:119112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.860 [2024-11-17 02:53:25.840244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.861 [2024-11-17 02:53:25.840268] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:119120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.861 [2024-11-17 02:53:25.840291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.861 [2024-11-17 02:53:25.840316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:119128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.861 [2024-11-17 02:53:25.840338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.861 [2024-11-17 02:53:25.840362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:119136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.861 [2024-11-17 02:53:25.840399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.861 [2024-11-17 02:53:25.840430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:119144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.861 [2024-11-17 02:53:25.840467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.861 [2024-11-17 02:53:25.840490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:119152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.861 [2024-11-17 02:53:25.840511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.861 [2024-11-17 02:53:25.840542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:119160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.861 [2024-11-17 02:53:25.840564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.861 [2024-11-17 02:53:25.840586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:119168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.861 [2024-11-17 02:53:25.840607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.861 [2024-11-17 02:53:25.840630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:119176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.861 [2024-11-17 02:53:25.840651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.861 [2024-11-17 02:53:25.840674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:119184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.861 [2024-11-17 02:53:25.840695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.861 [2024-11-17 02:53:25.840717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:119192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.861 [2024-11-17 02:53:25.840738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.861 [2024-11-17 02:53:25.840760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:119200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.861 [2024-11-17 02:53:25.840780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.861 [2024-11-17 02:53:25.840803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:119208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.861 
[2024-11-17 02:53:25.840824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.861 [2024-11-17 02:53:25.840846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:119216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.861 [2024-11-17 02:53:25.840866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.861 [2024-11-17 02:53:25.840889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:119224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.861 [2024-11-17 02:53:25.840909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.861 [2024-11-17 02:53:25.840931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:119232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.861 [2024-11-17 02:53:25.840952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.861 [2024-11-17 02:53:25.840974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:119240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.861 [2024-11-17 02:53:25.840995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.861 [2024-11-17 02:53:25.841017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:119248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.861 [2024-11-17 02:53:25.841037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.861 [2024-11-17 02:53:25.841060] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:119256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.861 [2024-11-17 02:53:25.841116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.861 [2024-11-17 02:53:25.841158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:119264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.861 [2024-11-17 02:53:25.841181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.861 [2024-11-17 02:53:25.841205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:119272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.861 [2024-11-17 02:53:25.841227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.861 [2024-11-17 02:53:25.841251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:119280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.861 [2024-11-17 02:53:25.841273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.861 [2024-11-17 02:53:25.841297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:118720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.861 [2024-11-17 02:53:25.841319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.861 [2024-11-17 02:53:25.841343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:118728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.861 [2024-11-17 02:53:25.841365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.861 [2024-11-17 02:53:25.841405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:118736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.861 [2024-11-17 02:53:25.841430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.861 [2024-11-17 02:53:25.841469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:118744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.861 [2024-11-17 02:53:25.841490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.861 [2024-11-17 02:53:25.841512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:118752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.861 [2024-11-17 02:53:25.841533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.861 [2024-11-17 02:53:25.841555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:119288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.861 [2024-11-17 02:53:25.841576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.861 [2024-11-17 02:53:25.841598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:119296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.861 [2024-11-17 02:53:25.841618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.861 [2024-11-17 02:53:25.841640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:119304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:28.861 [2024-11-17 02:53:25.841661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.861 [2024-11-17 02:53:25.841684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:119312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.861 [2024-11-17 02:53:25.841704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.861 [2024-11-17 02:53:25.841726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:119320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.861 [2024-11-17 02:53:25.841750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.861 [2024-11-17 02:53:25.841773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:119328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.861 [2024-11-17 02:53:25.841794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.861 [2024-11-17 02:53:25.841816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:119336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.861 [2024-11-17 02:53:25.841836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.861 [2024-11-17 02:53:25.841859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:119344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.861 [2024-11-17 02:53:25.841879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.862 [2024-11-17 02:53:25.841902] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:119352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.862 [2024-11-17 02:53:25.841922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.862 [2024-11-17 02:53:25.841944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:119360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.862 [2024-11-17 02:53:25.841965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.862 [2024-11-17 02:53:25.841987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:119368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.862 [2024-11-17 02:53:25.842008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.862 [2024-11-17 02:53:25.842031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:119376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.862 [2024-11-17 02:53:25.842051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.862 [2024-11-17 02:53:25.842073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:119384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.862 [2024-11-17 02:53:25.842137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.862 [2024-11-17 02:53:25.842180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:119392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.862 [2024-11-17 02:53:25.842203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.862 [2024-11-17 02:53:25.842226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:119400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.862 [2024-11-17 02:53:25.842248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.862 [2024-11-17 02:53:25.842271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:119408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.862 [2024-11-17 02:53:25.842293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.862 [2024-11-17 02:53:25.842317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:119416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.862 [2024-11-17 02:53:25.842339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.862 [2024-11-17 02:53:25.842367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:119424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.862 [2024-11-17 02:53:25.842421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.862 [2024-11-17 02:53:25.842461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:119432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.862 [2024-11-17 02:53:25.842483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.862 [2024-11-17 02:53:25.842506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:119440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:28.862 [2024-11-17 02:53:25.842527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.862 [2024-11-17 02:53:25.842549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:119448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.862 [2024-11-17 02:53:25.842569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.862 [2024-11-17 02:53:25.842591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:119456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.862 [2024-11-17 02:53:25.842612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.862 [2024-11-17 02:53:25.842635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:119464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.862 [2024-11-17 02:53:25.842656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.862 [2024-11-17 02:53:25.842678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:119472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.862 [2024-11-17 02:53:25.842698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.862 [2024-11-17 02:53:25.842721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:119480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.862 [2024-11-17 02:53:25.842742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.862 [2024-11-17 02:53:25.842764] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:119488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.862 [2024-11-17 02:53:25.842785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.862 [2024-11-17 02:53:25.842806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:119496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.862 [2024-11-17 02:53:25.842828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.862 [2024-11-17 02:53:25.842849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:119504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.862 [2024-11-17 02:53:25.842870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.862 [2024-11-17 02:53:25.842892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:119512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.862 [2024-11-17 02:53:25.842912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.862 [2024-11-17 02:53:25.842936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:119520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.862 [2024-11-17 02:53:25.842960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.862 [2024-11-17 02:53:25.842984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:119528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.862 [2024-11-17 02:53:25.843006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.862 [2024-11-17 02:53:25.843028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:119536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.862 [2024-11-17 02:53:25.843048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.862 [2024-11-17 02:53:25.843128] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.862 [2024-11-17 02:53:25.843174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119544 len:8 PRP1 0x0 PRP2 0x0 00:33:28.862 [2024-11-17 02:53:25.843196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.862 [2024-11-17 02:53:25.843224] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.862 [2024-11-17 02:53:25.843244] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.862 [2024-11-17 02:53:25.843262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119552 len:8 PRP1 0x0 PRP2 0x0 00:33:28.862 [2024-11-17 02:53:25.843282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.862 [2024-11-17 02:53:25.843301] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.862 [2024-11-17 02:53:25.843318] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.862 [2024-11-17 02:53:25.843336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119560 len:8 PRP1 0x0 PRP2 0x0 00:33:28.862 [2024-11-17 02:53:25.843355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.862 [2024-11-17 02:53:25.843388] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.862 [2024-11-17 02:53:25.843414] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.862 [2024-11-17 02:53:25.843431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119568 len:8 PRP1 0x0 PRP2 0x0 00:33:28.862 [2024-11-17 02:53:25.843466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.862 [2024-11-17 02:53:25.843485] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.862 [2024-11-17 02:53:25.843500] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.862 [2024-11-17 02:53:25.843516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119576 len:8 PRP1 0x0 PRP2 0x0 00:33:28.862 [2024-11-17 02:53:25.843535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.862 [2024-11-17 02:53:25.843553] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.862 [2024-11-17 02:53:25.843568] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.862 [2024-11-17 02:53:25.843585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119584 len:8 PRP1 0x0 PRP2 0x0 00:33:28.862 [2024-11-17 02:53:25.843603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.862 [2024-11-17 02:53:25.843621] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.862 [2024-11-17 02:53:25.843641] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.862 [2024-11-17 02:53:25.843674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119592 len:8 PRP1 0x0 PRP2 0x0 00:33:28.862 [2024-11-17 02:53:25.843694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.862 [2024-11-17 02:53:25.843713] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.862 [2024-11-17 02:53:25.843729] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.862 [2024-11-17 02:53:25.843746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119600 len:8 PRP1 0x0 PRP2 0x0 00:33:28.862 [2024-11-17 02:53:25.843765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.862 [2024-11-17 02:53:25.843784] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.862 [2024-11-17 02:53:25.843800] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.862 [2024-11-17 02:53:25.843817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119608 len:8 PRP1 0x0 PRP2 0x0 00:33:28.862 [2024-11-17 02:53:25.843836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.862 [2024-11-17 02:53:25.843854] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.863 [2024-11-17 02:53:25.843870] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.863 [2024-11-17 02:53:25.843887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119616 len:8 PRP1 0x0 PRP2 0x0 
00:33:28.863 [2024-11-17 02:53:25.843905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.863 [2024-11-17 02:53:25.843924] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.863 [2024-11-17 02:53:25.843940] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.863 [2024-11-17 02:53:25.843956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119624 len:8 PRP1 0x0 PRP2 0x0 00:33:28.863 [2024-11-17 02:53:25.843974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.863 [2024-11-17 02:53:25.843993] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.863 [2024-11-17 02:53:25.844009] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.863 [2024-11-17 02:53:25.844026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119632 len:8 PRP1 0x0 PRP2 0x0 00:33:28.863 [2024-11-17 02:53:25.844044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.863 [2024-11-17 02:53:25.844063] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.863 [2024-11-17 02:53:25.844114] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.863 [2024-11-17 02:53:25.844134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119640 len:8 PRP1 0x0 PRP2 0x0 00:33:28.863 [2024-11-17 02:53:25.844154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.863 [2024-11-17 02:53:25.844174] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.863 [2024-11-17 02:53:25.844191] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.863 [2024-11-17 02:53:25.844208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119648 len:8 PRP1 0x0 PRP2 0x0 00:33:28.863 [2024-11-17 02:53:25.844227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.863 [2024-11-17 02:53:25.844251] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.863 [2024-11-17 02:53:25.844268] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.863 [2024-11-17 02:53:25.844285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119656 len:8 PRP1 0x0 PRP2 0x0 00:33:28.863 [2024-11-17 02:53:25.844304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.863 [2024-11-17 02:53:25.844324] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.863 [2024-11-17 02:53:25.844341] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.863 [2024-11-17 02:53:25.844358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119664 len:8 PRP1 0x0 PRP2 0x0 00:33:28.863 [2024-11-17 02:53:25.844377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.863 [2024-11-17 02:53:25.844395] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.863 [2024-11-17 02:53:25.844438] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.863 [2024-11-17 02:53:25.844456] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119672 len:8 PRP1 0x0 PRP2 0x0 00:33:28.863 [2024-11-17 02:53:25.844475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.863 [2024-11-17 02:53:25.844494] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.863 [2024-11-17 02:53:25.844510] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.863 [2024-11-17 02:53:25.844527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119680 len:8 PRP1 0x0 PRP2 0x0 00:33:28.863 [2024-11-17 02:53:25.844545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.863 [2024-11-17 02:53:25.844564] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.863 [2024-11-17 02:53:25.844580] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.863 [2024-11-17 02:53:25.844597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119688 len:8 PRP1 0x0 PRP2 0x0 00:33:28.863 [2024-11-17 02:53:25.844615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.863 [2024-11-17 02:53:25.844634] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.863 [2024-11-17 02:53:25.844650] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.863 [2024-11-17 02:53:25.844666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119696 len:8 PRP1 0x0 PRP2 0x0 00:33:28.863 [2024-11-17 02:53:25.844685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.863 [2024-11-17 02:53:25.844703] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.863 [2024-11-17 02:53:25.844719] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.863 [2024-11-17 02:53:25.844736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119704 len:8 PRP1 0x0 PRP2 0x0 00:33:28.863 [2024-11-17 02:53:25.844754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.863 [2024-11-17 02:53:25.844773] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.863 [2024-11-17 02:53:25.844789] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.863 [2024-11-17 02:53:25.844805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119712 len:8 PRP1 0x0 PRP2 0x0 00:33:28.863 [2024-11-17 02:53:25.844828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.863 [2024-11-17 02:53:25.844848] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.863 [2024-11-17 02:53:25.844864] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.863 [2024-11-17 02:53:25.844881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119720 len:8 PRP1 0x0 PRP2 0x0 00:33:28.863 [2024-11-17 02:53:25.844900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.863 [2024-11-17 02:53:25.844919] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.863 [2024-11-17 02:53:25.844936] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.863 [2024-11-17 02:53:25.844952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119728 len:8 PRP1 0x0 PRP2 0x0 00:33:28.863 [2024-11-17 02:53:25.844971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.863 [2024-11-17 02:53:25.844990] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.863 [2024-11-17 02:53:25.845006] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.863 [2024-11-17 02:53:25.845024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:118760 len:8 PRP1 0x0 PRP2 0x0 00:33:28.863 [2024-11-17 02:53:25.845043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.863 [2024-11-17 02:53:25.845076] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.863 [2024-11-17 02:53:25.845124] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.863 [2024-11-17 02:53:25.845146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:118768 len:8 PRP1 0x0 PRP2 0x0 00:33:28.863 [2024-11-17 02:53:25.845166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.863 [2024-11-17 02:53:25.845186] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.863 [2024-11-17 02:53:25.845203] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.863 [2024-11-17 02:53:25.845221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:118776 len:8 PRP1 0x0 PRP2 0x0 00:33:28.863 
[2024-11-17 02:53:25.845240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.863 [2024-11-17 02:53:25.845260] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.863 [2024-11-17 02:53:25.845284] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.863 [2024-11-17 02:53:25.845302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:118784 len:8 PRP1 0x0 PRP2 0x0 00:33:28.863 [2024-11-17 02:53:25.845322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.863 [2024-11-17 02:53:25.845342] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.863 [2024-11-17 02:53:25.845358] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.863 [2024-11-17 02:53:25.845375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:118792 len:8 PRP1 0x0 PRP2 0x0 00:33:28.863 [2024-11-17 02:53:25.845395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.863 [2024-11-17 02:53:25.845441] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.863 [2024-11-17 02:53:25.845457] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.863 [2024-11-17 02:53:25.845479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:118800 len:8 PRP1 0x0 PRP2 0x0 00:33:28.863 [2024-11-17 02:53:25.845499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.863 [2024-11-17 02:53:25.845519] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:33:28.863 [2024-11-17 02:53:25.845535] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.863 [2024-11-17 02:53:25.845552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:118808 len:8 PRP1 0x0 PRP2 0x0 00:33:28.863 [2024-11-17 02:53:25.845571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.863 [2024-11-17 02:53:25.845590] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.863 [2024-11-17 02:53:25.845606] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.863 [2024-11-17 02:53:25.845623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:118816 len:8 PRP1 0x0 PRP2 0x0 00:33:28.863 [2024-11-17 02:53:25.845642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.863 [2024-11-17 02:53:25.845661] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.863 [2024-11-17 02:53:25.845677] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.864 [2024-11-17 02:53:25.845694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:118824 len:8 PRP1 0x0 PRP2 0x0 00:33:28.864 [2024-11-17 02:53:25.845713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.864 [2024-11-17 02:53:25.845732] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.864 [2024-11-17 02:53:25.845748] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.864 [2024-11-17 02:53:25.845765] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:118832 len:8 PRP1 0x0 PRP2 0x0 00:33:28.864 [2024-11-17 02:53:25.845783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.864 [2024-11-17 02:53:25.845802] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.864 [2024-11-17 02:53:25.845819] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.864 [2024-11-17 02:53:25.845836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:118840 len:8 PRP1 0x0 PRP2 0x0 00:33:28.864 [2024-11-17 02:53:25.845855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.864 [2024-11-17 02:53:25.845874] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.864 [2024-11-17 02:53:25.845892] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.864 [2024-11-17 02:53:25.845911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:118848 len:8 PRP1 0x0 PRP2 0x0 00:33:28.864 [2024-11-17 02:53:25.845930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.864 [2024-11-17 02:53:25.845949] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.864 [2024-11-17 02:53:25.845965] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.864 [2024-11-17 02:53:25.845984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:118856 len:8 PRP1 0x0 PRP2 0x0 00:33:28.864 [2024-11-17 02:53:25.846003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:33:28.864 [2024-11-17 02:53:25.846023] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.864 [2024-11-17 02:53:25.846043] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.864 [2024-11-17 02:53:25.846062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:118864 len:8 PRP1 0x0 PRP2 0x0 00:33:28.864 [2024-11-17 02:53:25.846092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.864 [2024-11-17 02:53:25.846139] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.864 [2024-11-17 02:53:25.846158] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.864 [2024-11-17 02:53:25.846176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:118872 len:8 PRP1 0x0 PRP2 0x0 00:33:28.864 [2024-11-17 02:53:25.846196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.864 [2024-11-17 02:53:25.846216] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.864 [2024-11-17 02:53:25.846233] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.864 [2024-11-17 02:53:25.846251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119736 len:8 PRP1 0x0 PRP2 0x0 00:33:28.864 [2024-11-17 02:53:25.846271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.864 [2024-11-17 02:53:25.846291] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.864 [2024-11-17 02:53:25.846308] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:33:28.864 [2024-11-17 02:53:25.846326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:118880 len:8 PRP1 0x0 PRP2 0x0 00:33:28.864 [2024-11-17 02:53:25.846346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.864 [2024-11-17 02:53:25.846366] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.864 [2024-11-17 02:53:25.846383] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.864 [2024-11-17 02:53:25.846423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:118888 len:8 PRP1 0x0 PRP2 0x0 00:33:28.864 [2024-11-17 02:53:25.846443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.864 [2024-11-17 02:53:25.846468] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.864 [2024-11-17 02:53:25.846484] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.864 [2024-11-17 02:53:25.846501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:118896 len:8 PRP1 0x0 PRP2 0x0 00:33:28.864 [2024-11-17 02:53:25.846520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.864 [2024-11-17 02:53:25.846540] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.864 [2024-11-17 02:53:25.846557] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.864 [2024-11-17 02:53:25.846574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:118904 len:8 PRP1 0x0 PRP2 0x0 00:33:28.864 [2024-11-17 02:53:25.846593] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.864 [2024-11-17 02:53:25.846612] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.864 [2024-11-17 02:53:25.846628] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.864 [2024-11-17 02:53:25.846646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:118912 len:8 PRP1 0x0 PRP2 0x0 00:33:28.864 [2024-11-17 02:53:25.846665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.864 [2024-11-17 02:53:25.846688] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.864 [2024-11-17 02:53:25.846705] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.864 [2024-11-17 02:53:25.846723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:118920 len:8 PRP1 0x0 PRP2 0x0 00:33:28.864 [2024-11-17 02:53:25.846742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.864 [2024-11-17 02:53:25.846762] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.864 [2024-11-17 02:53:25.846779] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.864 [2024-11-17 02:53:25.846796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:118928 len:8 PRP1 0x0 PRP2 0x0 00:33:28.864 [2024-11-17 02:53:25.846815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.864 [2024-11-17 02:53:25.846834] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.864 
[2024-11-17 02:53:25.846851] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.864 [2024-11-17 02:53:25.846868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:118936 len:8 PRP1 0x0 PRP2 0x0 00:33:28.864 [2024-11-17 02:53:25.846887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.864 [2024-11-17 02:53:25.847185] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:33:28.864 [2024-11-17 02:53:25.847246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:28.864 [2024-11-17 02:53:25.847273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.864 [2024-11-17 02:53:25.847298] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:28.864 [2024-11-17 02:53:25.847319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.864 [2024-11-17 02:53:25.847340] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:28.864 [2024-11-17 02:53:25.847361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.864 [2024-11-17 02:53:25.847382] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:28.864 [2024-11-17 02:53:25.847412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:33:28.864 [2024-11-17 02:53:25.847433] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:33:28.864 [2024-11-17 02:53:25.847516] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2000 (9): Bad file descriptor 00:33:28.864 [2024-11-17 02:53:25.851298] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:33:28.864 [2024-11-17 02:53:25.965528] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:33:28.864 5973.60 IOPS, 23.33 MiB/s [2024-11-17T01:53:37.324Z] 5973.50 IOPS, 23.33 MiB/s [2024-11-17T01:53:37.324Z] 5985.57 IOPS, 23.38 MiB/s [2024-11-17T01:53:37.324Z] 5997.75 IOPS, 23.43 MiB/s [2024-11-17T01:53:37.324Z] 5998.22 IOPS, 23.43 MiB/s [2024-11-17T01:53:37.324Z] [2024-11-17 02:53:30.469284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:105616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.864 [2024-11-17 02:53:30.469342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.864 [2024-11-17 02:53:30.469402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:105624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.864 [2024-11-17 02:53:30.469444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.864 [2024-11-17 02:53:30.469470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:105632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.864 [2024-11-17 02:53:30.469493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.864 [2024-11-17 02:53:30.469516] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:105640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.864 [2024-11-17 02:53:30.469538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.864 [2024-11-17 02:53:30.469561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:105648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.864 [2024-11-17 02:53:30.469581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.864 [2024-11-17 02:53:30.469604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:105656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.864 [2024-11-17 02:53:30.469625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.865 [2024-11-17 02:53:30.469648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:105664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.865 [2024-11-17 02:53:30.469670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.865 [2024-11-17 02:53:30.469693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:105672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.865 [2024-11-17 02:53:30.469714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.865 [2024-11-17 02:53:30.469737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:105680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.865 [2024-11-17 02:53:30.469758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.865 [2024-11-17 02:53:30.469782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:105760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.865 [2024-11-17 02:53:30.469803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.865 [2024-11-17 02:53:30.469826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:105768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.865 [2024-11-17 02:53:30.469847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.865 [2024-11-17 02:53:30.469870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:105776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.865 [2024-11-17 02:53:30.469907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.865 [2024-11-17 02:53:30.469932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:105784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.865 [2024-11-17 02:53:30.469954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.865 [2024-11-17 02:53:30.469976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:105792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.865 [2024-11-17 02:53:30.470002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.865 [2024-11-17 02:53:30.470026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:105800 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:33:28.865 [2024-11-17 02:53:30.470048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.865 [2024-11-17 02:53:30.470070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.865 [2024-11-17 02:53:30.470119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.865 [2024-11-17 02:53:30.470145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:105816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.865 [2024-11-17 02:53:30.470166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.865 [2024-11-17 02:53:30.470190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:105824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.865 [2024-11-17 02:53:30.470211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.865 [2024-11-17 02:53:30.470234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:105832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.865 [2024-11-17 02:53:30.470255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.865 [2024-11-17 02:53:30.470279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:105840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.865 [2024-11-17 02:53:30.470301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.865 [2024-11-17 02:53:30.470324] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:105848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.865 [2024-11-17 02:53:30.470345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.865 [2024-11-17 02:53:30.470368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:105856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.865 [2024-11-17 02:53:30.470389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.865 [2024-11-17 02:53:30.470438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:105864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.865 [2024-11-17 02:53:30.470459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.865 [2024-11-17 02:53:30.470482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:105872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.865 [2024-11-17 02:53:30.470502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.865 [2024-11-17 02:53:30.470534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:105880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.865 [2024-11-17 02:53:30.470555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.865 [2024-11-17 02:53:30.470578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:105888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.865 [2024-11-17 02:53:30.470599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.865 [2024-11-17 02:53:30.470626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:105896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.865 [2024-11-17 02:53:30.470647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.865 [2024-11-17 02:53:30.470670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:105904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.865 [2024-11-17 02:53:30.470691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.865 [2024-11-17 02:53:30.470714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:105912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.865 [2024-11-17 02:53:30.470735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.865 [2024-11-17 02:53:30.470758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:105920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.865 [2024-11-17 02:53:30.470779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.865 [2024-11-17 02:53:30.470802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:105928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.865 [2024-11-17 02:53:30.470823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.865 [2024-11-17 02:53:30.470845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:105936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.865 
[2024-11-17 02:53:30.470867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.865 [2024-11-17 02:53:30.470889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:105944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.865 [2024-11-17 02:53:30.470910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.865 [2024-11-17 02:53:30.470933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:105952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.865 [2024-11-17 02:53:30.470954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.865 [2024-11-17 02:53:30.470976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:105960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.865 [2024-11-17 02:53:30.470997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.865 [2024-11-17 02:53:30.471019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:105968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.865 [2024-11-17 02:53:30.471040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.865 [2024-11-17 02:53:30.471063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:105976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.865 [2024-11-17 02:53:30.471107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.865 [2024-11-17 02:53:30.471136] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:105984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.865 [2024-11-17 02:53:30.471159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.865 [2024-11-17 02:53:30.471183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:105992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.865 [2024-11-17 02:53:30.471213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.865 [2024-11-17 02:53:30.471240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:106000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.865 [2024-11-17 02:53:30.471263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.865 [2024-11-17 02:53:30.471288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:106008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.865 [2024-11-17 02:53:30.471310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.865 [2024-11-17 02:53:30.471334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:106016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.865 [2024-11-17 02:53:30.471356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.866 [2024-11-17 02:53:30.471395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:106024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.866 [2024-11-17 02:53:30.471418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.866 [2024-11-17 02:53:30.471442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:106032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.866 [2024-11-17 02:53:30.471479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.866 [2024-11-17 02:53:30.471501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:106040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.866 [2024-11-17 02:53:30.471522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.866 [2024-11-17 02:53:30.471545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:106048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.866 [2024-11-17 02:53:30.471566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.866 [2024-11-17 02:53:30.471589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:106056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.866 [2024-11-17 02:53:30.471610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.866 [2024-11-17 02:53:30.471633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:106064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.866 [2024-11-17 02:53:30.471654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.866 [2024-11-17 02:53:30.471677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:106072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:28.866 [2024-11-17 02:53:30.471698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.866 [2024-11-17 02:53:30.471721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:106080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.866 [2024-11-17 02:53:30.471742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.866 [2024-11-17 02:53:30.471780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:106088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.866 [2024-11-17 02:53:30.471803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.866 [2024-11-17 02:53:30.471831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:106096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.866 [2024-11-17 02:53:30.471853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.866 [2024-11-17 02:53:30.471877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:106104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.866 [2024-11-17 02:53:30.471898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.866 [2024-11-17 02:53:30.471922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:106112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.866 [2024-11-17 02:53:30.471944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.866 [2024-11-17 02:53:30.471968] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:106120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.866 [2024-11-17 02:53:30.471991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.866 [2024-11-17 02:53:30.472014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:106128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.866 [2024-11-17 02:53:30.472037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.866 [2024-11-17 02:53:30.472060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:105688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.866 [2024-11-17 02:53:30.472106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.866 [2024-11-17 02:53:30.472135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:105696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.866 [2024-11-17 02:53:30.472158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.866 [2024-11-17 02:53:30.472182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:105704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.866 [2024-11-17 02:53:30.472204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.866 [2024-11-17 02:53:30.472228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:105712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.866 [2024-11-17 02:53:30.472250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.866 [2024-11-17 02:53:30.472274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:105720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.866 [2024-11-17 02:53:30.472296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.866 [2024-11-17 02:53:30.472320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:105728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.866 [2024-11-17 02:53:30.472342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.866 [2024-11-17 02:53:30.472366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:105736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.866 [2024-11-17 02:53:30.472403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.866 [2024-11-17 02:53:30.472426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:105744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.866 [2024-11-17 02:53:30.472452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.866 [2024-11-17 02:53:30.472478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:105752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.866 [2024-11-17 02:53:30.472500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.866 [2024-11-17 02:53:30.472524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:106136 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:33:28.866 [2024-11-17 02:53:30.472546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.866 [2024-11-17 02:53:30.472569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:106144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.866 [2024-11-17 02:53:30.472590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.866 [2024-11-17 02:53:30.472613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:106152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.866 [2024-11-17 02:53:30.472635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.866 [2024-11-17 02:53:30.472659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:106160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.866 [2024-11-17 02:53:30.472681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.866 [2024-11-17 02:53:30.472704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:106168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.866 [2024-11-17 02:53:30.472726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.866 [2024-11-17 02:53:30.472749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:106176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.866 [2024-11-17 02:53:30.472771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.866 [2024-11-17 02:53:30.472793] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:106184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.866 [2024-11-17 02:53:30.472815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.866 [2024-11-17 02:53:30.472839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:106192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.866 [2024-11-17 02:53:30.472861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.866 [2024-11-17 02:53:30.472884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:106200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.866 [2024-11-17 02:53:30.472905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.866 [2024-11-17 02:53:30.472929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:106208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.866 [2024-11-17 02:53:30.472950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.866 [2024-11-17 02:53:30.472991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:106216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.866 [2024-11-17 02:53:30.473025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.866 [2024-11-17 02:53:30.473051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:106224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.866 [2024-11-17 02:53:30.473078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.866 [2024-11-17 02:53:30.473112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:106232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.866 [2024-11-17 02:53:30.473136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.866 [2024-11-17 02:53:30.473159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:106240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.866 [2024-11-17 02:53:30.473182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.866 [2024-11-17 02:53:30.473206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:106248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.866 [2024-11-17 02:53:30.473228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.866 [2024-11-17 02:53:30.473253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:106256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.866 [2024-11-17 02:53:30.473275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.866 [2024-11-17 02:53:30.473299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:106264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.867 [2024-11-17 02:53:30.473321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.867 [2024-11-17 02:53:30.473346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:106272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.867 
[2024-11-17 02:53:30.473368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.867 [2024-11-17 02:53:30.473392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:106280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.867 [2024-11-17 02:53:30.473414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.867 [2024-11-17 02:53:30.473454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:106288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.867 [2024-11-17 02:53:30.473477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.867 [2024-11-17 02:53:30.473500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:106296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.867 [2024-11-17 02:53:30.473522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.867 [2024-11-17 02:53:30.473546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:106304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.867 [2024-11-17 02:53:30.473567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.867 [2024-11-17 02:53:30.473591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:106312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.867 [2024-11-17 02:53:30.473613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.867 [2024-11-17 02:53:30.473637] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:106320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.867 [2024-11-17 02:53:30.473658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.867 [2024-11-17 02:53:30.473687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:106328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.867 [2024-11-17 02:53:30.473709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.867 [2024-11-17 02:53:30.473734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:106336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.867 [2024-11-17 02:53:30.473755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.867 [2024-11-17 02:53:30.473779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:106344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.867 [2024-11-17 02:53:30.473801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.867 [2024-11-17 02:53:30.473824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:106352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.867 [2024-11-17 02:53:30.473846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.867 [2024-11-17 02:53:30.473869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:106360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.867 [2024-11-17 02:53:30.473891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.867 [2024-11-17 02:53:30.473914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:106368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.867 [2024-11-17 02:53:30.473936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.867 [2024-11-17 02:53:30.473959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:106376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.867 [2024-11-17 02:53:30.473980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.867 [2024-11-17 02:53:30.474004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:106384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.867 [2024-11-17 02:53:30.474025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.867 [2024-11-17 02:53:30.474049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:106392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.867 [2024-11-17 02:53:30.474070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.867 [2024-11-17 02:53:30.474119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:106400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.867 [2024-11-17 02:53:30.474143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.867 [2024-11-17 02:53:30.474167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:106408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.867 
[2024-11-17 02:53:30.474189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.867 [2024-11-17 02:53:30.474213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:106416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.867 [2024-11-17 02:53:30.474236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.867 [2024-11-17 02:53:30.474260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:106424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.867 [2024-11-17 02:53:30.474287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.867 [2024-11-17 02:53:30.474313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:106432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.867 [2024-11-17 02:53:30.474335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.867 [2024-11-17 02:53:30.474360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:106440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.867 [2024-11-17 02:53:30.474399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.867 [2024-11-17 02:53:30.474424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:106448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.867 [2024-11-17 02:53:30.474446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.867 [2024-11-17 02:53:30.474469] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:106456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.867 [2024-11-17 02:53:30.474491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.867 [2024-11-17 02:53:30.474515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:106464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.867 [2024-11-17 02:53:30.474536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.867 [2024-11-17 02:53:30.474559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:106472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.867 [2024-11-17 02:53:30.474581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.867 [2024-11-17 02:53:30.474604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:106480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.867 [2024-11-17 02:53:30.474626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.867 [2024-11-17 02:53:30.474649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:106488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.867 [2024-11-17 02:53:30.474672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.867 [2024-11-17 02:53:30.474696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:106496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.867 [2024-11-17 02:53:30.474718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.867 [2024-11-17 02:53:30.474742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:106504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.867 [2024-11-17 02:53:30.474763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.867 [2024-11-17 02:53:30.474786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:106512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.867 [2024-11-17 02:53:30.474808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.867 [2024-11-17 02:53:30.474852] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.867 [2024-11-17 02:53:30.474878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106520 len:8 PRP1 0x0 PRP2 0x0 00:33:28.867 [2024-11-17 02:53:30.474898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.867 [2024-11-17 02:53:30.474930] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.867 [2024-11-17 02:53:30.474950] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.867 [2024-11-17 02:53:30.474968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106528 len:8 PRP1 0x0 PRP2 0x0 00:33:28.867 [2024-11-17 02:53:30.474988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.867 [2024-11-17 02:53:30.475008] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.867 [2024-11-17 02:53:30.475025] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.867 [2024-11-17 02:53:30.475044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106536 len:8 PRP1 0x0 PRP2 0x0 00:33:28.867 [2024-11-17 02:53:30.475063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.867 [2024-11-17 02:53:30.475106] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.867 [2024-11-17 02:53:30.475126] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.867 [2024-11-17 02:53:30.475144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106544 len:8 PRP1 0x0 PRP2 0x0 00:33:28.867 [2024-11-17 02:53:30.475163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.867 [2024-11-17 02:53:30.475182] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.867 [2024-11-17 02:53:30.475200] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.868 [2024-11-17 02:53:30.475217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106552 len:8 PRP1 0x0 PRP2 0x0 00:33:28.868 [2024-11-17 02:53:30.475236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.868 [2024-11-17 02:53:30.475254] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.868 [2024-11-17 02:53:30.475271] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.868 [2024-11-17 02:53:30.475289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106560 len:8 PRP1 0x0 PRP2 0x0 
00:33:28.868 [2024-11-17 02:53:30.475308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.868 [2024-11-17 02:53:30.475326] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.868 [2024-11-17 02:53:30.475343] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.868 [2024-11-17 02:53:30.475360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106568 len:8 PRP1 0x0 PRP2 0x0 00:33:28.868 [2024-11-17 02:53:30.475380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.868 [2024-11-17 02:53:30.475413] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.868 [2024-11-17 02:53:30.475430] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.868 [2024-11-17 02:53:30.475447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106576 len:8 PRP1 0x0 PRP2 0x0 00:33:28.868 [2024-11-17 02:53:30.475465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.868 [2024-11-17 02:53:30.475484] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.868 [2024-11-17 02:53:30.475500] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.868 [2024-11-17 02:53:30.475516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106584 len:8 PRP1 0x0 PRP2 0x0 00:33:28.868 [2024-11-17 02:53:30.475539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.868 [2024-11-17 02:53:30.475559] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.868 [2024-11-17 02:53:30.475575] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.868 [2024-11-17 02:53:30.475592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106592 len:8 PRP1 0x0 PRP2 0x0 00:33:28.868 [2024-11-17 02:53:30.475610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.868 [2024-11-17 02:53:30.475628] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.868 [2024-11-17 02:53:30.475644] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.868 [2024-11-17 02:53:30.475662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106600 len:8 PRP1 0x0 PRP2 0x0 00:33:28.868 [2024-11-17 02:53:30.475681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.868 [2024-11-17 02:53:30.475699] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.868 [2024-11-17 02:53:30.475715] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.868 [2024-11-17 02:53:30.475732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106608 len:8 PRP1 0x0 PRP2 0x0 00:33:28.868 [2024-11-17 02:53:30.475750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.868 [2024-11-17 02:53:30.475769] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.868 [2024-11-17 02:53:30.475785] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.868 [2024-11-17 02:53:30.475802] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106616 len:8 PRP1 0x0 PRP2 0x0 00:33:28.868 [2024-11-17 02:53:30.475820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.868 [2024-11-17 02:53:30.475838] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.868 [2024-11-17 02:53:30.475855] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.868 [2024-11-17 02:53:30.475911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106624 len:8 PRP1 0x0 PRP2 0x0 00:33:28.868 [2024-11-17 02:53:30.475932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.868 [2024-11-17 02:53:30.475953] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.868 [2024-11-17 02:53:30.475970] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.868 [2024-11-17 02:53:30.475987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106632 len:8 PRP1 0x0 PRP2 0x0 00:33:28.868 [2024-11-17 02:53:30.476006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.868 [2024-11-17 02:53:30.476309] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:33:28.868 [2024-11-17 02:53:30.476369] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:28.868 [2024-11-17 02:53:30.476396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.868 
[2024-11-17 02:53:30.476420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:28.868 [2024-11-17 02:53:30.476446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.868 [2024-11-17 02:53:30.476469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:28.868 [2024-11-17 02:53:30.476490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.868 [2024-11-17 02:53:30.476511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:28.868 [2024-11-17 02:53:30.476532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.868 [2024-11-17 02:53:30.476552] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:33:28.868 [2024-11-17 02:53:30.476634] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2000 (9): Bad file descriptor 00:33:28.868 [2024-11-17 02:53:30.480357] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:33:28.868 [2024-11-17 02:53:30.682513] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
00:33:28.868 5880.10 IOPS, 22.97 MiB/s [2024-11-17T01:53:37.328Z] 5910.91 IOPS, 23.09 MiB/s [2024-11-17T01:53:37.328Z] 5932.58 IOPS, 23.17 MiB/s [2024-11-17T01:53:37.328Z] 5964.23 IOPS, 23.30 MiB/s [2024-11-17T01:53:37.328Z] 5988.43 IOPS, 23.39 MiB/s 00:33:28.868 Latency(us) 00:33:28.868 [2024-11-17T01:53:37.328Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:28.868 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:28.868 Verification LBA range: start 0x0 length 0x4000 00:33:28.868 NVMe0n1 : 15.01 6008.93 23.47 841.04 0.00 18652.71 1080.13 22427.88 00:33:28.868 [2024-11-17T01:53:37.328Z] =================================================================================================================== 00:33:28.868 [2024-11-17T01:53:37.328Z] Total : 6008.93 23.47 841.04 0.00 18652.71 1080.13 22427.88 00:33:28.868 Received shutdown signal, test time was about 15.000000 seconds 00:33:28.868 00:33:28.868 Latency(us) 00:33:28.868 [2024-11-17T01:53:37.328Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:28.868 [2024-11-17T01:53:37.328Z] =================================================================================================================== 00:33:28.868 [2024-11-17T01:53:37.328Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:28.868 02:53:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:33:28.868 02:53:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:33:28.868 02:53:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:33:28.868 02:53:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3096763 00:33:28.868 02:53:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:33:28.868 02:53:36 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3096763 /var/tmp/bdevperf.sock 00:33:28.868 02:53:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3096763 ']' 00:33:28.868 02:53:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:28.868 02:53:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:28.868 02:53:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:28.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:28.868 02:53:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:28.868 02:53:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:29.802 02:53:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:29.802 02:53:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:33:29.802 02:53:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:29.802 [2024-11-17 02:53:38.230688] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:29.802 02:53:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:33:30.060 [2024-11-17 02:53:38.503513] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:33:30.317 02:53:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:33:30.575 NVMe0n1 00:33:30.575 02:53:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:33:30.832 00:33:30.832 02:53:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:33:31.396 00:33:31.396 02:53:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:31.396 02:53:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:33:31.653 02:53:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:31.910 02:53:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:33:35.186 02:53:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:35.186 02:53:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:33:35.186 02:53:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3097555 00:33:35.186 02:53:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:35.186 02:53:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 3097555 00:33:36.561 { 00:33:36.561 "results": [ 00:33:36.561 { 00:33:36.561 "job": "NVMe0n1", 00:33:36.561 "core_mask": "0x1", 00:33:36.561 "workload": "verify", 00:33:36.561 "status": "finished", 00:33:36.561 "verify_range": { 00:33:36.561 "start": 0, 00:33:36.561 "length": 16384 00:33:36.561 }, 00:33:36.561 "queue_depth": 128, 00:33:36.561 "io_size": 4096, 00:33:36.561 "runtime": 1.020945, 00:33:36.561 "iops": 6178.589444093463, 00:33:36.561 "mibps": 24.13511501599009, 00:33:36.561 "io_failed": 0, 00:33:36.561 "io_timeout": 0, 00:33:36.561 "avg_latency_us": 20617.982070034526, 00:33:36.561 "min_latency_us": 4466.157037037037, 00:33:36.561 "max_latency_us": 18447.17037037037 00:33:36.561 } 00:33:36.561 ], 00:33:36.561 "core_count": 1 00:33:36.561 } 00:33:36.561 02:53:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:36.561 [2024-11-17 02:53:36.974565] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:33:36.561 [2024-11-17 02:53:36.974718] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3096763 ] 00:33:36.561 [2024-11-17 02:53:37.110779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:36.561 [2024-11-17 02:53:37.237612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:36.561 [2024-11-17 02:53:40.240581] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:33:36.561 [2024-11-17 02:53:40.240738] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:36.561 [2024-11-17 02:53:40.240794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:36.561 [2024-11-17 02:53:40.240827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:36.561 [2024-11-17 02:53:40.240848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:36.561 [2024-11-17 02:53:40.240871] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:36.561 [2024-11-17 02:53:40.240892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:36.561 [2024-11-17 02:53:40.240915] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:36.561 [2024-11-17 02:53:40.240936] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:36.561 [2024-11-17 02:53:40.240960] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:33:36.561 [2024-11-17 02:53:40.241070] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:33:36.561 [2024-11-17 02:53:40.241137] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2000 (9): Bad file descriptor 00:33:36.561 [2024-11-17 02:53:40.262166] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:33:36.561 Running I/O for 1 seconds... 00:33:36.561 6180.00 IOPS, 24.14 MiB/s 00:33:36.561 Latency(us) 00:33:36.561 [2024-11-17T01:53:45.021Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:36.561 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:36.561 Verification LBA range: start 0x0 length 0x4000 00:33:36.561 NVMe0n1 : 1.02 6178.59 24.14 0.00 0.00 20617.98 4466.16 18447.17 00:33:36.561 [2024-11-17T01:53:45.021Z] =================================================================================================================== 00:33:36.561 [2024-11-17T01:53:45.021Z] Total : 6178.59 24.14 0.00 0.00 20617.98 4466.16 18447.17 00:33:36.561 02:53:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:36.561 02:53:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:33:36.561 02:53:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:36.819 02:53:45 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:36.819 02:53:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:33:37.076 02:53:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:37.334 02:53:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:33:40.611 02:53:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:40.611 02:53:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:33:40.611 02:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 3096763 00:33:40.611 02:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3096763 ']' 00:33:40.611 02:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3096763 00:33:40.611 02:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:33:40.611 02:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:40.611 02:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3096763 00:33:40.869 02:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:40.869 02:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:40.869 02:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3096763' 00:33:40.869 killing 
process with pid 3096763 00:33:40.869 02:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3096763 00:33:40.869 02:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3096763 00:33:41.802 02:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:33:41.802 02:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:41.802 02:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:33:41.802 02:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:41.802 02:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:33:41.802 02:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:41.802 02:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:33:41.802 02:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:41.802 02:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:33:41.802 02:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:41.802 02:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:41.802 rmmod nvme_tcp 00:33:42.060 rmmod nvme_fabrics 00:33:42.060 rmmod nvme_keyring 00:33:42.060 02:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:42.060 02:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:33:42.060 02:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:33:42.060 02:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 3094349 ']' 00:33:42.060 02:53:50 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 3094349 00:33:42.060 02:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3094349 ']' 00:33:42.060 02:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3094349 00:33:42.060 02:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:33:42.060 02:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:42.060 02:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3094349 00:33:42.060 02:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:42.060 02:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:42.060 02:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3094349' 00:33:42.060 killing process with pid 3094349 00:33:42.060 02:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3094349 00:33:42.060 02:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3094349 00:33:43.434 02:53:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:43.434 02:53:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:43.434 02:53:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:43.434 02:53:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:33:43.434 02:53:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:33:43.434 02:53:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:33:43.434 02:53:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:43.434 02:53:51 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:43.434 02:53:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:43.434 02:53:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:43.434 02:53:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:43.434 02:53:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:45.338 02:53:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:45.338 00:33:45.338 real 0m40.113s 00:33:45.338 user 2m21.342s 00:33:45.338 sys 0m6.235s 00:33:45.338 02:53:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:45.338 02:53:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:45.338 ************************************ 00:33:45.338 END TEST nvmf_failover 00:33:45.338 ************************************ 00:33:45.338 02:53:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:33:45.338 02:53:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:45.338 02:53:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:45.338 02:53:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.338 ************************************ 00:33:45.338 START TEST nvmf_host_discovery 00:33:45.338 ************************************ 00:33:45.338 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:33:45.338 * Looking for test storage... 
00:33:45.338 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:45.338 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:45.338 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:33:45.338 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:45.338 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:45.338 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:45.338 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:45.338 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:45.338 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:33:45.338 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:33:45.338 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:33:45.338 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:33:45.338 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:33:45.338 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:33:45.338 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:33:45.338 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:45.338 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:33:45.338 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:33:45.338 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:33:45.338 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:45.338 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:33:45.338 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:33:45.338 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:45.338 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:33:45.338 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:33:45.338 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:33:45.338 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:33:45.338 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:45.338 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:33:45.338 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:33:45.338 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:45.338 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:45.338 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:33:45.338 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:45.338 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:45.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:45.338 --rc genhtml_branch_coverage=1 00:33:45.338 --rc genhtml_function_coverage=1 00:33:45.338 --rc 
genhtml_legend=1 00:33:45.338 --rc geninfo_all_blocks=1 00:33:45.338 --rc geninfo_unexecuted_blocks=1 00:33:45.338 00:33:45.338 ' 00:33:45.338 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:45.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:45.338 --rc genhtml_branch_coverage=1 00:33:45.338 --rc genhtml_function_coverage=1 00:33:45.338 --rc genhtml_legend=1 00:33:45.338 --rc geninfo_all_blocks=1 00:33:45.338 --rc geninfo_unexecuted_blocks=1 00:33:45.338 00:33:45.338 ' 00:33:45.338 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:45.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:45.338 --rc genhtml_branch_coverage=1 00:33:45.338 --rc genhtml_function_coverage=1 00:33:45.338 --rc genhtml_legend=1 00:33:45.338 --rc geninfo_all_blocks=1 00:33:45.338 --rc geninfo_unexecuted_blocks=1 00:33:45.338 00:33:45.338 ' 00:33:45.338 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:45.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:45.338 --rc genhtml_branch_coverage=1 00:33:45.338 --rc genhtml_function_coverage=1 00:33:45.338 --rc genhtml_legend=1 00:33:45.338 --rc geninfo_all_blocks=1 00:33:45.338 --rc geninfo_unexecuted_blocks=1 00:33:45.338 00:33:45.338 ' 00:33:45.338 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:45.338 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:33:45.338 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:45.338 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:45.338 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:45.338 02:53:53 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:45.338 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:45.338 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:45.338 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:45.338 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:45.338 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:45.338 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:45.596 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:45.596 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:45.596 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:45.596 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:45.596 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:45.596 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:45.596 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:45.596 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:33:45.596 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:45.596 02:53:53 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:45.596 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:45.596 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:45.596 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:45.596 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:45.596 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:33:45.596 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:45.596 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:33:45.596 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:45.596 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:45.596 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:45.596 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:45.596 02:53:53 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:45.596 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:45.596 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:45.596 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:45.596 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:45.596 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:45.596 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:33:45.596 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:33:45.596 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:33:45.596 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:33:45.596 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:33:45.596 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:33:45.596 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:33:45.596 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:45.596 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:45.596 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:45.596 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:45.596 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 
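The `[: : integer expression expected` message above comes from `test`'s `-eq` operator being handed an empty string at `nvmf/common.sh` line 33. A minimal sketch of that failure shape and one defensive pattern (this is an illustration, not SPDK's actual fix; `NO_HUGE_COUNT` is a made-up variable name standing in for the empty value in the log):

```shell
#!/usr/bin/env bash
# Sketch of the "[: : integer expression expected" error seen in the log.
# NO_HUGE_COUNT is a hypothetical stand-in for the empty value at line 33.
NO_HUGE_COUNT=''

# Failing shape: [ '' -eq 1 ] is a runtime error in test(1), so the
# branch falls through to else (stderr suppressed here for clarity).
if [ "$NO_HUGE_COUNT" -eq 1 ] 2>/dev/null; then
  status="one"
else
  status="not-one-or-error"
fi

# Defaulting the expansion keeps the numeric comparison well-formed.
if [ "${NO_HUGE_COUNT:-0}" -eq 1 ]; then
  guarded="one"
else
  guarded="not-one"
fi

echo "$status $guarded"
```

Because the script continues past the error, the log shows the message but the test run is unaffected.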
00:33:45.596 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:45.596 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:45.596 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:45.596 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:45.596 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:45.596 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:33:45.596 02:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:47.497 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:47.497 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:33:47.497 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:47.497 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:47.497 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:47.497 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:47.497 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:47.497 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:33:47.497 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:47.497 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:33:47.497 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:33:47.497 
02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:33:47.497 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:33:47.497 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:33:47.497 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:33:47.497 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:47.497 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:47.497 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:47.497 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:47.497 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:47.497 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:47.497 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:47.497 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:47.497 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:47.497 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:47.497 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:47.497 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:47.497 02:53:55 
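The `e810+=(...)`, `x722+=(...)`, and `mlx+=(...)` lines above bucket discovered PCI devices into per-NIC-family arrays keyed by `"vendor:device"` ID. A minimal sketch of that lookup, with a made-up `pci_bus_cache` populated to match the two Intel `0x159b` (E810) devices the log later finds:

```shell
#!/usr/bin/env bash
# Sketch of the device-ID bucketing in nvmf/common.sh. The cache
# contents are example values chosen to mirror the log's two E810 NICs;
# the real pci_bus_cache is built elsewhere from sysfs.
declare -A pci_bus_cache=(
  ["0x8086:0x159b"]="0000:0a:00.0 0000:0a:00.1"  # Intel E810 pair (example)
)
intel=0x8086

e810=()
# Word-splitting the cache entry yields one array element per BDF.
e810+=(${pci_bus_cache["$intel:0x159b"]})

echo "${#e810[@]} e810 device(s): ${e810[*]}"
```

This is why the log then iterates `for pci in "${pci_devs[@]}"` twice, printing `Found 0000:0a:00.0` and `Found 0000:0a:00.1`.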
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:47.497 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:47.497 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:47.497 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:47.497 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:47.497 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:47.497 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:47.497 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:47.497 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:47.497 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:47.497 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:47.497 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:47.497 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:47.497 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:47.497 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:47.497 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:47.497 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:47.497 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:47.498 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:33:47.498 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:47.498 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:47.498 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:47.498 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:47.498 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:47.498 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:47.498 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:47.498 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:47.498 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:47.498 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:47.498 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:47.498 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:47.498 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:47.498 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:47.498 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:47.498 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:47.498 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:47.498 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:47.498 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:47.498 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:47.498 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:47.498 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:47.498 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:47.498 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:47.498 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:47.498 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:47.498 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:47.498 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:33:47.498 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:47.498 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:47.498 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:47.498 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:47.498 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:47.498 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:47.498 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:47.498 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:47.498 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:47.498 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:47.498 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:47.498 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:47.498 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:47.498 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:47.498 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:47.498 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:47.498 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:47.498 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:47.498 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:47.498 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:47.498 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:47.498 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:47.498 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:47.498 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:47.498 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:47.498 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:47.498 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:47.498 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.295 ms 00:33:47.498 00:33:47.498 --- 10.0.0.2 ping statistics --- 00:33:47.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:47.498 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:33:47.498 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:47.498 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:47.498 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:33:47.498 00:33:47.498 --- 10.0.0.1 ping statistics --- 00:33:47.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:47.498 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:33:47.498 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:47.498 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:33:47.498 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:47.498 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:47.498 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:47.498 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:47.498 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:47.498 
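The `nvmf_tcp_init` sequence above moves one NIC (`cvl_0_0`) into a fresh network namespace, addresses both sides, and verifies connectivity with the two pings. A dry-run sketch of that plumbing (the `run` wrapper only prints, since the real `ip`/`ip netns` commands need root; interface names are taken from the log):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the target-namespace setup seen in the log.
# run() records and prints each command instead of executing it.
CMDS=()
run() { CMDS+=("$*"); echo "+ $*"; }

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"                                  # target namespace
run ip link set cvl_0_0 netns "$NS"                     # move target NIC
run ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
```

After this, `ping -c 1 10.0.0.2` from the host and `ip netns exec $NS ping -c 1 10.0.0.1` from the namespace confirm the path, matching the 0% packet loss in the log.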
02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:47.498 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:47.498 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:33:47.498 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:47.498 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:47.498 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:47.498 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=3100423 00:33:47.498 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:33:47.498 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 3100423 00:33:47.498 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 3100423 ']' 00:33:47.498 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:47.498 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:47.498 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:47.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:33:47.498 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:47.498 02:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:47.757 [2024-11-17 02:53:55.998688] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:33:47.757 [2024-11-17 02:53:55.998850] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:47.757 [2024-11-17 02:53:56.145296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:48.016 [2024-11-17 02:53:56.269397] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:48.016 [2024-11-17 02:53:56.269491] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:48.016 [2024-11-17 02:53:56.269516] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:48.016 [2024-11-17 02:53:56.269540] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:48.016 [2024-11-17 02:53:56.269560] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:48.016 [2024-11-17 02:53:56.271197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:48.583 02:53:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:48.583 02:53:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:33:48.583 02:53:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:48.583 02:53:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:48.583 02:53:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:48.583 02:53:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:48.583 02:53:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:48.583 02:53:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:48.583 02:53:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:48.583 [2024-11-17 02:53:57.010367] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:48.583 02:53:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:48.583 02:53:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:33:48.583 02:53:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:48.583 02:53:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:48.583 [2024-11-17 02:53:57.018598] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:33:48.583 02:53:57 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:48.583 02:53:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:33:48.583 02:53:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:48.583 02:53:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:48.583 null0 00:33:48.583 02:53:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:48.583 02:53:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:33:48.583 02:53:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:48.583 02:53:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:48.583 null1 00:33:48.583 02:53:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:48.583 02:53:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:33:48.583 02:53:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:48.583 02:53:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:48.841 02:53:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:48.841 02:53:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3100573 00:33:48.841 02:53:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 3100573 /tmp/host.sock 00:33:48.841 02:53:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:33:48.841 02:53:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@835 -- # '[' -z 3100573 ']' 00:33:48.841 02:53:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:33:48.841 02:53:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:48.841 02:53:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:33:48.841 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:33:48.841 02:53:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:48.841 02:53:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:48.841 [2024-11-17 02:53:57.137513] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:33:48.841 [2024-11-17 02:53:57.137666] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3100573 ] 00:33:48.841 [2024-11-17 02:53:57.273346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:49.099 [2024-11-17 02:53:57.394657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:50.033 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:50.033 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:33:50.033 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:50.033 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:33:50.033 
02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.033 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:50.033 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.033 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:33:50.033 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.033 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:50.033 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.033 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:33:50.033 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:33:50.033 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:50.033 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.033 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:50.033 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:50.033 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:50.033 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:50.033 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.033 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:33:50.033 02:53:58 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:33:50.033 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:50.033 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:50.033 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:50.033 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.033 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:50.033 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:50.033 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.033 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:33:50.033 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:33:50.033 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.033 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:50.033 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.033 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:33:50.033 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:50.033 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.033 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:50.033 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set 
+x 00:33:50.033 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:50.033 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:50.033 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.033 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:33:50.033 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:33:50.033 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:50.033 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.033 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:50.033 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:50.033 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:50.033 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:50.033 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.033 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:33:50.033 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:33:50.033 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.033 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:50.033 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.033 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:33:50.033 02:53:58 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:50.033 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.033 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:50.033 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:50.033 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:50.033 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:50.033 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.033 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:33:50.033 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:33:50.033 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:50.033 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:50.033 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:50.033 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.033 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:50.033 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:50.033 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.033 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:33:50.034 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 
4420 00:33:50.034 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.034 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:50.034 [2024-11-17 02:53:58.422576] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:50.034 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.034 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:33:50.034 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:50.034 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:50.034 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.034 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:50.034 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:50.034 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:50.034 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.034 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:33:50.034 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:33:50.034 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:50.034 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:50.034 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.034 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:33:50.034 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:50.034 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:50.034 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.291 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:33:50.291 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:33:50.291 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:33:50.291 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:50.292 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:50.292 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:50.292 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:50.292 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:50.292 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:33:50.292 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:33:50.292 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:50.292 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.292 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:50.292 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.292 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:33:50.292 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:33:50.292 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:33:50.292 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:50.292 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:33:50.292 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.292 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:50.292 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.292 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:50.292 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:50.292 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:50.292 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:50.292 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 
00:33:50.292 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:33:50.292 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:50.292 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:50.292 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.292 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:50.292 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:50.292 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:50.292 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.292 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:33:50.292 02:53:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:33:50.857 [2024-11-17 02:53:59.214044] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:50.857 [2024-11-17 02:53:59.214105] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:50.857 [2024-11-17 02:53:59.214165] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:51.115 [2024-11-17 02:53:59.340630] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:33:51.115 [2024-11-17 02:53:59.561365] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:33:51.115 [2024-11-17 02:53:59.563197] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 
1] Connecting qpair 0x6150001f2a00:1 started. 00:33:51.115 [2024-11-17 02:53:59.565679] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:51.115 [2024-11-17 02:53:59.565717] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:51.115 [2024-11-17 02:53:59.571094] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x6150001f2a00 was disconnected and freed. delete nvme_qpair. 00:33:51.374 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:51.374 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:51.374 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:33:51.374 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:51.374 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.374 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:51.374 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:51.374 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:51.374 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:51.374 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.374 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:51.374 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:51.374 02:53:59 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:33:51.374 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:33:51.374 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:51.374 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:51.374 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:33:51.374 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:33:51.374 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:51.374 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.374 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:51.374 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:51.374 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:51.374 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:51.374 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.374 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:33:51.374 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:51.374 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:33:51.374 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:33:51.374 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:51.374 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:51.374 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:33:51.374 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:33:51.374 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:51.374 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:51.374 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.374 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:51.374 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:51.374 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:51.374 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.374 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:33:51.374 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:51.374 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:33:51.374 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:33:51.374 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:33:51.374 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:51.374 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:51.374 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:51.374 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:51.374 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:33:51.374 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:33:51.374 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.374 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:51.374 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:51.374 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.374 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:33:51.374 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:33:51.374 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:33:51.374 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:51.374 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:33:51.374 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.374 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:51.374 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.374 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:51.374 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:51.374 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:51.374 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:51.374 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:33:51.374 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:33:51.374 
02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:51.374 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.374 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:51.374 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:51.374 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:51.374 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:51.374 [2024-11-17 02:53:59.775341] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x6150001f2c80:1 started. 00:33:51.374 [2024-11-17 02:53:59.781264] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x6150001f2c80 was disconnected and freed. delete nvme_qpair. 00:33:51.374 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.374 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:51.374 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:51.374 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:33:51.374 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:33:51.374 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:51.374 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:51.374 02:53:59 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:51.374 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:51.374 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:51.374 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:33:51.374 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:33:51.374 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:33:51.374 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.374 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:51.374 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.633 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:33:51.633 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:33:51.633 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:33:51.633 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:51.633 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:33:51.633 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.633 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:51.633 [2024-11-17 02:53:59.855862] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:51.633 [2024-11-17 02:53:59.856239] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:33:51.633 [2024-11-17 02:53:59.856293] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:51.633 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.633 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:51.633 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:51.633 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:51.634 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:51.634 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:51.634 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:33:51.634 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:51.634 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.634 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:51.634 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:51.634 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:51.634 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:51.634 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.634 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:51.634 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:51.634 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:51.634 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:51.634 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:51.634 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:51.634 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:33:51.634 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:33:51.634 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:51.634 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.634 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:51.634 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:51.634 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:51.634 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:51.634 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.634 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ 
\n\v\m\e\0\n\2 ]] 00:33:51.634 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:51.634 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:33:51.634 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:33:51.634 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:51.634 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:51.634 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:33:51.634 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:33:51.634 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:51.634 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:51.634 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.634 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:51.634 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:51.634 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:51.634 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.634 02:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:33:51.634 02:53:59 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:33:51.634 [2024-11-17 02:53:59.983342] bdev_nvme.c:7308:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:33:51.634 [2024-11-17 02:54:00.085746] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:33:51.634 [2024-11-17 02:54:00.085880] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:51.634 [2024-11-17 02:54:00.085914] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:51.634 [2024-11-17 02:54:00.085933] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:52.647 02:54:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:52.647 02:54:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:33:52.647 02:54:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:33:52.647 02:54:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:52.647 02:54:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:52.647 02:54:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.647 02:54:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:52.647 02:54:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:52.647 02:54:00 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- host/discovery.sh@63 -- # xargs 00:33:52.647 02:54:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.647 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:33:52.647 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:52.647 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:33:52.647 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:33:52.647 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:52.647 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:52.647 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:52.647 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:52.647 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:52.647 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:33:52.647 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:52.647 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.647 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:52.647 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:52.647 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.647 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:33:52.648 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:33:52.648 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:33:52.648 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:52.648 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:52.648 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.648 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:52.648 [2024-11-17 02:54:01.068928] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:33:52.648 [2024-11-17 02:54:01.068997] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:52.648 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.648 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:52.648 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:52.648 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:52.648 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 
00:33:52.648 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:52.648 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:33:52.648 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:52.648 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:52.648 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.648 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:52.648 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:52.648 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:52.648 [2024-11-17 02:54:01.076372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:52.648 [2024-11-17 02:54:01.076420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:52.648 [2024-11-17 02:54:01.076457] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:52.648 [2024-11-17 02:54:01.076481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:52.648 [2024-11-17 02:54:01.076503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:52.648 [2024-11-17 02:54:01.076523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:52.648 [2024-11-17 
02:54:01.076546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:52.648 [2024-11-17 02:54:01.076568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:52.648 [2024-11-17 02:54:01.076589] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:33:52.648 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.648 [2024-11-17 02:54:01.086359] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:33:52.648 [2024-11-17 02:54:01.096417] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:52.648 [2024-11-17 02:54:01.096474] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:52.648 [2024-11-17 02:54:01.096495] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:52.648 [2024-11-17 02:54:01.096510] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:52.648 [2024-11-17 02:54:01.096576] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:33:52.648 [2024-11-17 02:54:01.096819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.648 [2024-11-17 02:54:01.096861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:33:52.648 [2024-11-17 02:54:01.096887] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:33:52.648 [2024-11-17 02:54:01.096923] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:33:52.648 [2024-11-17 02:54:01.096989] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:52.648 [2024-11-17 02:54:01.097018] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:52.648 [2024-11-17 02:54:01.097067] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:52.648 [2024-11-17 02:54:01.097088] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:52.648 [2024-11-17 02:54:01.097133] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:52.648 [2024-11-17 02:54:01.097149] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:52.648 [2024-11-17 02:54:01.106618] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:52.648 [2024-11-17 02:54:01.106651] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:33:52.648 [2024-11-17 02:54:01.106666] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:52.648 [2024-11-17 02:54:01.106679] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:52.648 [2024-11-17 02:54:01.106720] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:52.648 [2024-11-17 02:54:01.106875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.648 [2024-11-17 02:54:01.106913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:33:52.648 [2024-11-17 02:54:01.106937] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:33:52.648 [2024-11-17 02:54:01.106971] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:33:52.648 [2024-11-17 02:54:01.107003] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:52.648 [2024-11-17 02:54:01.107025] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:52.648 [2024-11-17 02:54:01.107045] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:52.907 [2024-11-17 02:54:01.107064] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:52.907 [2024-11-17 02:54:01.107079] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:33:52.907 [2024-11-17 02:54:01.107091] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:52.907 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:52.907 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:52.907 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:52.907 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:52.907 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:52.907 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:52.907 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:33:52.907 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:33:52.907 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:52.907 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.907 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:52.907 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:52.907 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:52.907 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:52.907 [2024-11-17 02:54:01.116775] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs 
for reset. 00:33:52.907 [2024-11-17 02:54:01.116817] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:52.907 [2024-11-17 02:54:01.116837] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:52.907 [2024-11-17 02:54:01.116858] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:52.907 [2024-11-17 02:54:01.116902] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:52.907 [2024-11-17 02:54:01.117102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.907 [2024-11-17 02:54:01.117161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:33:52.907 [2024-11-17 02:54:01.117192] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:33:52.907 [2024-11-17 02:54:01.117227] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:33:52.907 [2024-11-17 02:54:01.117276] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:52.907 [2024-11-17 02:54:01.117305] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:52.907 [2024-11-17 02:54:01.117327] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:52.907 [2024-11-17 02:54:01.117347] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:33:52.907 [2024-11-17 02:54:01.117362] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:52.907 [2024-11-17 02:54:01.117392] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:52.907 [2024-11-17 02:54:01.126942] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:52.907 [2024-11-17 02:54:01.126978] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:52.907 [2024-11-17 02:54:01.126993] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:52.907 [2024-11-17 02:54:01.127005] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:52.907 [2024-11-17 02:54:01.127041] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:33:52.907 [2024-11-17 02:54:01.127235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.907 [2024-11-17 02:54:01.127274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:33:52.907 [2024-11-17 02:54:01.127299] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:33:52.907 [2024-11-17 02:54:01.127333] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:33:52.907 [2024-11-17 02:54:01.127365] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:52.907 [2024-11-17 02:54:01.127387] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:52.907 [2024-11-17 02:54:01.127417] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:52.907 [2024-11-17 02:54:01.127436] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:52.907 [2024-11-17 02:54:01.127465] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:52.907 [2024-11-17 02:54:01.127477] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:52.907 [2024-11-17 02:54:01.137104] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:52.907 [2024-11-17 02:54:01.137144] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:33:52.907 [2024-11-17 02:54:01.137160] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:52.907 [2024-11-17 02:54:01.137173] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:52.907 [2024-11-17 02:54:01.137210] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:52.907 [2024-11-17 02:54:01.137344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.907 [2024-11-17 02:54:01.137397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:33:52.907 [2024-11-17 02:54:01.137435] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:33:52.907 [2024-11-17 02:54:01.137470] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:33:52.907 [2024-11-17 02:54:01.137501] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:52.907 [2024-11-17 02:54:01.137523] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:52.907 [2024-11-17 02:54:01.137542] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:52.907 [2024-11-17 02:54:01.137561] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:52.907 [2024-11-17 02:54:01.137576] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:33:52.907 [2024-11-17 02:54:01.137607] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:52.907 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.907 [2024-11-17 02:54:01.147252] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:52.907 [2024-11-17 02:54:01.147285] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:52.907 [2024-11-17 02:54:01.147301] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:52.907 [2024-11-17 02:54:01.147313] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:52.907 [2024-11-17 02:54:01.147360] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:33:52.907 [2024-11-17 02:54:01.147601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.907 [2024-11-17 02:54:01.147640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:33:52.907 [2024-11-17 02:54:01.147665] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:33:52.907 [2024-11-17 02:54:01.147699] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:33:52.907 [2024-11-17 02:54:01.147747] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:52.907 [2024-11-17 02:54:01.147773] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:52.907 [2024-11-17 02:54:01.147793] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:52.907 [2024-11-17 02:54:01.147811] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:52.907 [2024-11-17 02:54:01.147841] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:52.907 [2024-11-17 02:54:01.147853] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:33:52.907 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:52.907 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:52.907 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:33:52.907 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:33:52.907 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:52.907 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:52.908 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:33:52.908 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:33:52.908 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:52.908 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:52.908 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.908 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:52.908 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:52.908 [2024-11-17 02:54:01.156806] bdev_nvme.c:7171:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:33:52.908 [2024-11-17 02:54:01.156858] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:52.908 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:52.908 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.908 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:33:52.908 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:52.908 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:33:52.908 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:33:52.908 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:52.908 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:52.908 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:52.908 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:52.908 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:52.908 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:33:52.908 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:52.908 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:52.908 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.908 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:52.908 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.908 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:33:52.908 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:33:52.908 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:33:52.908 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:52.908 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:33:52.908 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.908 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:52.908 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.908 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:33:52.908 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:33:52.908 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:52.908 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:52.908 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:33:52.908 02:54:01 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:33:52.908 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:52.908 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.908 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:52.908 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:52.908 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:52.908 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:52.908 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.908 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:33:52.908 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:52.908 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:33:52.908 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:33:52.908 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:52.908 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:52.908 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:33:52.908 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:33:52.908 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:52.908 
02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:52.908 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.908 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:52.908 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:52.908 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:52.908 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.908 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:33:52.908 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:52.908 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:33:52.908 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:33:52.908 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:52.908 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:52.908 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:52.908 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:52.908 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:52.908 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:33:52.908 02:54:01 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:52.908 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:33:52.908 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.908 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:52.908 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.166 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:33:53.166 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:33:53.166 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:33:53.166 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:53.166 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:53.166 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.166 02:54:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:54.098 [2024-11-17 02:54:02.420896] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:54.098 [2024-11-17 02:54:02.420957] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:54.098 [2024-11-17 02:54:02.421013] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:54.098 [2024-11-17 02:54:02.549465] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] 
NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:33:54.664 [2024-11-17 02:54:02.854706] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:33:54.664 [2024-11-17 02:54:02.856607] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x6150001f3e00:1 started. 00:33:54.664 [2024-11-17 02:54:02.859617] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:54.664 [2024-11-17 02:54:02.859693] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:54.664 02:54:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.664 02:54:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:54.664 02:54:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:33:54.664 02:54:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:54.664 02:54:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:33:54.664 [2024-11-17 02:54:02.862503] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x6150001f3e00 was disconnected and freed. delete nvme_qpair. 
00:33:54.664 02:54:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:54.664 02:54:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:33:54.664 02:54:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:54.664 02:54:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:54.664 02:54:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.664 02:54:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:54.664 request: 00:33:54.664 { 00:33:54.664 "name": "nvme", 00:33:54.664 "trtype": "tcp", 00:33:54.664 "traddr": "10.0.0.2", 00:33:54.664 "adrfam": "ipv4", 00:33:54.664 "trsvcid": "8009", 00:33:54.664 "hostnqn": "nqn.2021-12.io.spdk:test", 00:33:54.664 "wait_for_attach": true, 00:33:54.664 "method": "bdev_nvme_start_discovery", 00:33:54.664 "req_id": 1 00:33:54.664 } 00:33:54.664 Got JSON-RPC error response 00:33:54.664 response: 00:33:54.664 { 00:33:54.664 "code": -17, 00:33:54.664 "message": "File exists" 00:33:54.664 } 00:33:54.664 02:54:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:54.664 02:54:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:33:54.664 02:54:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:54.664 02:54:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:54.664 02:54:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:54.664 02:54:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # 
get_discovery_ctrlrs 00:33:54.664 02:54:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:54.664 02:54:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:33:54.664 02:54:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.664 02:54:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:54.664 02:54:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:33:54.664 02:54:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:33:54.664 02:54:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.664 02:54:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:33:54.664 02:54:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:33:54.664 02:54:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:54.665 02:54:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.665 02:54:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:54.665 02:54:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:54.665 02:54:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:54.665 02:54:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:54.665 02:54:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.665 02:54:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:54.665 02:54:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:54.665 02:54:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:33:54.665 02:54:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:54.665 02:54:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:33:54.665 02:54:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:54.665 02:54:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:33:54.665 02:54:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:54.665 02:54:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:54.665 02:54:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.665 02:54:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:54.665 request: 00:33:54.665 { 00:33:54.665 "name": "nvme_second", 00:33:54.665 "trtype": "tcp", 00:33:54.665 "traddr": "10.0.0.2", 00:33:54.665 "adrfam": "ipv4", 00:33:54.665 "trsvcid": "8009", 00:33:54.665 "hostnqn": "nqn.2021-12.io.spdk:test", 00:33:54.665 "wait_for_attach": true, 00:33:54.665 "method": "bdev_nvme_start_discovery", 00:33:54.665 "req_id": 1 00:33:54.665 } 00:33:54.665 Got JSON-RPC error response 00:33:54.665 response: 00:33:54.665 { 00:33:54.665 "code": -17, 00:33:54.665 "message": "File exists" 00:33:54.665 } 
00:33:54.665 02:54:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:54.665 02:54:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:33:54.665 02:54:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:54.665 02:54:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:54.665 02:54:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:54.665 02:54:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:33:54.665 02:54:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:54.665 02:54:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:33:54.665 02:54:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.665 02:54:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:54.665 02:54:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:33:54.665 02:54:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:33:54.665 02:54:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.665 02:54:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:33:54.665 02:54:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:33:54.665 02:54:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:54.665 02:54:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:54.665 02:54:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:33:54.665 02:54:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:54.665 02:54:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:54.665 02:54:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:54.665 02:54:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.665 02:54:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:54.665 02:54:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:54.665 02:54:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:33:54.665 02:54:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:54.665 02:54:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:33:54.665 02:54:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:54.665 02:54:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:33:54.665 02:54:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:54.665 02:54:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:54.665 02:54:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:33:54.665 02:54:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:56.037 [2024-11-17 02:54:04.071578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.037 [2024-11-17 02:54:04.071659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f4080 with addr=10.0.0.2, port=8010 00:33:56.037 [2024-11-17 02:54:04.071742] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:33:56.037 [2024-11-17 02:54:04.071771] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:56.037 [2024-11-17 02:54:04.071795] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:33:56.971 [2024-11-17 02:54:05.073959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.971 [2024-11-17 02:54:05.074012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f4300 with addr=10.0.0.2, port=8010 00:33:56.971 [2024-11-17 02:54:05.074080] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:33:56.971 [2024-11-17 02:54:05.074113] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:56.971 [2024-11-17 02:54:05.074163] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:33:57.906 [2024-11-17 02:54:06.076116] bdev_nvme.c:7427:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:33:57.906 request: 00:33:57.906 { 00:33:57.906 "name": "nvme_second", 00:33:57.906 "trtype": "tcp", 00:33:57.906 "traddr": "10.0.0.2", 00:33:57.906 "adrfam": "ipv4", 00:33:57.906 "trsvcid": "8010", 00:33:57.906 "hostnqn": "nqn.2021-12.io.spdk:test", 00:33:57.906 "wait_for_attach": false, 00:33:57.906 "attach_timeout_ms": 3000, 00:33:57.906 "method": "bdev_nvme_start_discovery", 00:33:57.906 "req_id": 
1 00:33:57.906 } 00:33:57.906 Got JSON-RPC error response 00:33:57.906 response: 00:33:57.906 { 00:33:57.906 "code": -110, 00:33:57.906 "message": "Connection timed out" 00:33:57.906 } 00:33:57.906 02:54:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:57.906 02:54:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:33:57.906 02:54:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:57.906 02:54:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:57.906 02:54:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:57.906 02:54:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:33:57.906 02:54:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:57.906 02:54:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:33:57.906 02:54:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.906 02:54:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:57.906 02:54:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:33:57.906 02:54:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:33:57.906 02:54:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.906 02:54:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:33:57.906 02:54:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:33:57.906 02:54:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3100573 00:33:57.906 02:54:06 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:33:57.906 02:54:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:57.906 02:54:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:33:57.906 02:54:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:57.906 02:54:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:33:57.906 02:54:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:57.906 02:54:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:57.906 rmmod nvme_tcp 00:33:57.906 rmmod nvme_fabrics 00:33:57.906 rmmod nvme_keyring 00:33:57.906 02:54:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:57.906 02:54:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:33:57.906 02:54:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:33:57.906 02:54:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 3100423 ']' 00:33:57.906 02:54:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 3100423 00:33:57.906 02:54:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 3100423 ']' 00:33:57.906 02:54:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 3100423 00:33:57.906 02:54:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:33:57.906 02:54:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:57.906 02:54:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3100423 00:33:57.906 02:54:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:33:57.906 02:54:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:57.906 02:54:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3100423' 00:33:57.906 killing process with pid 3100423 00:33:57.906 02:54:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 3100423 00:33:57.906 02:54:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 3100423 00:33:59.282 02:54:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:59.282 02:54:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:59.282 02:54:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:59.282 02:54:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:33:59.282 02:54:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:33:59.282 02:54:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:59.282 02:54:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:33:59.282 02:54:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:59.282 02:54:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:59.282 02:54:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:59.282 02:54:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:59.282 02:54:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:01.184 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 
addr flush cvl_0_1 00:34:01.184 00:34:01.184 real 0m15.734s 00:34:01.184 user 0m23.509s 00:34:01.184 sys 0m3.004s 00:34:01.184 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:01.184 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:01.184 ************************************ 00:34:01.184 END TEST nvmf_host_discovery 00:34:01.184 ************************************ 00:34:01.184 02:54:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:34:01.184 02:54:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:01.184 02:54:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:01.184 02:54:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.184 ************************************ 00:34:01.184 START TEST nvmf_host_multipath_status 00:34:01.184 ************************************ 00:34:01.184 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:34:01.184 * Looking for test storage... 
00:34:01.184 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:01.184 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:01.184 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:34:01.184 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:01.184 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:01.184 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:01.184 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:01.184 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:01.184 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:34:01.184 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:34:01.184 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:34:01.184 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:34:01.184 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:34:01.184 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:34:01.184 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:34:01.184 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:01.184 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:34:01.184 02:54:09 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:34:01.184 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:01.184 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:01.184 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:34:01.184 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:34:01.184 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:01.184 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:34:01.184 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:34:01.184 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:34:01.184 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:34:01.184 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:01.184 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:34:01.184 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:34:01.184 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:01.184 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:01.184 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:34:01.184 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:01.184 02:54:09 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:01.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:01.184 --rc genhtml_branch_coverage=1 00:34:01.184 --rc genhtml_function_coverage=1 00:34:01.184 --rc genhtml_legend=1 00:34:01.184 --rc geninfo_all_blocks=1 00:34:01.184 --rc geninfo_unexecuted_blocks=1 00:34:01.184 00:34:01.184 ' 00:34:01.184 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:01.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:01.184 --rc genhtml_branch_coverage=1 00:34:01.184 --rc genhtml_function_coverage=1 00:34:01.184 --rc genhtml_legend=1 00:34:01.184 --rc geninfo_all_blocks=1 00:34:01.184 --rc geninfo_unexecuted_blocks=1 00:34:01.184 00:34:01.184 ' 00:34:01.184 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:01.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:01.184 --rc genhtml_branch_coverage=1 00:34:01.184 --rc genhtml_function_coverage=1 00:34:01.184 --rc genhtml_legend=1 00:34:01.184 --rc geninfo_all_blocks=1 00:34:01.184 --rc geninfo_unexecuted_blocks=1 00:34:01.184 00:34:01.184 ' 00:34:01.184 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:01.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:01.184 --rc genhtml_branch_coverage=1 00:34:01.184 --rc genhtml_function_coverage=1 00:34:01.184 --rc genhtml_legend=1 00:34:01.184 --rc geninfo_all_blocks=1 00:34:01.184 --rc geninfo_unexecuted_blocks=1 00:34:01.184 00:34:01.184 ' 00:34:01.184 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:01.184 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:34:01.184 
02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:01.184 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:01.184 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:01.184 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:01.184 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:01.184 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:01.184 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:01.184 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:01.184 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:01.184 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:01.184 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:01.184 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:01.184 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:01.184 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:01.184 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:01.184 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:34:01.184 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:01.184 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:34:01.184 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:01.184 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:01.184 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:01.185 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:01.185 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:01.185 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:01.185 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:34:01.185 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:01.185 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:34:01.185 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:01.185 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:01.185 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:01.185 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:01.185 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:01.185 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:01.185 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:01.185 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:01.185 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:01.185 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:01.185 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:34:01.185 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:34:01.185 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:01.185 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:34:01.185 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:34:01.185 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:34:01.185 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:34:01.185 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:01.185 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:01.185 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:01.185 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:01.185 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:01.185 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:01.185 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:01.185 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:01.185 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:01.185 02:54:09 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:01.185 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:34:01.185 02:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:03.085 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:03.085 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:34:03.085 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:03.085 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:03.085 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:03.085 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:03.085 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:03.085 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:34:03.085 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:03.085 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:34:03.085 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:34:03.085 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:34:03.085 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:34:03.085 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:34:03.085 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
00:34:03.085 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:03.085 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:03.086 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:03.086 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:03.086 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:03.086 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:03.086 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:03.086 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:03.086 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:03.086 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:03.086 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:03.086 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:03.086 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:03.086 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:03.086 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 00:34:03.086 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:03.086 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:03.086 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:03.086 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:03.086 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:03.086 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:03.086 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:03.086 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:03.086 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:03.086 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:03.086 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:03.086 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:03.086 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:03.086 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:03.086 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:03.086 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:03.086 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:03.343 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:03.343 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:03.343 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:03.343 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:03.343 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:03.343 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:03.343 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:03.343 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:03.343 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:03.343 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:03.343 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:03.343 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:03.343 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:03.343 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:03.343 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:03.343 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:03.343 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:03.343 02:54:11 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:03.343 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:03.343 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:03.343 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:03.343 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:03.343 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:03.343 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:03.343 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:03.343 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:03.343 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:34:03.343 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:03.343 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:03.343 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:03.343 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:03.343 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:03.343 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:03.343 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:03.343 02:54:11 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:03.343 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:03.343 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:03.343 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:03.343 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:03.343 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:03.343 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:03.343 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:03.343 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:03.343 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:03.343 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:03.343 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:03.343 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:03.343 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:03.343 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:03.343 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:03.343 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:03.343 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:03.343 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:03.343 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:03.343 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.312 ms 00:34:03.343 00:34:03.343 --- 10.0.0.2 ping statistics --- 00:34:03.343 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:03.343 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:34:03.343 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:03.343 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:03.343 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:34:03.343 00:34:03.343 --- 10.0.0.1 ping statistics --- 00:34:03.343 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:03.343 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:34:03.343 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:03.343 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:34:03.344 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:03.344 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:03.344 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:03.344 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:03.344 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:03.344 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:03.344 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:03.344 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:34:03.344 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:03.344 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:03.344 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:03.344 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=3104483 00:34:03.344 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:34:03.344 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 3104483 00:34:03.344 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 3104483 ']' 00:34:03.344 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:03.344 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:03.344 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:03.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:03.344 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:03.344 02:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:03.344 [2024-11-17 02:54:11.790463] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:34:03.344 [2024-11-17 02:54:11.790604] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:03.601 [2024-11-17 02:54:11.941945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:03.858 [2024-11-17 02:54:12.081368] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:03.858 [2024-11-17 02:54:12.081455] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:34:03.858 [2024-11-17 02:54:12.081482] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:03.858 [2024-11-17 02:54:12.081512] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:03.858 [2024-11-17 02:54:12.081532] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:03.858 [2024-11-17 02:54:12.084088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:03.858 [2024-11-17 02:54:12.084094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:04.423 02:54:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:04.423 02:54:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:34:04.423 02:54:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:04.423 02:54:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:04.423 02:54:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:04.423 02:54:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:04.423 02:54:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3104483 00:34:04.423 02:54:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:04.682 [2024-11-17 02:54:13.070225] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:04.682 02:54:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:34:05.247 Malloc0 00:34:05.247 02:54:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:34:05.505 02:54:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:05.762 02:54:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:06.020 [2024-11-17 02:54:14.253251] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:06.020 02:54:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:06.279 [2024-11-17 02:54:14.525900] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:06.279 02:54:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3104784 00:34:06.279 02:54:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:34:06.279 02:54:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:34:06.279 02:54:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3104784 /var/tmp/bdevperf.sock 00:34:06.279 02:54:14 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 3104784 ']' 00:34:06.279 02:54:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:06.279 02:54:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:06.279 02:54:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:06.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:06.279 02:54:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:06.279 02:54:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:07.213 02:54:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:07.213 02:54:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:34:07.213 02:54:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:34:07.471 02:54:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:34:08.037 Nvme0n1 00:34:08.037 02:54:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
00:34:08.603 Nvme0n1
00:34:08.603 02:54:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2
00:34:08.603 02:54:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests
00:34:10.503 02:54:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized
00:34:10.503 02:54:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized
00:34:10.761 02:54:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:34:11.327 02:54:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1
00:34:12.260 02:54:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true
00:34:12.260 02:54:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:34:12.260 02:54:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:12.260 02:54:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:34:12.519 02:54:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:34:12.519 02:54:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:34:12.519 02:54:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:12.519 02:54:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:34:12.777 02:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:34:12.777 02:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:34:12.777 02:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:12.777 02:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:34:13.035 02:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:34:13.035 02:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:34:13.035 02:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:13.035 02:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:34:13.293 02:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:34:13.293 02:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:34:13.293 02:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:13.293 02:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:34:13.552 02:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:34:13.552 02:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:34:13.552 02:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:13.552 02:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:34:13.809 02:54:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:34:13.809 02:54:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized
00:34:13.809 02:54:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:34:14.067 02:54:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:34:14.325 02:54:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1
00:34:15.259 02:54:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true
00:34:15.259 02:54:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:34:15.259 02:54:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:15.259 02:54:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:34:15.825 02:54:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:34:15.825 02:54:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:34:15.825 02:54:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:15.825 02:54:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:34:15.825 02:54:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:34:15.825 02:54:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:34:15.825 02:54:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:15.825 02:54:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:34:16.084 02:54:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:34:16.084 02:54:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:34:16.084 02:54:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:16.084 02:54:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:34:16.651 02:54:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:34:16.651 02:54:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:34:16.651 02:54:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:16.651 02:54:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:34:16.651 02:54:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:34:16.651 02:54:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:34:16.651 02:54:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:16.651 02:54:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:34:16.909 02:54:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:34:16.909 02:54:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized
00:34:16.909 02:54:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:34:17.168 02:54:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized
00:34:17.735 02:54:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1
00:34:18.669 02:54:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true
00:34:18.669 02:54:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:34:18.669 02:54:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:18.669 02:54:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:34:18.927 02:54:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:34:18.927 02:54:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:34:18.927 02:54:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:18.927 02:54:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:34:19.184 02:54:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:34:19.184 02:54:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:34:19.185 02:54:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:19.185 02:54:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:34:19.442 02:54:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:34:19.442 02:54:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:34:19.442 02:54:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:19.442 02:54:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:34:19.701 02:54:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:34:19.701 02:54:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:34:19.701 02:54:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:19.701 02:54:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:34:19.960 02:54:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:34:19.960 02:54:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:34:19.960 02:54:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:19.960 02:54:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:34:20.219 02:54:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:34:20.219 02:54:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible
00:34:20.219 02:54:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:34:20.477 02:54:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:34:20.735 02:54:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1
00:34:22.109 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false
00:34:22.109 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:34:22.109 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:22.109 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:34:22.109 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:34:22.109 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:34:22.109 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:22.109 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:34:22.367 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:34:22.367 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:34:22.367 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:22.367 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:34:22.625 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:34:22.625 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:34:22.625 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:22.625 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:34:22.884 02:54:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:34:22.884 02:54:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:34:22.884 02:54:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:22.884 02:54:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:34:23.142 02:54:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:34:23.142 02:54:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:34:23.142 02:54:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:23.142 02:54:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:34:23.400 02:54:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:34:23.400 02:54:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible
00:34:23.400 02:54:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
00:34:23.658 02:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:34:23.916 02:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1
00:34:24.998 02:54:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false
00:34:24.998 02:54:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:34:24.998 02:54:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:24.998 02:54:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:34:25.256 02:54:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:34:25.256 02:54:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:34:25.256 02:54:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:25.256 02:54:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:34:25.514 02:54:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:34:25.514 02:54:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:34:25.514 02:54:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:25.514 02:54:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:34:25.772 02:54:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:34:25.773 02:54:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:34:25.773 02:54:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:25.773 02:54:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:34:26.029 02:54:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:34:26.029 02:54:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false
00:34:26.029 02:54:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:26.029 02:54:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:34:26.288 02:54:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:34:26.288 02:54:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:34:26.288 02:54:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:26.288 02:54:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:34:26.546 02:54:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:34:26.546 02:54:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized
00:34:26.546 02:54:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
00:34:27.115 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:34:27.115 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1
00:34:28.490 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true
00:34:28.490 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:34:28.490 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:28.490 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:34:28.490 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:34:28.490 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:34:28.490 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:28.490 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:34:28.748 02:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:34:28.748 02:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:34:28.748 02:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:28.748 02:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:34:29.006 02:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:34:29.006 02:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:34:29.006 02:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:29.006 02:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:34:29.265 02:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:34:29.265 02:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false
00:34:29.265 02:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:29.265 02:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:34:29.523 02:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:34:29.523 02:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:34:29.523 02:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:29.523 02:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:34:29.781 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:34:29.781 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
00:34:30.346 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized
00:34:30.346 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized
00:34:30.346 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:34:30.913 02:54:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1
00:34:31.848 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true
00:34:31.848 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:34:31.848 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:31.848 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:34:32.107 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:34:32.107 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:34:32.107 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:32.107 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:34:32.365 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:34:32.365 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:34:32.365 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:32.365 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:34:32.623 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:34:32.623 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:34:32.623 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:32.623 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:34:32.882 02:54:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:34:32.882 02:54:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:34:32.882 02:54:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:32.882 02:54:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:34:33.141 02:54:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:34:33.141 02:54:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:34:33.141 02:54:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:33.141 02:54:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:34:33.399 02:54:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:34:33.399 02:54:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized
00:34:33.399 02:54:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:34:33.657 02:54:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:34:33.915 02:54:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1
00:34:35.292 02:54:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true
00:34:35.292 02:54:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:34:35.292 02:54:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:35.292 02:54:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:34:35.292 02:54:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:34:35.292 02:54:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:34:35.292 02:54:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:35.292 02:54:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:34:35.549 02:54:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:34:35.549 02:54:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:34:35.549 02:54:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:35.549 02:54:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:34:35.808 02:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:34:35.808 02:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:34:35.808 02:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:35.808 02:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:34:36.066 02:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:34:36.066 02:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:34:36.066 02:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:36.066 02:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:34:36.324 02:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:34:36.324 02:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:34:36.324 02:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:36.324 02:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:34:36.583 02:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:34:36.583 02:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized
00:34:36.583 02:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:34:36.840 02:54:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized
00:34:37.099 02:54:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1
00:34:38.473 02:54:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true
00:34:38.473 02:54:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:34:38.473 02:54:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:38.473 02:54:46
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:38.473 02:54:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:38.474 02:54:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:38.474 02:54:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:38.474 02:54:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:38.732 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:38.732 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:38.732 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:38.732 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:38.990 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:38.990 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:38.990 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:38.990 02:54:47 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:39.248 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:39.248 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:39.248 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:39.248 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:39.506 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:39.506 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:39.506 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:39.506 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:39.764 02:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:39.764 02:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:34:39.764 02:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:40.022 02:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:34:40.279 02:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:34:41.659 02:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:34:41.659 02:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:41.659 02:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:41.659 02:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:41.659 02:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:41.659 02:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:41.659 02:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:41.659 02:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:41.917 02:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:41.917 02:54:50 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:41.917 02:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:41.917 02:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:42.483 02:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:42.483 02:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:42.483 02:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:42.483 02:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:42.483 02:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:42.483 02:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:42.483 02:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:42.483 02:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:43.049 02:54:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:43.049 
02:54:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:34:43.049 02:54:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:43.049 02:54:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:43.049 02:54:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:43.049 02:54:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3104784 00:34:43.049 02:54:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 3104784 ']' 00:34:43.049 02:54:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 3104784 00:34:43.049 02:54:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:34:43.049 02:54:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:43.049 02:54:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3104784 00:34:43.308 02:54:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:34:43.308 02:54:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:34:43.308 02:54:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3104784' 00:34:43.308 killing process with pid 3104784 00:34:43.308 02:54:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 3104784 00:34:43.308 
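[Editorial note, not part of the captured log: the `port_status` checks above repeatedly pipe `bdev_nvme_get_io_paths` through jq filters of the form `.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible` and compare the result against `true`/`false`. As an illustration only, the same selection logic can be sketched in Python against a hypothetical miniature of that JSON shape; the field names follow the log, the values are made up.]

```python
import json

# Hypothetical miniature of bdev_nvme_get_io_paths output; the field
# names (poll_groups, io_paths, transport.trsvcid, current, connected,
# accessible) follow the jq filters in the log, the values are invented.
sample = json.loads("""
{
  "poll_groups": [
    {"io_paths": [
      {"transport": {"trsvcid": "4420"},
       "current": true, "connected": true, "accessible": true},
      {"transport": {"trsvcid": "4421"},
       "current": false, "connected": true, "accessible": false}
    ]}
  ]
}
""")

def port_status(data, trsvcid, field):
    # Mirrors the jq filter:
    #   .poll_groups[].io_paths[]
    #     | select(.transport.trsvcid==TRSVCID).FIELD
    for group in data["poll_groups"]:
        for path in group["io_paths"]:
            if path["transport"]["trsvcid"] == trsvcid:
                return path[field]
    return None

print(port_status(sample, "4420", "current"))     # True
print(port_status(sample, "4421", "accessible"))  # False
```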
02:54:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 3104784 00:34:43.308 { 00:34:43.308 "results": [ 00:34:43.308 { 00:34:43.308 "job": "Nvme0n1", 00:34:43.308 "core_mask": "0x4", 00:34:43.308 "workload": "verify", 00:34:43.308 "status": "terminated", 00:34:43.308 "verify_range": { 00:34:43.308 "start": 0, 00:34:43.308 "length": 16384 00:34:43.308 }, 00:34:43.308 "queue_depth": 128, 00:34:43.308 "io_size": 4096, 00:34:43.308 "runtime": 34.449471, 00:34:43.308 "iops": 5924.18385756925, 00:34:43.308 "mibps": 23.141343193629883, 00:34:43.308 "io_failed": 0, 00:34:43.308 "io_timeout": 0, 00:34:43.308 "avg_latency_us": 21571.328472715166, 00:34:43.308 "min_latency_us": 855.6088888888889, 00:34:43.308 "max_latency_us": 4026531.84 00:34:43.309 } 00:34:43.309 ], 00:34:43.309 "core_count": 1 00:34:43.309 } 00:34:44.252 02:54:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3104784 00:34:44.252 02:54:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:44.252 [2024-11-17 02:54:14.629053] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:34:44.252 [2024-11-17 02:54:14.629232] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3104784 ] 00:34:44.252 [2024-11-17 02:54:14.765989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:44.252 [2024-11-17 02:54:14.891242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:44.252 Running I/O for 90 seconds... 
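[Editorial note, not part of the captured log: the terminated bdevperf job summary above reports both `iops` and `mibps` for a 4096-byte `io_size`. The throughput figure is simply IOPS times I/O size converted to MiB/s, which can be checked against the logged values.]

```python
# Values copied from the "results" JSON in the log above.
iops = 5924.18385756925
io_size = 4096  # bytes, from the "io_size" field

# MiB/s = IOPS * bytes-per-IO / 2^20; with io_size = 4096 this is iops/256.
mibps = iops * io_size / (1 << 20)
print(round(mibps, 6))  # ~23.141343, matching "mibps" in the log
```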
00:34:44.252 6307.00 IOPS, 24.64 MiB/s [2024-11-17T01:54:52.712Z] 6373.00 IOPS, 24.89 MiB/s [2024-11-17T01:54:52.712Z] 6332.33 IOPS, 24.74 MiB/s [2024-11-17T01:54:52.712Z] 6308.75 IOPS, 24.64 MiB/s [2024-11-17T01:54:52.712Z] 6287.00 IOPS, 24.56 MiB/s [2024-11-17T01:54:52.712Z] 6292.33 IOPS, 24.58 MiB/s [2024-11-17T01:54:52.712Z] 6266.71 IOPS, 24.48 MiB/s [2024-11-17T01:54:52.712Z] 6249.62 IOPS, 24.41 MiB/s [2024-11-17T01:54:52.712Z] 6239.22 IOPS, 24.37 MiB/s [2024-11-17T01:54:52.712Z] 6248.60 IOPS, 24.41 MiB/s [2024-11-17T01:54:52.712Z] 6251.18 IOPS, 24.42 MiB/s [2024-11-17T01:54:52.712Z] 6255.83 IOPS, 24.44 MiB/s [2024-11-17T01:54:52.712Z] 6260.54 IOPS, 24.46 MiB/s [2024-11-17T01:54:52.712Z] 6252.14 IOPS, 24.42 MiB/s [2024-11-17T01:54:52.712Z] [2024-11-17 02:54:32.050827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:94800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.252 [2024-11-17 02:54:32.050930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:44.252 [2024-11-17 02:54:32.051051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:94808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.252 [2024-11-17 02:54:32.051104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:44.252 [2024-11-17 02:54:32.051172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:94816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.252 [2024-11-17 02:54:32.051200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:44.252 [2024-11-17 02:54:32.051253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:94824 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:34:44.252 [2024-11-17 02:54:32.051280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:44.252 [2024-11-17 02:54:32.051317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:94832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.252 [2024-11-17 02:54:32.051343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:44.252 [2024-11-17 02:54:32.051401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:94840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.252 [2024-11-17 02:54:32.051427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:44.252 [2024-11-17 02:54:32.051468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:94848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.252 [2024-11-17 02:54:32.051495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:44.252 [2024-11-17 02:54:32.051546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:94856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.252 [2024-11-17 02:54:32.051588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:44.253 [2024-11-17 02:54:32.051794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:94864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.253 [2024-11-17 02:54:32.051826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 
00:34:44.253 [2024-11-17 02:54:32.051887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:94400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.253 [2024-11-17 02:54:32.051930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:44.253 [2024-11-17 02:54:32.051973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:94408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.253 [2024-11-17 02:54:32.052000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:44.253 [2024-11-17 02:54:32.052053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:94416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.253 [2024-11-17 02:54:32.052079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:44.253 [2024-11-17 02:54:32.052143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:94872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.253 [2024-11-17 02:54:32.052169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:44.253 [2024-11-17 02:54:32.052204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:94880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.253 [2024-11-17 02:54:32.052230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:44.253 [2024-11-17 02:54:32.052267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:94888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.253 
[2024-11-17 02:54:32.052292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:44.253 [2024-11-17 02:54:32.052329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:94896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.253 [2024-11-17 02:54:32.052354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:44.253 [2024-11-17 02:54:32.052406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:94904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.253 [2024-11-17 02:54:32.052433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:44.253 [2024-11-17 02:54:32.052471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:94912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.253 [2024-11-17 02:54:32.052498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:44.253 [2024-11-17 02:54:32.052536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:94920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.253 [2024-11-17 02:54:32.052563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:44.253 [2024-11-17 02:54:32.052601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:94928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.253 [2024-11-17 02:54:32.052629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:44.253 [2024-11-17 
02:54:32.052667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:94936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.253 [2024-11-17 02:54:32.052708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:44.253 [2024-11-17 02:54:32.052760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:94944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.253 [2024-11-17 02:54:32.052791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:44.253 [2024-11-17 02:54:32.052829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:94952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.253 [2024-11-17 02:54:32.052855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:44.253 [2024-11-17 02:54:32.052891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:94960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.253 [2024-11-17 02:54:32.052917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:44.253 [2024-11-17 02:54:32.052953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:94968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.253 [2024-11-17 02:54:32.052978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:44.253 [2024-11-17 02:54:32.053014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:94976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.253 [2024-11-17 02:54:32.053038] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:44.253 [2024-11-17 02:54:32.053088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:94984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.253 [2024-11-17 02:54:32.053122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:44.253 [2024-11-17 02:54:32.053178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:94992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.253 [2024-11-17 02:54:32.053206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:44.253 [2024-11-17 02:54:32.053243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:95000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.253 [2024-11-17 02:54:32.053269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:44.253 [2024-11-17 02:54:32.053307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:95008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.253 [2024-11-17 02:54:32.053333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:44.253 [2024-11-17 02:54:32.053371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:95016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.253 [2024-11-17 02:54:32.053412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:44.253 [2024-11-17 02:54:32.053449] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:95024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.253 [2024-11-17 02:54:32.053490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:44.253 [2024-11-17 02:54:32.053527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:95032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.253 [2024-11-17 02:54:32.053551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:44.253 [2024-11-17 02:54:32.053586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:95040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.253 [2024-11-17 02:54:32.053611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:44.253 [2024-11-17 02:54:32.053651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:95048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.253 [2024-11-17 02:54:32.053676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:44.253 [2024-11-17 02:54:32.053818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:95056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.253 [2024-11-17 02:54:32.053848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:44.253 [2024-11-17 02:54:32.053891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:94424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.253 [2024-11-17 02:54:32.053916] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:44.253 [2024-11-17 02:54:32.053954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:94432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.253 [2024-11-17 02:54:32.053978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:44.253 [2024-11-17 02:54:32.054015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:94440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.253 [2024-11-17 02:54:32.054040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:44.253 [2024-11-17 02:54:32.054075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:94448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.253 [2024-11-17 02:54:32.054127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:44.253 [2024-11-17 02:54:32.054168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:94456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.254 [2024-11-17 02:54:32.054194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:44.254 [2024-11-17 02:54:32.054232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:94464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.254 [2024-11-17 02:54:32.054257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:44.254 [2024-11-17 02:54:32.054294] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:94472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.254 [2024-11-17 02:54:32.054319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:44.254 [2024-11-17 02:54:32.054356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:95064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.254 [2024-11-17 02:54:32.054382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:44.254 [2024-11-17 02:54:32.054419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:95072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.254 [2024-11-17 02:54:32.054459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:44.254 [2024-11-17 02:54:32.054496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:95080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.254 [2024-11-17 02:54:32.054521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:44.254 [2024-11-17 02:54:32.054563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:95088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.254 [2024-11-17 02:54:32.054588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:44.254 [2024-11-17 02:54:32.054626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:95096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.254 [2024-11-17 02:54:32.054650] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:44.254 [2024-11-17 02:54:32.054687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.254 [2024-11-17 02:54:32.054711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:44.254 [2024-11-17 02:54:32.054748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:95112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.254 [2024-11-17 02:54:32.054772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:44.254 [2024-11-17 02:54:32.055045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:95120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.254 [2024-11-17 02:54:32.055075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:44.254 [2024-11-17 02:54:32.055131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:95128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.254 [2024-11-17 02:54:32.055159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:44.254 [2024-11-17 02:54:32.055201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:95136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.254 [2024-11-17 02:54:32.055227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:44.254 [2024-11-17 02:54:32.055267] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:95144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.254 [2024-11-17 02:54:32.055309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:44.254 [2024-11-17 02:54:32.055349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:95152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.254 [2024-11-17 02:54:32.055390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:44.254 [2024-11-17 02:54:32.055429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:95160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.254 [2024-11-17 02:54:32.055469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:44.254 [2024-11-17 02:54:32.055509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:95168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.254 [2024-11-17 02:54:32.055552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:44.254 [2024-11-17 02:54:32.055596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:95176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.254 [2024-11-17 02:54:32.055623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:44.254 [2024-11-17 02:54:32.055664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:95184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.254 [2024-11-17 02:54:32.055695] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:44.254 [2024-11-17 02:54:32.055737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:95192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.254 [2024-11-17 02:54:32.055763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:44.254 [2024-11-17 02:54:32.055804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:95200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.254 [2024-11-17 02:54:32.055846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:44.254 [2024-11-17 02:54:32.055902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:95208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.254 [2024-11-17 02:54:32.055927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:44.254 [2024-11-17 02:54:32.055982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:95216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.254 [2024-11-17 02:54:32.056009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:44.254 [2024-11-17 02:54:32.056049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:95224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.254 [2024-11-17 02:54:32.056092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:44.254 [2024-11-17 02:54:32.056144] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:95232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.254 [2024-11-17 02:54:32.056171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:44.254 [2024-11-17 02:54:32.056212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:95240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.254 [2024-11-17 02:54:32.056238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:44.254 [2024-11-17 02:54:32.056293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:95248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.254 [2024-11-17 02:54:32.056319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:44.254 [2024-11-17 02:54:32.056358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:95256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.254 [2024-11-17 02:54:32.056399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:44.254 [2024-11-17 02:54:32.056438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:95264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.254 [2024-11-17 02:54:32.056463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:44.254 [2024-11-17 02:54:32.056501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:95272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.254 [2024-11-17 02:54:32.056526] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:44.254 [2024-11-17 02:54:32.056562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:95280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.254 [2024-11-17 02:54:32.056591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:44.254 [2024-11-17 02:54:32.056630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:95288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.254 [2024-11-17 02:54:32.056656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:44.254 [2024-11-17 02:54:32.056693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:95296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.255 [2024-11-17 02:54:32.056719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:44.255 [2024-11-17 02:54:32.056757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:95304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.255 [2024-11-17 02:54:32.056782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:44.255 [2024-11-17 02:54:32.056819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:95312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.255 [2024-11-17 02:54:32.056844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:44.255 [2024-11-17 02:54:32.056881] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:95320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.255 [2024-11-17 02:54:32.056906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:44.255 [2024-11-17 02:54:32.056943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:95328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.255 [2024-11-17 02:54:32.056968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:44.255 [2024-11-17 02:54:32.057006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:95336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.255 [2024-11-17 02:54:32.057031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:44.255 [2024-11-17 02:54:32.057068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:95344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.255 [2024-11-17 02:54:32.057118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:44.255 [2024-11-17 02:54:32.057161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:95352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.255 [2024-11-17 02:54:32.057189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:44.255 [2024-11-17 02:54:32.057229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:95360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.255 [2024-11-17 02:54:32.057255] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:44.255 [2024-11-17 02:54:32.057294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:94480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.255 [2024-11-17 02:54:32.057321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:44.255 [2024-11-17 02:54:32.057388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:94488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.255 [2024-11-17 02:54:32.057414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:44.255 [2024-11-17 02:54:32.057475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:94496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.255 [2024-11-17 02:54:32.057501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:44.255 [2024-11-17 02:54:32.057538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:94504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.255 [2024-11-17 02:54:32.057563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:44.255 [2024-11-17 02:54:32.057601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:94512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.255 [2024-11-17 02:54:32.057626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:44.255 [2024-11-17 02:54:32.057663] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:94520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.255 [2024-11-17 02:54:32.057688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:44.255 [2024-11-17 02:54:32.057724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:94528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.255 [2024-11-17 02:54:32.057749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:44.255 [2024-11-17 02:54:32.057786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:94536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.255 [2024-11-17 02:54:32.057810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:44.255 [2024-11-17 02:54:32.057848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:94544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.255 [2024-11-17 02:54:32.057873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:44.255 [2024-11-17 02:54:32.057910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:94552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.255 [2024-11-17 02:54:32.057935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:44.255 [2024-11-17 02:54:32.057971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:94560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.255 [2024-11-17 02:54:32.057996] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:44.255 [2024-11-17 02:54:32.058033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:94568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.255 [2024-11-17 02:54:32.058058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:44.255 [2024-11-17 02:54:32.058120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:94576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.255 [2024-11-17 02:54:32.058163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:44.255 [2024-11-17 02:54:32.058227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:94584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.255 [2024-11-17 02:54:32.058255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:44.255 [2024-11-17 02:54:32.058300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:94592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.255 [2024-11-17 02:54:32.058327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:44.255 [2024-11-17 02:54:32.058367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:94600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.255 [2024-11-17 02:54:32.058393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:44.255 [2024-11-17 02:54:32.058447] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:94608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.255 [2024-11-17 02:54:32.058488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:44.255 [2024-11-17 02:54:32.058526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:94616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.255 [2024-11-17 02:54:32.058553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:44.255 [2024-11-17 02:54:32.058591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:94624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.255 [2024-11-17 02:54:32.058616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:44.255 [2024-11-17 02:54:32.058654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:94632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.255 [2024-11-17 02:54:32.058680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:44.255 [2024-11-17 02:54:32.059004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:94640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.255 [2024-11-17 02:54:32.059036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:44.255 [2024-11-17 02:54:32.059086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:94648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.255 [2024-11-17 02:54:32.059124] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:44.255 [2024-11-17 02:54:32.059171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:94656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.255 [2024-11-17 02:54:32.059197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:44.255 [2024-11-17 02:54:32.059257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:94664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.255 [2024-11-17 02:54:32.059283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:44.255 [2024-11-17 02:54:32.059325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:94672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.255 [2024-11-17 02:54:32.059351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:44.255 [2024-11-17 02:54:32.059409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:94680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.255 [2024-11-17 02:54:32.059435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:44.255 [2024-11-17 02:54:32.059480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:94688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.256 [2024-11-17 02:54:32.059506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:44.256 [2024-11-17 02:54:32.059547] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:94696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.256 [2024-11-17 02:54:32.059574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:44.256 [2024-11-17 02:54:32.059614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:94704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.256 [2024-11-17 02:54:32.059639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:44.256 [2024-11-17 02:54:32.059679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:94712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.256 [2024-11-17 02:54:32.059704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:44.256 [2024-11-17 02:54:32.059744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:94720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.256 [2024-11-17 02:54:32.059769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:44.256 [2024-11-17 02:54:32.059809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:94728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.256 [2024-11-17 02:54:32.059834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:44.256 [2024-11-17 02:54:32.059875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:94736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.256 [2024-11-17 02:54:32.059900] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:44.256 [2024-11-17 02:54:32.059940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:94744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.256 [2024-11-17 02:54:32.059965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:44.256 [2024-11-17 02:54:32.060005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:94752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.256 [2024-11-17 02:54:32.060030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:44.256 [2024-11-17 02:54:32.060071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:94760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.256 [2024-11-17 02:54:32.060130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:44.256 [2024-11-17 02:54:32.060175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:94768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.256 [2024-11-17 02:54:32.060201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:44.256 [2024-11-17 02:54:32.060243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:94776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.256 [2024-11-17 02:54:32.060269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:44.256 [2024-11-17 02:54:32.060312] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:94784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.256 [2024-11-17 02:54:32.060346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:44.256 [2024-11-17 02:54:32.060404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:94792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.256 [2024-11-17 02:54:32.060431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:44.256 [2024-11-17 02:54:32.060472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:95368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.256 [2024-11-17 02:54:32.060497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:44.256 [2024-11-17 02:54:32.060538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:95376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.256 [2024-11-17 02:54:32.060563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:44.256 [2024-11-17 02:54:32.060603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:95384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.256 [2024-11-17 02:54:32.060627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:44.256 [2024-11-17 02:54:32.060668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:95392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.256 [2024-11-17 02:54:32.060692] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:44.256 [2024-11-17 02:54:32.060732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:95400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.256 [2024-11-17 02:54:32.060757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:44.256 [2024-11-17 02:54:32.060797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:95408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.256 [2024-11-17 02:54:32.060822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:44.256 [2024-11-17 02:54:32.060864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:95416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.256 [2024-11-17 02:54:32.060901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:44.256 6248.00 IOPS, 24.41 MiB/s [2024-11-17T01:54:52.716Z] 5857.50 IOPS, 22.88 MiB/s [2024-11-17T01:54:52.716Z] 5512.94 IOPS, 21.53 MiB/s [2024-11-17T01:54:52.716Z] 5206.67 IOPS, 20.34 MiB/s [2024-11-17T01:54:52.716Z] 4934.47 IOPS, 19.28 MiB/s [2024-11-17T01:54:52.716Z] 5002.10 IOPS, 19.54 MiB/s [2024-11-17T01:54:52.716Z] 5058.52 IOPS, 19.76 MiB/s [2024-11-17T01:54:52.716Z] 5133.68 IOPS, 20.05 MiB/s [2024-11-17T01:54:52.716Z] 5286.65 IOPS, 20.65 MiB/s [2024-11-17T01:54:52.716Z] 5421.17 IOPS, 21.18 MiB/s [2024-11-17T01:54:52.716Z] 5541.00 IOPS, 21.64 MiB/s [2024-11-17T01:54:52.716Z] 5561.73 IOPS, 21.73 MiB/s [2024-11-17T01:54:52.716Z] 5589.67 IOPS, 21.83 MiB/s [2024-11-17T01:54:52.716Z] 5607.61 IOPS, 21.90 MiB/s [2024-11-17T01:54:52.716Z] 5676.21 IOPS, 22.17 MiB/s 
[2024-11-17T01:54:52.716Z] 5774.97 IOPS, 22.56 MiB/s [2024-11-17T01:54:52.716Z] 5859.45 IOPS, 22.89 MiB/s [2024-11-17T01:54:52.716Z] [2024-11-17 02:54:48.721492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:52576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.256 [2024-11-17 02:54:48.721572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:44.256 [2024-11-17 02:54:48.721628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:52592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.256 [2024-11-17 02:54:48.721656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:44.256 [2024-11-17 02:54:48.721708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:52608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.256 [2024-11-17 02:54:48.721737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:44.256 [2024-11-17 02:54:48.721791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:52624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.256 [2024-11-17 02:54:48.721818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:44.256 [2024-11-17 02:54:48.721855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:52640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.256 [2024-11-17 02:54:48.721882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:44.256 [2024-11-17 02:54:48.721921] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:52656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.256 [2024-11-17 02:54:48.721948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:44.256 [2024-11-17 02:54:48.721986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:52672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.256 [2024-11-17 02:54:48.722012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:44.256 [2024-11-17 02:54:48.722048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:52688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.256 [2024-11-17 02:54:48.722074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:44.256 [2024-11-17 02:54:48.722163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:52704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.256 [2024-11-17 02:54:48.722190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:44.256 [2024-11-17 02:54:48.722227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:52720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.256 [2024-11-17 02:54:48.722253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:44.256 [2024-11-17 02:54:48.722290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:52736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.256 [2024-11-17 02:54:48.722317] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:44.256 [2024-11-17 02:54:48.722354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:52752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.256 [2024-11-17 02:54:48.722379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:44.256 [2024-11-17 02:54:48.722420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:52768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.257 [2024-11-17 02:54:48.722462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:44.257 [2024-11-17 02:54:48.722499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:52784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.257 [2024-11-17 02:54:48.722524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:44.257 [2024-11-17 02:54:48.722564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:52800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.257 [2024-11-17 02:54:48.722590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:44.257 [2024-11-17 02:54:48.722626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:52264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.257 [2024-11-17 02:54:48.722667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:44.257 [2024-11-17 02:54:48.722705] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:52296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.257 [2024-11-17 02:54:48.722731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:44.257 [2024-11-17 02:54:48.722768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:52328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.257 [2024-11-17 02:54:48.722794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:44.257 [2024-11-17 02:54:48.722830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:52360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.257 [2024-11-17 02:54:48.722856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:44.257 [2024-11-17 02:54:48.722892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:52392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.257 [2024-11-17 02:54:48.722919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:44.257 [2024-11-17 02:54:48.722955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:52424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.257 [2024-11-17 02:54:48.722981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:44.257 [2024-11-17 02:54:48.723018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:52824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.257 [2024-11-17 02:54:48.723044] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:44.257 [2024-11-17 02:54:48.723081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:52840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.257 [2024-11-17 02:54:48.723117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:44.257 [2024-11-17 02:54:48.723167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:52856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.257 [2024-11-17 02:54:48.723193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:44.257 [2024-11-17 02:54:48.723230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:52872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.257 [2024-11-17 02:54:48.723256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:44.257 [2024-11-17 02:54:48.723293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:52888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.257 [2024-11-17 02:54:48.723319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:44.257 [2024-11-17 02:54:48.723355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:52904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.257 [2024-11-17 02:54:48.723401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:44.257 [2024-11-17 02:54:48.723440] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:52920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.257 [2024-11-17 02:54:48.723481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:44.257 [2024-11-17 02:54:48.723529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:52936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.257 [2024-11-17 02:54:48.723554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:44.257 [2024-11-17 02:54:48.723590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:52952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.257 [2024-11-17 02:54:48.723615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:44.257 [2024-11-17 02:54:48.723650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:52968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.257 [2024-11-17 02:54:48.723675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:44.257 [2024-11-17 02:54:48.723721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:52984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.257 [2024-11-17 02:54:48.723746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:44.257 [2024-11-17 02:54:48.723782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:53000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.257 [2024-11-17 02:54:48.723807] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:44.257 [2024-11-17 02:54:48.723843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:53016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.257 [2024-11-17 02:54:48.723868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:44.257 [2024-11-17 02:54:48.723915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:53032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.257 [2024-11-17 02:54:48.723941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:44.257 [2024-11-17 02:54:48.725772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:53048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.257 [2024-11-17 02:54:48.725808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:44.257 [2024-11-17 02:54:48.725852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:53064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.257 [2024-11-17 02:54:48.725879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:44.257 [2024-11-17 02:54:48.725916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:53080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.257 [2024-11-17 02:54:48.725942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:44.257 [2024-11-17 02:54:48.725978] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:53096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.257 [2024-11-17 02:54:48.726008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:44.257 [2024-11-17 02:54:48.726047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:53112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.257 [2024-11-17 02:54:48.726088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:44.257 [2024-11-17 02:54:48.726149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:53128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.257 [2024-11-17 02:54:48.726176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:44.258 [2024-11-17 02:54:48.726213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:53144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.258 [2024-11-17 02:54:48.726239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:44.258 [2024-11-17 02:54:48.726276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:53160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.258 [2024-11-17 02:54:48.726301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:44.258 [2024-11-17 02:54:48.726339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:53176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.258 [2024-11-17 02:54:48.726364] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:44.258 [2024-11-17 02:54:48.726425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:53192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.258 [2024-11-17 02:54:48.726451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:44.258 [2024-11-17 02:54:48.726486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:53208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.258 [2024-11-17 02:54:48.726511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:44.258 [2024-11-17 02:54:48.726563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:53224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.258 [2024-11-17 02:54:48.726589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:44.258 [2024-11-17 02:54:48.726647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:52272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.258 [2024-11-17 02:54:48.726674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:44.258 [2024-11-17 02:54:48.726711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:52304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.258 [2024-11-17 02:54:48.726738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:44.258 [2024-11-17 02:54:48.726775] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:52336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.258 [2024-11-17 02:54:48.726801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:44.258 [2024-11-17 02:54:48.726838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:52368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.258 [2024-11-17 02:54:48.726864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:44.258 [2024-11-17 02:54:48.726907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:52400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.258 [2024-11-17 02:54:48.726934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:44.258 [2024-11-17 02:54:48.726971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:52432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.258 [2024-11-17 02:54:48.726998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:44.258 [2024-11-17 02:54:48.727034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:53248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.258 [2024-11-17 02:54:48.727077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:44.258 [2024-11-17 02:54:48.727149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:53264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.258 [2024-11-17 02:54:48.727177] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:44.258 [2024-11-17 02:54:48.727214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:53280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.258 [2024-11-17 02:54:48.727240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:44.258 [2024-11-17 02:54:48.727278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:52464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.258 [2024-11-17 02:54:48.727304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:44.258 [2024-11-17 02:54:48.727341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:52496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.258 [2024-11-17 02:54:48.727366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:44.258 [2024-11-17 02:54:48.727410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:52528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.258 [2024-11-17 02:54:48.727451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:44.258 [2024-11-17 02:54:48.727498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:52560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.258 [2024-11-17 02:54:48.727523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:44.258 [2024-11-17 02:54:48.728685] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:52456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.258 [2024-11-17 02:54:48.728720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:44.258 [2024-11-17 02:54:48.728764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:52488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.258 [2024-11-17 02:54:48.728790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:44.258 [2024-11-17 02:54:48.728826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:52520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.258 [2024-11-17 02:54:48.728851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:44.258 [2024-11-17 02:54:48.728894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:52552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.258 [2024-11-17 02:54:48.728920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:44.258 [2024-11-17 02:54:48.728955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:52584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.258 [2024-11-17 02:54:48.728981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:44.258 [2024-11-17 02:54:48.729016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:52616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.258 [2024-11-17 02:54:48.729042] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:44.258 [2024-11-17 02:54:48.729103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:52648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.258 [2024-11-17 02:54:48.729131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:44.258 [2024-11-17 02:54:48.729177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:52680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.258 [2024-11-17 02:54:48.729203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:44.258 [2024-11-17 02:54:48.729239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:52712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.258 [2024-11-17 02:54:48.729265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:44.258 [2024-11-17 02:54:48.729301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:52744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.258 [2024-11-17 02:54:48.729327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:44.258 [2024-11-17 02:54:48.729363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:52776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.258 [2024-11-17 02:54:48.729396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:44.258 [2024-11-17 02:54:48.729432] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:52808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.258 [2024-11-17 02:54:48.729473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:44.258 [2024-11-17 02:54:48.729509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:52832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.258 [2024-11-17 02:54:48.729534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:44.258 [2024-11-17 02:54:48.729587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:52864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.258 [2024-11-17 02:54:48.729614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:44.258 [2024-11-17 02:54:48.729652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:52896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.258 [2024-11-17 02:54:48.729678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:44.258 [2024-11-17 02:54:48.729714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:52928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.258 [2024-11-17 02:54:48.729745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:44.259 [2024-11-17 02:54:48.729784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:52592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.259 [2024-11-17 02:54:48.729811] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:44.259 [2024-11-17 02:54:48.729847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:52624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.259 [2024-11-17 02:54:48.729873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:44.259 [2024-11-17 02:54:48.729909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:52656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.259 [2024-11-17 02:54:48.729935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:44.259 [2024-11-17 02:54:48.729971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:52688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.259 [2024-11-17 02:54:48.730012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:44.259 [2024-11-17 02:54:48.730049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:52720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.259 [2024-11-17 02:54:48.730091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:44.259 [2024-11-17 02:54:48.730149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:52752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.259 [2024-11-17 02:54:48.730176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:44.259 [2024-11-17 02:54:48.730212] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:52784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.259 [2024-11-17 02:54:48.730238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:44.259 [2024-11-17 02:54:48.730274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:52264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.259 [2024-11-17 02:54:48.730299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:44.259 [2024-11-17 02:54:48.730335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:52328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.259 [2024-11-17 02:54:48.730362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:44.259 [2024-11-17 02:54:48.730408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:52392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.259 [2024-11-17 02:54:48.730433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:44.259 [2024-11-17 02:54:48.730480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:52824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.259 [2024-11-17 02:54:48.730506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:44.259 [2024-11-17 02:54:48.730542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:52856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.259 [2024-11-17 02:54:48.730573] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:44.259 [2024-11-17 02:54:48.730610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:52888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.259 [2024-11-17 02:54:48.730637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:44.259 [2024-11-17 02:54:48.730673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:52920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.259 [2024-11-17 02:54:48.730716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:44.259 [2024-11-17 02:54:48.730752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:52952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.259 [2024-11-17 02:54:48.730777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:44.259 [2024-11-17 02:54:48.730813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:52984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.259 [2024-11-17 02:54:48.730838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:44.259 [2024-11-17 02:54:48.730875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:53016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.259 [2024-11-17 02:54:48.730900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:44.259 [2024-11-17 02:54:48.731607] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:53288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:44.259 [2024-11-17 02:54:48.731642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:34:44.259 [2024-11-17 02:54:48.731696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:53304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:44.259 [2024-11-17 02:54:48.731724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:34:44.259 [2024-11-17 02:54:48.731772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:53320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:44.259 [2024-11-17 02:54:48.731797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:34:44.259 [2024-11-17 02:54:48.731833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:52960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:44.259 [2024-11-17 02:54:48.731859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:34:44.259 [2024-11-17 02:54:48.731897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:52992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:44.259 [2024-11-17 02:54:48.731923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:34:44.259 [2024-11-17 02:54:48.731959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:53024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:44.259 [2024-11-17 02:54:48.731985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:34:44.259 [2024-11-17 02:54:48.732022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:53056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:44.259 [2024-11-17 02:54:48.732048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:34:44.259 [2024-11-17 02:54:48.732090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:53088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:44.259 [2024-11-17 02:54:48.732125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:34:44.259 [2024-11-17 02:54:48.732171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:53120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:44.259 [2024-11-17 02:54:48.732197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:34:44.259 [2024-11-17 02:54:48.732233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:53152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:44.259 [2024-11-17 02:54:48.732258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:34:44.259 [2024-11-17 02:54:48.732295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:53184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:44.259 [2024-11-17 02:54:48.732320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:34:44.259 [2024-11-17 02:54:48.732357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:53216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:44.259 [2024-11-17 02:54:48.732382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:34:44.259 [2024-11-17 02:54:48.732445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:53240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:44.259 [2024-11-17 02:54:48.732470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:34:44.259 [2024-11-17 02:54:48.732522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:53272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:44.259 [2024-11-17 02:54:48.732548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:34:44.259 [2024-11-17 02:54:48.732584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:53064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:44.259 [2024-11-17 02:54:48.732610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:34:44.259 [2024-11-17 02:54:48.732646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:53096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:44.259 [2024-11-17 02:54:48.732672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:34:44.259 [2024-11-17 02:54:48.732707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:53128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:44.259 [2024-11-17 02:54:48.732733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:34:44.259 [2024-11-17 02:54:48.732769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:53160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:44.259 [2024-11-17 02:54:48.732795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:34:44.259 [2024-11-17 02:54:48.732856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:53192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:44.260 [2024-11-17 02:54:48.732882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:44.260 [2024-11-17 02:54:48.732924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:53224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:44.260 [2024-11-17 02:54:48.732951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:44.260 [2024-11-17 02:54:48.732988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:52304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:44.260 [2024-11-17 02:54:48.733014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:34:44.260 [2024-11-17 02:54:48.733051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:52368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:44.260 [2024-11-17 02:54:48.733077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:34:44.260 [2024-11-17 02:54:48.733132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:52432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:44.260 [2024-11-17 02:54:48.733161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:34:44.260 [2024-11-17 02:54:48.733197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:53264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:44.260 [2024-11-17 02:54:48.733223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:34:44.260 [2024-11-17 02:54:48.733259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:52464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:44.260 [2024-11-17 02:54:48.733284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:34:44.260 [2024-11-17 02:54:48.733321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:52528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:44.260 [2024-11-17 02:54:48.733348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:34:44.260 [2024-11-17 02:54:48.735653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:53328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:44.260 [2024-11-17 02:54:48.735689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:34:44.260 [2024-11-17 02:54:48.735733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:53344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:44.260 [2024-11-17 02:54:48.735761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:34:44.260 [2024-11-17 02:54:48.735799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:53360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:44.260 [2024-11-17 02:54:48.735825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:34:44.260 [2024-11-17 02:54:48.735862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:53376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:44.260 [2024-11-17 02:54:48.735889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:34:44.260 [2024-11-17 02:54:48.735926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:53392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:44.260 [2024-11-17 02:54:48.735953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:34:44.260 [2024-11-17 02:54:48.735989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:52488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:44.260 [2024-11-17 02:54:48.736021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:34:44.260 [2024-11-17 02:54:48.736060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:52552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:44.260 [2024-11-17 02:54:48.736087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:34:44.260 [2024-11-17 02:54:48.736136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:52616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:44.260 [2024-11-17 02:54:48.736172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:34:44.260 [2024-11-17 02:54:48.736214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:52680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:44.260 [2024-11-17 02:54:48.736242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:34:44.260 [2024-11-17 02:54:48.736283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:52744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:44.260 [2024-11-17 02:54:48.736311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:34:44.260 [2024-11-17 02:54:48.736349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:52808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:44.260 [2024-11-17 02:54:48.736374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:34:44.260 [2024-11-17 02:54:48.736416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:52864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:44.260 [2024-11-17 02:54:48.736453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:34:44.260 [2024-11-17 02:54:48.736490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:52928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:44.260 [2024-11-17 02:54:48.736515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:34:44.260 [2024-11-17 02:54:48.736552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:52624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:44.260 [2024-11-17 02:54:48.736578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:34:44.260 [2024-11-17 02:54:48.736615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:52688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:44.260 [2024-11-17 02:54:48.736641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:34:44.260 [2024-11-17 02:54:48.736677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:52752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:44.260 [2024-11-17 02:54:48.736703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:34:44.260 [2024-11-17 02:54:48.736740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:52264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:44.260 [2024-11-17 02:54:48.736781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:34:44.260 [2024-11-17 02:54:48.736828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:52392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:44.260 [2024-11-17 02:54:48.736859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:34:44.260 [2024-11-17 02:54:48.736896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:52856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:44.260 [2024-11-17 02:54:48.736921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:34:44.260 [2024-11-17 02:54:48.736956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:52920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:44.260 [2024-11-17 02:54:48.736981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:34:44.260 [2024-11-17 02:54:48.737017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:52984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:44.260 [2024-11-17 02:54:48.737042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:34:44.260 [2024-11-17 02:54:48.737093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:53304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:44.260 [2024-11-17 02:54:48.737131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:34:44.260 [2024-11-17 02:54:48.737178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:52960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:44.260 [2024-11-17 02:54:48.737206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:34:44.260 [2024-11-17 02:54:48.737242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:53024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:44.260 [2024-11-17 02:54:48.737268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:34:44.260 [2024-11-17 02:54:48.737304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:53088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:44.260 [2024-11-17 02:54:48.737330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:44.260 [2024-11-17 02:54:48.737367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:53152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:44.260 [2024-11-17 02:54:48.737400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:44.260 [2024-11-17 02:54:48.737436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:53216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:44.261 [2024-11-17 02:54:48.737475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:34:44.261 [2024-11-17 02:54:48.737515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:53272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:44.261 [2024-11-17 02:54:48.737545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:34:44.261 [2024-11-17 02:54:48.737586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:53096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:44.261 [2024-11-17 02:54:48.737613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:34:44.261 [2024-11-17 02:54:48.737649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:53160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:44.261 [2024-11-17 02:54:48.737675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:34:44.261 [2024-11-17 02:54:48.737717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:53224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:44.261 [2024-11-17 02:54:48.737745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:34:44.261 [2024-11-17 02:54:48.737781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:52368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:44.261 [2024-11-17 02:54:48.737808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:34:44.261 [2024-11-17 02:54:48.737844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:53264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:44.261 [2024-11-17 02:54:48.737870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:34:44.261 [2024-11-17 02:54:48.737907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:52528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:44.261 [2024-11-17 02:54:48.737937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:34:44.261 [2024-11-17 02:54:48.741433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:52608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:44.261 [2024-11-17 02:54:48.741472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:34:44.261 [2024-11-17 02:54:48.741530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:52672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:44.261 [2024-11-17 02:54:48.741560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:34:44.261 [2024-11-17 02:54:48.741599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:52736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:44.261 [2024-11-17 02:54:48.741626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:34:44.261 [2024-11-17 02:54:48.741664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:52800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:44.261 [2024-11-17 02:54:48.741692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:34:44.261 [2024-11-17 02:54:48.741737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:52872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:44.261 [2024-11-17 02:54:48.741768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:34:44.261 [2024-11-17 02:54:48.741808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:52936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:44.261 [2024-11-17 02:54:48.741835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:34:44.261 [2024-11-17 02:54:48.741873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:53000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:44.261 [2024-11-17 02:54:48.741899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:34:44.261 [2024-11-17 02:54:48.741945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:53400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:44.261 [2024-11-17 02:54:48.741972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:34:44.261 [2024-11-17 02:54:48.742031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:53416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:44.261 [2024-11-17 02:54:48.742058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:34:44.261 [2024-11-17 02:54:48.742123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:53432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:44.261 [2024-11-17 02:54:48.742161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:34:44.261 [2024-11-17 02:54:48.742199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:53448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:44.261 [2024-11-17 02:54:48.742225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:34:44.261 [2024-11-17 02:54:48.742261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:53464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:44.261 [2024-11-17 02:54:48.742287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:34:44.261 [2024-11-17 02:54:48.742324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:53480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:44.261 [2024-11-17 02:54:48.742350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:34:44.261 [2024-11-17 02:54:48.742412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:53496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:44.261 [2024-11-17 02:54:48.742438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:34:44.261 [2024-11-17 02:54:48.742493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:53512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:44.261 [2024-11-17 02:54:48.742519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:34:44.261 [2024-11-17 02:54:48.742556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:53528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:44.261 [2024-11-17 02:54:48.742583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:34:44.261 [2024-11-17 02:54:48.742619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:53544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:44.261 [2024-11-17 02:54:48.742645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:34:44.261 [2024-11-17 02:54:48.742683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:53312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:44.261 [2024-11-17 02:54:48.742710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:34:44.261 [2024-11-17 02:54:48.742761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:53344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:44.261 [2024-11-17 02:54:48.742787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:34:44.261 [2024-11-17 02:54:48.742823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:53376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:44.261 [2024-11-17 02:54:48.742849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:34:44.261 [2024-11-17 02:54:48.742885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:52488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:44.261 [2024-11-17 02:54:48.742933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:34:44.261 [2024-11-17 02:54:48.742973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:52616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:44.261 [2024-11-17 02:54:48.743000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:34:44.261 [2024-11-17 02:54:48.743055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:52744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:44.261 [2024-11-17 02:54:48.743082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:44.261 [2024-11-17 02:54:48.743129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:52864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:44.261 [2024-11-17 02:54:48.743162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:44.261 [2024-11-17 02:54:48.743208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:52624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:44.261 [2024-11-17 02:54:48.743235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:34:44.261 [2024-11-17 02:54:48.743271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:52752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:44.261 [2024-11-17 02:54:48.743297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:34:44.261 [2024-11-17 02:54:48.743334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:52392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:44.261 [2024-11-17 02:54:48.743360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:34:44.261 [2024-11-17 02:54:48.743412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:52920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:44.261 [2024-11-17 02:54:48.743439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:34:44.262 [2024-11-17 02:54:48.743500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:53304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:44.262 [2024-11-17 02:54:48.743524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:34:44.262 [2024-11-17 02:54:48.743559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:53024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:44.262 [2024-11-17 02:54:48.743583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:34:44.262 [2024-11-17 02:54:48.743617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:53152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:44.262 [2024-11-17 02:54:48.743642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:34:44.262 [2024-11-17 02:54:48.743677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:53272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:44.262 [2024-11-17 02:54:48.743701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:34:44.262 [2024-11-17 02:54:48.743735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:53160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:44.262 [2024-11-17 02:54:48.743764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:34:44.262 [2024-11-17 02:54:48.743801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:52368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:44.262 [2024-11-17 02:54:48.743826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:34:44.262 [2024-11-17 02:54:48.743860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:52528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:44.262 [2024-11-17 02:54:48.743885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:34:44.262 [2024-11-17 02:54:48.743919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:53080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:44.262 [2024-11-17 02:54:48.743944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:34:44.262 [2024-11-17 02:54:48.743978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:53144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:44.262 [2024-11-17 02:54:48.744003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:34:44.262 [2024-11-17 02:54:48.744038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:53208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:44.262 [2024-11-17 02:54:48.744063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:34:44.262 [2024-11-17 02:54:48.745597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:53280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:44.262 [2024-11-17 02:54:48.745631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:34:44.262 [2024-11-17 02:54:48.745674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:53560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:44.262 [2024-11-17 02:54:48.745700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:34:44.262 [2024-11-17 02:54:48.745736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:53576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:44.262 [2024-11-17 02:54:48.745776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:34:44.262 [2024-11-17 02:54:48.745815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:53592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:44.262 [2024-11-17 02:54:48.745851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:34:44.262 [2024-11-17 02:54:48.745887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:53608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:44.262 [2024-11-17 02:54:48.745914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:34:44.262 [2024-11-17 02:54:48.745951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:53624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:44.262 [2024-11-17 02:54:48.745977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:34:44.262 [2024-11-17 02:54:48.746014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:53640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:44.262 [2024-11-17 02:54:48.746041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:34:44.262 [2024-11-17 02:54:48.746108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:53656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:44.262 [2024-11-17 02:54:48.746162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:34:44.262 [2024-11-17 02:54:48.746201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:53672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:44.262 [2024-11-17 02:54:48.746227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:34:44.262 [2024-11-17 02:54:48.746264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:53352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:44.262 [2024-11-17 02:54:48.746290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:34:44.262 [2024-11-17 02:54:48.746327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:53384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:44.262 [2024-11-17 02:54:48.746353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:34:44.262 [2024-11-17 02:54:48.747226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:52656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:44.262 [2024-11-17 02:54:48.747261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:34:44.262 [2024-11-17 02:54:48.747305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:52784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:44.262 [2024-11-17 02:54:48.747332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:34:44.262 [2024-11-17 02:54:48.747369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:52888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:44.262 [2024-11-17 02:54:48.747406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:34:44.262 [2024-11-17 02:54:48.747442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:53016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:44.263 [2024-11-17 02:54:48.747468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:34:44.263 [2024-11-17 02:54:48.747504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:53688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:44.263 [2024-11-17 02:54:48.747531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:44.263 [2024-11-17 02:54:48.747567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:53704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:44.263 [2024-11-17 02:54:48.747593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:44.263 [2024-11-17 02:54:48.747630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:53720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:44.263 [2024-11-17 02:54:48.747657] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:44.263 [2024-11-17 02:54:48.747693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:53736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.263 [2024-11-17 02:54:48.747735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:44.263 [2024-11-17 02:54:48.747778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:53288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.263 [2024-11-17 02:54:48.747804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:44.263 [2024-11-17 02:54:48.747841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:52672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.263 [2024-11-17 02:54:48.747866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:44.263 [2024-11-17 02:54:48.747902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:52800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.263 [2024-11-17 02:54:48.747927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:44.263 [2024-11-17 02:54:48.747973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:52936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.263 [2024-11-17 02:54:48.747998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:44.263 [2024-11-17 02:54:48.748034] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:53400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.263 [2024-11-17 02:54:48.748059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:44.263 [2024-11-17 02:54:48.748130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:53432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.263 [2024-11-17 02:54:48.748157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:44.263 [2024-11-17 02:54:48.748194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:53464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.263 [2024-11-17 02:54:48.748219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:44.263 [2024-11-17 02:54:48.748256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:53496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.263 [2024-11-17 02:54:48.748281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:44.263 [2024-11-17 02:54:48.748317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:53528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.263 [2024-11-17 02:54:48.748343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:44.263 [2024-11-17 02:54:48.748380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:53312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.263 [2024-11-17 02:54:48.748405] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:44.263 [2024-11-17 02:54:48.748449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:53376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.263 [2024-11-17 02:54:48.748475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:44.263 [2024-11-17 02:54:48.748510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:52616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.263 [2024-11-17 02:54:48.748536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:44.263 [2024-11-17 02:54:48.748573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:52864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.263 [2024-11-17 02:54:48.748603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:44.263 [2024-11-17 02:54:48.748641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:52752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.263 [2024-11-17 02:54:48.748667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:44.263 [2024-11-17 02:54:48.748714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:52920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.263 [2024-11-17 02:54:48.748740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:44.263 [2024-11-17 02:54:48.748788] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:53024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.263 [2024-11-17 02:54:48.748814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:44.263 [2024-11-17 02:54:48.748861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:53272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.263 [2024-11-17 02:54:48.748887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:44.263 [2024-11-17 02:54:48.748924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:52368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.263 [2024-11-17 02:54:48.748950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:44.263 [2024-11-17 02:54:48.748987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:53080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.263 [2024-11-17 02:54:48.749013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:44.263 [2024-11-17 02:54:48.749065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:53208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.263 [2024-11-17 02:54:48.749118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:44.263 [2024-11-17 02:54:48.750506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:53128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.263 [2024-11-17 02:54:48.750540] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:44.263 [2024-11-17 02:54:48.750582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:53752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.263 [2024-11-17 02:54:48.750608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:44.263 [2024-11-17 02:54:48.750644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:53768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.263 [2024-11-17 02:54:48.750669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:44.263 [2024-11-17 02:54:48.750703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:53784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.263 [2024-11-17 02:54:48.750745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:44.263 [2024-11-17 02:54:48.750783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:53800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.263 [2024-11-17 02:54:48.750814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:44.263 [2024-11-17 02:54:48.750853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:53560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.263 [2024-11-17 02:54:48.750880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:44.263 [2024-11-17 02:54:48.750916] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:53592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.263 [2024-11-17 02:54:48.750942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:44.263 [2024-11-17 02:54:48.750977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:53624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.263 [2024-11-17 02:54:48.751004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:44.263 [2024-11-17 02:54:48.751040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:53656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.263 [2024-11-17 02:54:48.751066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:44.264 [2024-11-17 02:54:48.751129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:53352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.264 [2024-11-17 02:54:48.751165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:44.264 [2024-11-17 02:54:48.752736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:53408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.264 [2024-11-17 02:54:48.752770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:44.264 [2024-11-17 02:54:48.752818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:53440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.264 [2024-11-17 02:54:48.752846] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:44.264 [2024-11-17 02:54:48.752883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:53472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.264 [2024-11-17 02:54:48.752909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:44.264 [2024-11-17 02:54:48.752945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:53504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.264 [2024-11-17 02:54:48.752972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:44.264 [2024-11-17 02:54:48.753009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:53536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.264 [2024-11-17 02:54:48.753060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:44.264 [2024-11-17 02:54:48.753122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:53360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.264 [2024-11-17 02:54:48.753160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:44.264 [2024-11-17 02:54:48.753199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:52784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.264 [2024-11-17 02:54:48.753231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:44.264 [2024-11-17 02:54:48.753270] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:53016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.264 [2024-11-17 02:54:48.753296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:44.264 [2024-11-17 02:54:48.753333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:53704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.264 [2024-11-17 02:54:48.753359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:44.264 [2024-11-17 02:54:48.753422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:53736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.264 [2024-11-17 02:54:48.753463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:44.264 [2024-11-17 02:54:48.753499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:52672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.264 [2024-11-17 02:54:48.753524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:44.264 [2024-11-17 02:54:48.753558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:52936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.264 [2024-11-17 02:54:48.753584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:44.264 [2024-11-17 02:54:48.753619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:53432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.264 [2024-11-17 02:54:48.753643] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:44.264 [2024-11-17 02:54:48.753678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:53496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.264 [2024-11-17 02:54:48.753702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:44.264 [2024-11-17 02:54:48.753738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:53312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.264 [2024-11-17 02:54:48.753762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:44.264 [2024-11-17 02:54:48.753797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:52616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.264 [2024-11-17 02:54:48.753821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:44.264 [2024-11-17 02:54:48.753865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:52752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.264 [2024-11-17 02:54:48.753890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:44.264 [2024-11-17 02:54:48.753925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:53024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.264 [2024-11-17 02:54:48.753950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:44.264 [2024-11-17 02:54:48.753985] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:52368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.264 [2024-11-17 02:54:48.754009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:44.264 [2024-11-17 02:54:48.754049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:53208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.264 [2024-11-17 02:54:48.754090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:44.264 [2024-11-17 02:54:48.754163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:52856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.264 [2024-11-17 02:54:48.754190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:44.264 [2024-11-17 02:54:48.754228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:53816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.264 [2024-11-17 02:54:48.754254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:44.264 [2024-11-17 02:54:48.754291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:53832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.264 [2024-11-17 02:54:48.754317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:44.264 [2024-11-17 02:54:48.754353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:53848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.264 [2024-11-17 02:54:48.754395] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:44.264 [2024-11-17 02:54:48.754431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:53864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.264 [2024-11-17 02:54:48.754472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:44.264 [2024-11-17 02:54:48.754507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:53096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.264 [2024-11-17 02:54:48.754532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:44.264 [2024-11-17 02:54:48.754566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:53264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.264 [2024-11-17 02:54:48.754590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:44.264 [2024-11-17 02:54:48.754626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:53752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.264 [2024-11-17 02:54:48.754650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:44.264 [2024-11-17 02:54:48.754685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:53784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.264 [2024-11-17 02:54:48.754709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:44.264 [2024-11-17 02:54:48.754744] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:53560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.264 [2024-11-17 02:54:48.754768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:44.264 [2024-11-17 02:54:48.754802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:53624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.264 [2024-11-17 02:54:48.754827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:44.264 [2024-11-17 02:54:48.754867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:53352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.264 [2024-11-17 02:54:48.754892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:44.264 [2024-11-17 02:54:48.757816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:53568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.264 [2024-11-17 02:54:48.757853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:44.265 [2024-11-17 02:54:48.757899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:53600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.265 [2024-11-17 02:54:48.757928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:44.265 [2024-11-17 02:54:48.757965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:53880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.265 [2024-11-17 02:54:48.757991] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:44.265 [2024-11-17 02:54:48.758028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:53896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.265 [2024-11-17 02:54:48.758055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:44.265 [2024-11-17 02:54:48.758093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:53912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.265 [2024-11-17 02:54:48.758152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:44.265 [2024-11-17 02:54:48.758189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:53928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.265 [2024-11-17 02:54:48.758214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:44.265 [2024-11-17 02:54:48.758249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:53944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.265 [2024-11-17 02:54:48.758274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:44.265 [2024-11-17 02:54:48.758309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:53648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.265 [2024-11-17 02:54:48.758334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:44.265 [2024-11-17 02:54:48.758397] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:53952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.265 [2024-11-17 02:54:48.758423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:44.265 [2024-11-17 02:54:48.758461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:53968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.265 [2024-11-17 02:54:48.758487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:44.265 [2024-11-17 02:54:48.758524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:53984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.265 [2024-11-17 02:54:48.758550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:44.265 [2024-11-17 02:54:48.758587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:53440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.265 [2024-11-17 02:54:48.758618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:44.265 [2024-11-17 02:54:48.758657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:53504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.265 [2024-11-17 02:54:48.758691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:44.265 [2024-11-17 02:54:48.758727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:53360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.265 [2024-11-17 02:54:48.758754] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:44.265 [2024-11-17 02:54:48.758790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:53016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.265 [2024-11-17 02:54:48.758817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:44.265 [2024-11-17 02:54:48.758854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:53736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.265 [2024-11-17 02:54:48.758880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:44.265 [2024-11-17 02:54:48.758933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:52936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.265 [2024-11-17 02:54:48.758958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:44.265 [2024-11-17 02:54:48.758995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:53496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.265 [2024-11-17 02:54:48.759021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:44.265 [2024-11-17 02:54:48.759058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:52616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.265 [2024-11-17 02:54:48.759107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:44.265 [2024-11-17 02:54:48.759161] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:53024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.265 [2024-11-17 02:54:48.759187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:44.265 [2024-11-17 02:54:48.759223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:53208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.265 [2024-11-17 02:54:48.759249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:44.265 [2024-11-17 02:54:48.759285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:53816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.265 [2024-11-17 02:54:48.759311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:44.265 [2024-11-17 02:54:48.759347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:53848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.265 [2024-11-17 02:54:48.759388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:44.265 [2024-11-17 02:54:48.759451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:53096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.265 [2024-11-17 02:54:48.759481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:44.265 [2024-11-17 02:54:48.759517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:53752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.265 [2024-11-17 02:54:48.759541] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:44.265 [2024-11-17 02:54:48.759576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:53560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.265 [2024-11-17 02:54:48.759600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:44.265 [2024-11-17 02:54:48.759634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:53352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.265 [2024-11-17 02:54:48.759659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:44.265 [2024-11-17 02:54:48.759694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:53696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.265 [2024-11-17 02:54:48.759718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:44.265 [2024-11-17 02:54:48.759753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:53728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.265 [2024-11-17 02:54:48.759777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:44.265 [2024-11-17 02:54:48.759811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:53416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.265 [2024-11-17 02:54:48.759837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:44.265 [2024-11-17 02:54:48.759872] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:53480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.265 [2024-11-17 02:54:48.759896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:44.265 [2024-11-17 02:54:48.759982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:53544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.265 [2024-11-17 02:54:48.760008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:44.265 [2024-11-17 02:54:48.760045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:52624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.265 [2024-11-17 02:54:48.760070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:44.265 [2024-11-17 02:54:48.763145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:53160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.266 [2024-11-17 02:54:48.763192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:44.266 [2024-11-17 02:54:48.763237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:54008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.266 [2024-11-17 02:54:48.763265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:44.266 [2024-11-17 02:54:48.763302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:54024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.266 [2024-11-17 02:54:48.763328] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:44.266 [2024-11-17 02:54:48.763370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:54040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.266 [2024-11-17 02:54:48.763398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:44.266 [2024-11-17 02:54:48.763436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:54056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.266 [2024-11-17 02:54:48.763470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:44.266 [2024-11-17 02:54:48.763506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:53760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.266 [2024-11-17 02:54:48.763549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:44.266 [2024-11-17 02:54:48.763587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:53792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.266 [2024-11-17 02:54:48.763613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:44.266 [2024-11-17 02:54:48.763650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:53576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.266 [2024-11-17 02:54:48.763675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:44.266 [2024-11-17 02:54:48.763712] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:53640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.266 [2024-11-17 02:54:48.763738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:44.266 [2024-11-17 02:54:48.763774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:54080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.266 [2024-11-17 02:54:48.763800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:44.266 [2024-11-17 02:54:48.763835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:54096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.266 [2024-11-17 02:54:48.763861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:44.266 [2024-11-17 02:54:48.763896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:54112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.266 [2024-11-17 02:54:48.763922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:44.266 [2024-11-17 02:54:48.763958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:54120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.266 [2024-11-17 02:54:48.763984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:44.266 [2024-11-17 02:54:48.764019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:54136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.266 [2024-11-17 02:54:48.764045] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:44.266 [2024-11-17 02:54:48.764081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:53600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.266 [2024-11-17 02:54:48.764133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:44.266 [2024-11-17 02:54:48.764185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:53896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.266 [2024-11-17 02:54:48.764212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:44.266 [2024-11-17 02:54:48.764250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:53928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.266 [2024-11-17 02:54:48.764276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:44.266 [2024-11-17 02:54:48.764312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:53648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.266 [2024-11-17 02:54:48.764338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:44.266 [2024-11-17 02:54:48.764375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:53968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.266 [2024-11-17 02:54:48.764410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:44.266 [2024-11-17 02:54:48.764447] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:53440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.266 [2024-11-17 02:54:48.764484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:44.266 [2024-11-17 02:54:48.764521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:53360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.266 [2024-11-17 02:54:48.764547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:44.266 [2024-11-17 02:54:48.764600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:53736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.266 [2024-11-17 02:54:48.764626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:44.266 [2024-11-17 02:54:48.764679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:53496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.266 [2024-11-17 02:54:48.764706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:44.266 [2024-11-17 02:54:48.764743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:53024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.266 [2024-11-17 02:54:48.764770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:44.266 [2024-11-17 02:54:48.764806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:53816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.266 [2024-11-17 02:54:48.764833] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:44.266 [2024-11-17 02:54:48.764869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:53096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.266 [2024-11-17 02:54:48.764896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:44.266 [2024-11-17 02:54:48.764932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:53560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.266 [2024-11-17 02:54:48.764973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:44.266 [2024-11-17 02:54:48.765010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:53696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.266 [2024-11-17 02:54:48.765041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:44.266 [2024-11-17 02:54:48.765094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:53416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.266 [2024-11-17 02:54:48.765157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:44.266 [2024-11-17 02:54:48.765195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:53544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.266 [2024-11-17 02:54:48.765220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:44.266 [2024-11-17 02:54:48.765258] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:53688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.266 [2024-11-17 02:54:48.765285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:44.266 [2024-11-17 02:54:48.766277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:53400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.266 [2024-11-17 02:54:48.766309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:44.266 [2024-11-17 02:54:48.766351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:53528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.266 [2024-11-17 02:54:48.766378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:44.266 [2024-11-17 02:54:48.766425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:52920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.266 [2024-11-17 02:54:48.766466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:44.266 [2024-11-17 02:54:48.766503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:54160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.267 [2024-11-17 02:54:48.766529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:44.267 [2024-11-17 02:54:48.766564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:54176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.267 [2024-11-17 02:54:48.766589] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:44.267 [2024-11-17 02:54:48.766623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:54192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.267 [2024-11-17 02:54:48.766648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:44.267 [2024-11-17 02:54:48.766683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:54208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.267 [2024-11-17 02:54:48.766708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:44.267 [2024-11-17 02:54:48.766742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:54224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.267 [2024-11-17 02:54:48.766767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:44.267 [2024-11-17 02:54:48.766818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:54240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.267 [2024-11-17 02:54:48.766866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:44.267 [2024-11-17 02:54:48.766905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:53824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.267 [2024-11-17 02:54:48.766932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:44.267 [2024-11-17 02:54:48.766969] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:53856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.267 [2024-11-17 02:54:48.766996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:44.267 [2024-11-17 02:54:48.768856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:53768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.267 [2024-11-17 02:54:48.768892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:44.267 [2024-11-17 02:54:48.768990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:53592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.267 [2024-11-17 02:54:48.769021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:44.267 [2024-11-17 02:54:48.769060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:54256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.267 [2024-11-17 02:54:48.769088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:44.267 [2024-11-17 02:54:48.769136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:54272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.267 [2024-11-17 02:54:48.769173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:44.267 [2024-11-17 02:54:48.769208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:53888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.267 [2024-11-17 02:54:48.769249] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:44.267 [2024-11-17 02:54:48.769286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:53920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.267 [2024-11-17 02:54:48.769312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:44.267 [2024-11-17 02:54:48.769347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:53960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.267 [2024-11-17 02:54:48.769372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:44.267 [2024-11-17 02:54:48.769431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:53992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.267 [2024-11-17 02:54:48.769465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:44.267 [2024-11-17 02:54:48.769500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:54008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.267 [2024-11-17 02:54:48.769525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:44.267 [2024-11-17 02:54:48.769560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:54040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.267 [2024-11-17 02:54:48.769584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:44.267 [2024-11-17 02:54:48.769624] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:53760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.267 [2024-11-17 02:54:48.769651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:44.267 [2024-11-17 02:54:48.769686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:53576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.267 [2024-11-17 02:54:48.769711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:44.267 [2024-11-17 02:54:48.769746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:54080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.267 [2024-11-17 02:54:48.769771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:44.267 [2024-11-17 02:54:48.769807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:54112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.267 [2024-11-17 02:54:48.769832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:44.267 [2024-11-17 02:54:48.769867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:54136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.267 [2024-11-17 02:54:48.769892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:44.267 [2024-11-17 02:54:48.769927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:53896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.267 [2024-11-17 02:54:48.769952] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:44.267 [2024-11-17 02:54:48.769986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:53648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.267 [2024-11-17 02:54:48.770011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:44.267 [2024-11-17 02:54:48.770045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:53440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.267 [2024-11-17 02:54:48.770070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:44.267 [2024-11-17 02:54:48.770161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:53736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.267 [2024-11-17 02:54:48.770192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:44.267 [2024-11-17 02:54:48.770235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:53024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.267 [2024-11-17 02:54:48.770262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:44.267 [2024-11-17 02:54:48.770315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:53096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.267 [2024-11-17 02:54:48.770341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:44.267 [2024-11-17 02:54:48.770378] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:53696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.267 [2024-11-17 02:54:48.770414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:44.267 [2024-11-17 02:54:48.770456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:53544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.267 [2024-11-17 02:54:48.770484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:44.268 [2024-11-17 02:54:48.770535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:53704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.268 [2024-11-17 02:54:48.770560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:44.268 [2024-11-17 02:54:48.770594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:52752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.268 [2024-11-17 02:54:48.770618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:44.268 [2024-11-17 02:54:48.770653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:53864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.268 [2024-11-17 02:54:48.770677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:44.268 [2024-11-17 02:54:48.770712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:53624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.268 [2024-11-17 02:54:48.770737] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:44.268 [2024-11-17 02:54:48.770773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:53528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.268 [2024-11-17 02:54:48.770798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:44.268 [2024-11-17 02:54:48.770832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:54160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.268 [2024-11-17 02:54:48.770856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:44.268 [2024-11-17 02:54:48.770890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:54192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.268 [2024-11-17 02:54:48.770915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:44.268 [2024-11-17 02:54:48.770951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:54224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.268 [2024-11-17 02:54:48.770976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:44.268 [2024-11-17 02:54:48.771012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:53824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.268 [2024-11-17 02:54:48.771037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:44.268 [2024-11-17 02:54:48.774280] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:54288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.268 [2024-11-17 02:54:48.774317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:44.268 [2024-11-17 02:54:48.774363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:54304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.268 [2024-11-17 02:54:48.774390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:44.268 [2024-11-17 02:54:48.774439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:54320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.268 [2024-11-17 02:54:48.774471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:44.268 [2024-11-17 02:54:48.774511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:54336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.268 [2024-11-17 02:54:48.774553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:44.268 [2024-11-17 02:54:48.774606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:54352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.268 [2024-11-17 02:54:48.774632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:44.268 [2024-11-17 02:54:48.774668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:54368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.268 [2024-11-17 02:54:48.774693] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:44.268 [2024-11-17 02:54:48.774727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:54384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.268 [2024-11-17 02:54:48.774752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:44.268 [2024-11-17 02:54:48.774788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:54400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.268 [2024-11-17 02:54:48.774812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:44.268 [2024-11-17 02:54:48.774847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:54416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.268 [2024-11-17 02:54:48.774872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:44.268 [2024-11-17 02:54:48.774906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:54432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.268 [2024-11-17 02:54:48.774930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:44.268 [2024-11-17 02:54:48.774965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:54000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.268 [2024-11-17 02:54:48.774990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:44.268 [2024-11-17 02:54:48.775024] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:54032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.268 [2024-11-17 02:54:48.775049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:44.268 [2024-11-17 02:54:48.775108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:54064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.268 [2024-11-17 02:54:48.775163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:44.268 [2024-11-17 02:54:48.775202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:53592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.268 [2024-11-17 02:54:48.775228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:44.268 [2024-11-17 02:54:48.775266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:54272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.268 [2024-11-17 02:54:48.775297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:44.268 [2024-11-17 02:54:48.775336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:53920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.268 [2024-11-17 02:54:48.775363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:44.268 [2024-11-17 02:54:48.775409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:53992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.268 [2024-11-17 02:54:48.775435] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:44.268 [2024-11-17 02:54:48.775473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:54040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.268 [2024-11-17 02:54:48.775499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:44.269 [2024-11-17 02:54:48.775537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:53576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.269 [2024-11-17 02:54:48.775563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:44.269 [2024-11-17 02:54:48.775599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:54112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.269 [2024-11-17 02:54:48.775640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:44.269 [2024-11-17 02:54:48.775679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:53896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.269 [2024-11-17 02:54:48.775704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:44.269 [2024-11-17 02:54:48.775741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:53440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.269 [2024-11-17 02:54:48.775780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:44.269 [2024-11-17 02:54:48.775816] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:53024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.269 [2024-11-17 02:54:48.775841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:44.269 [2024-11-17 02:54:48.775876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:53696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.269 [2024-11-17 02:54:48.775900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:44.269 [2024-11-17 02:54:48.775935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:53704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.269 [2024-11-17 02:54:48.775959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:44.269 [2024-11-17 02:54:48.775995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:53864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.269 [2024-11-17 02:54:48.776020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:44.269 [2024-11-17 02:54:48.776055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:53528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.269 [2024-11-17 02:54:48.776102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:44.269 [2024-11-17 02:54:48.776157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:54192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.269 [2024-11-17 02:54:48.776183] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:44.269 [2024-11-17 02:54:48.776219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:53824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.269 [2024-11-17 02:54:48.776244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:44.269 [2024-11-17 02:54:48.776278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:54088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.269 [2024-11-17 02:54:48.776303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:44.269 [2024-11-17 02:54:48.776338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:54128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.269 [2024-11-17 02:54:48.776363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:44.269 [2024-11-17 02:54:48.776399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:53880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.269 [2024-11-17 02:54:48.776428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:44.269 [2024-11-17 02:54:48.776478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:53944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.269 [2024-11-17 02:54:48.776504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:44.269 [2024-11-17 02:54:48.776538] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:53984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.269 [2024-11-17 02:54:48.776564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:44.269 [2024-11-17 02:54:48.776599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:54456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.269 [2024-11-17 02:54:48.776623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:44.269 [2024-11-17 02:54:48.776657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:54472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.269 [2024-11-17 02:54:48.776682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:44.269 [2024-11-17 02:54:48.776717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:53752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.269 [2024-11-17 02:54:48.776741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:44.269 [2024-11-17 02:54:48.776775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:54168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.269 [2024-11-17 02:54:48.776801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:44.269 [2024-11-17 02:54:48.776835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:54200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.269 [2024-11-17 02:54:48.776860] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:44.269 [2024-11-17 02:54:48.776899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:54232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.269 [2024-11-17 02:54:48.776924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:44.269 [2024-11-17 02:54:48.776958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:54488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.269 [2024-11-17 02:54:48.776982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:44.269 [2024-11-17 02:54:48.777016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:54504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.269 [2024-11-17 02:54:48.777042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:44.269 [2024-11-17 02:54:48.777076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:54520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.269 [2024-11-17 02:54:48.777125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:44.269 [2024-11-17 02:54:48.777172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:54248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.269 [2024-11-17 02:54:48.777197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:44.269 [2024-11-17 02:54:48.780250] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:54544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.269 [2024-11-17 02:54:48.780288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:44.269 [2024-11-17 02:54:48.780365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:54560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.269 [2024-11-17 02:54:48.780394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:44.269 [2024-11-17 02:54:48.780459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:54576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.269 [2024-11-17 02:54:48.780485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:44.269 [2024-11-17 02:54:48.780531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:54592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:44.269 [2024-11-17 02:54:48.780572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:44.269 [2024-11-17 02:54:48.780617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:54264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.269 [2024-11-17 02:54:48.780642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:44.269 [2024-11-17 02:54:48.780678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:54024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.269 [2024-11-17 02:54:48.780702] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:34:44.269 5909.94 IOPS, 23.09 MiB/s
[2024-11-17T01:54:52.729Z] 5914.70 IOPS, 23.10 MiB/s
[2024-11-17T01:54:52.729Z] 5922.88 IOPS, 23.14 MiB/s
[2024-11-17T01:54:52.729Z] Received shutdown signal, test time was about 34.450372 seconds
00:34:44.269
00:34:44.269 Latency(us)
00:34:44.269 [2024-11-17T01:54:52.729Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:44.269 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:34:44.269 Verification LBA range: start 0x0 length 0x4000
00:34:44.270 Nvme0n1 : 34.45 5924.18 23.14 0.00 0.00 21571.33 855.61 4026531.84
00:34:44.270 [2024-11-17T01:54:52.730Z] ===================================================================================================================
00:34:44.270 [2024-11-17T01:54:52.730Z] Total : 5924.18 23.14 0.00 0.00 21571.33 855.61 4026531.84
00:34:44.270 02:54:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:34:44.270 02:54:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:34:44.270 02:54:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:34:44.270 02:54:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:34:44.270 02:54:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
00:34:44.270 02:54:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:34:44.270 02:54:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:34:44.270 02:54:52 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:34:44.270 02:54:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:44.270 02:54:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:44.270 rmmod nvme_tcp 00:34:44.270 rmmod nvme_fabrics 00:34:44.270 rmmod nvme_keyring 00:34:44.270 02:54:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:44.270 02:54:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:34:44.270 02:54:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:34:44.270 02:54:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 3104483 ']' 00:34:44.270 02:54:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 3104483 00:34:44.270 02:54:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 3104483 ']' 00:34:44.270 02:54:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 3104483 00:34:44.270 02:54:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:34:44.528 02:54:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:44.528 02:54:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3104483 00:34:44.528 02:54:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:44.528 02:54:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:44.528 02:54:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3104483' 
00:34:44.528 killing process with pid 3104483 02:54:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 3104483
00:34:44.528 02:54:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 3104483
00:34:45.902 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:34:45.902 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:34:45.902 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:34:45.902 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr
00:34:45.902 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save
00:34:45.902 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:34:45.902 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore
00:34:45.902 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:34:45.902 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns
00:34:45.902 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:34:45.902 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:34:45.902 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:34:47.809 02:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:34:47.809
00:34:47.809 real 0m46.545s
00:34:47.809 user 2m20.587s
00:34:47.809 sys 0m10.412s
00:34:47.809 02:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- common/autotest_common.sh@1130 -- # xtrace_disable
00:34:47.809 02:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:34:47.809 ************************************
00:34:47.809 END TEST nvmf_host_multipath_status
00:34:47.809 ************************************
00:34:47.809 02:54:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:34:47.809 02:54:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:34:47.809 02:54:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:34:47.809 02:54:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:34:47.809 ************************************
00:34:47.809 START TEST nvmf_discovery_remove_ifc
00:34:47.809 ************************************
00:34:47.809 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:34:47.809 * Looking for test storage... 
00:34:47.809 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:47.809 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:47.809 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:34:47.809 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:47.809 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:47.809 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:47.809 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:47.809 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:47.809 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:34:47.809 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:34:47.809 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:34:47.809 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:34:47.809 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:34:47.809 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:34:47.809 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:34:47.809 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:47.809 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:34:47.809 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
scripts/common.sh@345 -- # : 1 00:34:47.809 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:47.809 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:47.809 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:34:47.809 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:34:47.809 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:47.809 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:34:47.809 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:34:47.809 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:34:47.809 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:34:47.809 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:47.809 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:34:47.809 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:34:47.809 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:47.809 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:47.809 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:34:47.809 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:47.809 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # 
export 'LCOV_OPTS= 00:34:47.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:47.809 --rc genhtml_branch_coverage=1 00:34:47.809 --rc genhtml_function_coverage=1 00:34:47.809 --rc genhtml_legend=1 00:34:47.809 --rc geninfo_all_blocks=1 00:34:47.809 --rc geninfo_unexecuted_blocks=1 00:34:47.809 00:34:47.809 ' 00:34:47.809 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:47.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:47.809 --rc genhtml_branch_coverage=1 00:34:47.809 --rc genhtml_function_coverage=1 00:34:47.809 --rc genhtml_legend=1 00:34:47.809 --rc geninfo_all_blocks=1 00:34:47.809 --rc geninfo_unexecuted_blocks=1 00:34:47.809 00:34:47.809 ' 00:34:47.809 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:47.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:47.810 --rc genhtml_branch_coverage=1 00:34:47.810 --rc genhtml_function_coverage=1 00:34:47.810 --rc genhtml_legend=1 00:34:47.810 --rc geninfo_all_blocks=1 00:34:47.810 --rc geninfo_unexecuted_blocks=1 00:34:47.810 00:34:47.810 ' 00:34:47.810 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:47.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:47.810 --rc genhtml_branch_coverage=1 00:34:47.810 --rc genhtml_function_coverage=1 00:34:47.810 --rc genhtml_legend=1 00:34:47.810 --rc geninfo_all_blocks=1 00:34:47.810 --rc geninfo_unexecuted_blocks=1 00:34:47.810 00:34:47.810 ' 00:34:47.810 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:47.810 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:34:47.810 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:34:47.810 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:47.810 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:47.810 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:47.810 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:47.810 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:47.810 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:47.810 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:47.810 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:47.810 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:47.810 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:47.810 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:47.810 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:47.810 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:47.810 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:47.810 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:47.810 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:47.810 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:34:47.810 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:47.810 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:47.810 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:47.810 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:47.810 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:47.810 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:47.810 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:34:47.810 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:47.810 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:34:47.810 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:47.810 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:47.810 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:47.810 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:34:47.810 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:47.810 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:47.810 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:47.810 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:47.810 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:47.810 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:47.810 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:34:47.810 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:34:47.810 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:34:47.810 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:34:47.810 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:34:47.810 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:34:47.810 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:34:47.810 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:47.810 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:47.810 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:47.810 
02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:47.810 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:47.810 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:47.810 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:47.810 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:47.810 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:47.810 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:47.810 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:34:47.810 02:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:50.343 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:50.343 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:34:50.343 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:50.343 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:50.343 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:50.343 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:50.343 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:50.343 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:34:50.343 02:54:58 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:50.343 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:34:50.343 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:34:50.343 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:34:50.343 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:34:50.343 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:34:50.343 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:34:50.343 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:50.343 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:50.343 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:50.343 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:50.343 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:50.343 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:50.343 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:50.343 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:50.343 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:50.343 02:54:58 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:50.343 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:50.343 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:50.343 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:50.343 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:50.343 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:50.343 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:50.343 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:50.343 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:50.343 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:50.343 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:50.343 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:50.343 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:50.343 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:50.343 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:50.343 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:50.343 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:50.343 02:54:58 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:50.343 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:50.343 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:50.343 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:50.343 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:50.343 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:50.343 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:50.343 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:50.343 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:50.343 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:50.343 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:50.343 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:50.343 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:50.343 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:50.344 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:50.344 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:50.344 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:50.344 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:50.344 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:50.344 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:50.344 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:50.344 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:50.344 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:50.344 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:50.344 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:50.344 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:50.344 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:50.344 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:50.344 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:50.344 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:50.344 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:50.344 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:50.344 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:34:50.344 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:50.344 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 
-- # [[ tcp == tcp ]] 00:34:50.344 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:50.344 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:50.344 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:50.344 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:50.344 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:50.344 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:50.344 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:50.344 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:50.344 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:50.344 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:50.344 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:50.344 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:50.344 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:50.344 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:50.344 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:50.344 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:34:50.344 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:50.344 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:50.344 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:50.344 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:50.344 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:50.344 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:50.344 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:50.344 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:50.344 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:50.344 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:34:50.344 00:34:50.344 --- 10.0.0.2 ping statistics --- 00:34:50.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:50.344 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:34:50.344 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:50.344 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:50.344 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:34:50.344 00:34:50.344 --- 10.0.0.1 ping statistics --- 00:34:50.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:50.344 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:34:50.344 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:50.344 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:34:50.344 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:50.344 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:50.344 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:50.344 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:50.344 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:50.344 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:50.344 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:50.344 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:34:50.344 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:50.344 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:50.344 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:50.344 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=3111513 00:34:50.344 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:34:50.344 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 3111513 00:34:50.344 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 3111513 ']' 00:34:50.344 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:50.344 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:50.344 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:50.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:50.344 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:50.344 02:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:50.344 [2024-11-17 02:54:58.530829] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:34:50.344 [2024-11-17 02:54:58.530976] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:50.344 [2024-11-17 02:54:58.685450] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:50.602 [2024-11-17 02:54:58.827315] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:50.602 [2024-11-17 02:54:58.827406] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:34:50.602 [2024-11-17 02:54:58.827429] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:50.602 [2024-11-17 02:54:58.827466] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:50.602 [2024-11-17 02:54:58.827482] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:50.602 [2024-11-17 02:54:58.829040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:51.168 02:54:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:51.168 02:54:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:34:51.168 02:54:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:51.168 02:54:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:51.168 02:54:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:51.168 02:54:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:51.168 02:54:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:34:51.168 02:54:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.168 02:54:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:51.168 [2024-11-17 02:54:59.506691] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:51.168 [2024-11-17 02:54:59.515025] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:34:51.168 null0 00:34:51.168 [2024-11-17 02:54:59.546878] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:34:51.169 02:54:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.169 02:54:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3111661 00:34:51.169 02:54:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:34:51.169 02:54:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3111661 /tmp/host.sock 00:34:51.169 02:54:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 3111661 ']' 00:34:51.169 02:54:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:34:51.169 02:54:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:51.169 02:54:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:34:51.169 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:34:51.169 02:54:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:51.169 02:54:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:51.427 [2024-11-17 02:54:59.660328] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:34:51.427 [2024-11-17 02:54:59.660518] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3111661 ] 00:34:51.427 [2024-11-17 02:54:59.802776] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:51.684 [2024-11-17 02:54:59.939402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:52.250 02:55:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:52.250 02:55:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:34:52.250 02:55:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:52.250 02:55:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:34:52.250 02:55:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.250 02:55:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:52.250 02:55:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.250 02:55:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:34:52.250 02:55:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.250 02:55:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:52.817 02:55:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.817 02:55:00 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:34:52.817 02:55:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.817 02:55:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:53.750 [2024-11-17 02:55:02.040279] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:53.750 [2024-11-17 02:55:02.040334] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:53.750 [2024-11-17 02:55:02.040380] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:53.750 [2024-11-17 02:55:02.126672] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:34:54.009 [2024-11-17 02:55:02.228104] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:34:54.009 [2024-11-17 02:55:02.229834] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x6150001f2c80:1 started. 
00:34:54.009 [2024-11-17 02:55:02.231998] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:34:54.009 [2024-11-17 02:55:02.232093] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:34:54.009 [2024-11-17 02:55:02.232194] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:34:54.009 [2024-11-17 02:55:02.232231] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:54.009 [2024-11-17 02:55:02.232288] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:54.009 02:55:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.009 02:55:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:34:54.009 02:55:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:54.009 02:55:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:54.009 02:55:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:54.009 02:55:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.009 02:55:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:54.009 02:55:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:54.009 02:55:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:54.009 [2024-11-17 02:55:02.237713] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x6150001f2c80 was disconnected and freed. delete nvme_qpair. 
00:34:54.009 02:55:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.009 02:55:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:34:54.009 02:55:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:34:54.009 02:55:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:34:54.009 02:55:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:34:54.009 02:55:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:54.009 02:55:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:54.009 02:55:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:54.009 02:55:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.009 02:55:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:54.009 02:55:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:54.009 02:55:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:54.009 02:55:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.009 02:55:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:54.009 02:55:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:54.942 02:55:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:54.942 02:55:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:54.943 02:55:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:54.943 02:55:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.943 02:55:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:54.943 02:55:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:54.943 02:55:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:54.943 02:55:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.200 02:55:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:55.200 02:55:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:56.213 02:55:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:56.213 02:55:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:56.213 02:55:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:56.213 02:55:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.213 02:55:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:56.213 02:55:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:56.213 02:55:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 
00:34:56.213 02:55:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.213 02:55:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:56.213 02:55:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:57.146 02:55:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:57.146 02:55:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:57.146 02:55:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.147 02:55:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:57.147 02:55:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:57.147 02:55:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:57.147 02:55:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:57.147 02:55:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.147 02:55:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:57.147 02:55:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:58.079 02:55:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:58.079 02:55:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:58.079 02:55:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:58.079 02:55:06 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.079 02:55:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:58.079 02:55:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:58.080 02:55:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:58.080 02:55:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.337 02:55:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:58.337 02:55:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:59.271 02:55:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:59.271 02:55:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:59.271 02:55:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.271 02:55:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:59.271 02:55:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:59.271 02:55:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:59.271 02:55:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:59.271 02:55:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.271 02:55:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:59.271 02:55:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # 
sleep 1 00:34:59.271 [2024-11-17 02:55:07.673465] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:34:59.271 [2024-11-17 02:55:07.673587] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:59.271 [2024-11-17 02:55:07.673619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:59.271 [2024-11-17 02:55:07.673649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:59.271 [2024-11-17 02:55:07.673669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:59.271 [2024-11-17 02:55:07.673688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:59.271 [2024-11-17 02:55:07.673708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:59.271 [2024-11-17 02:55:07.673728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:59.271 [2024-11-17 02:55:07.673747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:59.271 [2024-11-17 02:55:07.673766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:34:59.271 [2024-11-17 02:55:07.673785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:59.271 [2024-11-17 02:55:07.673804] 
nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2780 is same with the state(6) to be set 00:34:59.271 [2024-11-17 02:55:07.683476] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2780 (9): Bad file descriptor 00:34:59.271 [2024-11-17 02:55:07.693527] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:34:59.271 [2024-11-17 02:55:07.693570] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:34:59.271 [2024-11-17 02:55:07.693591] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:59.271 [2024-11-17 02:55:07.693608] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:59.271 [2024-11-17 02:55:07.693682] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:35:00.205 02:55:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:00.205 02:55:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:00.205 02:55:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:00.205 02:55:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.205 02:55:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:00.205 02:55:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:00.205 02:55:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:00.463 [2024-11-17 02:55:08.740148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:35:00.463 [2024-11-17 02:55:08.740220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:00.463 [2024-11-17 02:55:08.740268] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2780 is same with the state(6) to be set 00:35:00.463 [2024-11-17 02:55:08.740316] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2780 (9): Bad file descriptor 00:35:00.463 [2024-11-17 02:55:08.740905] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 
00:35:00.463 [2024-11-17 02:55:08.740981] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:35:00.463 [2024-11-17 02:55:08.741031] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:35:00.463 [2024-11-17 02:55:08.741056] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:35:00.463 [2024-11-17 02:55:08.741076] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:35:00.463 [2024-11-17 02:55:08.741093] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:35:00.463 [2024-11-17 02:55:08.741138] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:35:00.463 [2024-11-17 02:55:08.741159] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:35:00.463 [2024-11-17 02:55:08.741188] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:35:00.463 02:55:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.463 02:55:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:00.463 02:55:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:01.396 [2024-11-17 02:55:09.743702] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:35:01.396 [2024-11-17 02:55:09.743750] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:35:01.396 [2024-11-17 02:55:09.743781] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:35:01.396 [2024-11-17 02:55:09.743813] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:35:01.396 [2024-11-17 02:55:09.743850] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:35:01.396 [2024-11-17 02:55:09.743870] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:35:01.396 [2024-11-17 02:55:09.743884] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:35:01.396 [2024-11-17 02:55:09.743895] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:35:01.396 [2024-11-17 02:55:09.743956] bdev_nvme.c:7135:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:35:01.396 [2024-11-17 02:55:09.744024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:01.396 [2024-11-17 02:55:09.744053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.396 [2024-11-17 02:55:09.744103] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:01.396 [2024-11-17 02:55:09.744126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.396 [2024-11-17 02:55:09.744163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:35:01.396 [2024-11-17 02:55:09.744183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.396 [2024-11-17 02:55:09.744205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:01.396 [2024-11-17 02:55:09.744225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.396 [2024-11-17 02:55:09.744246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:35:01.396 [2024-11-17 02:55:09.744266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.396 [2024-11-17 02:55:09.744285] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:35:01.396 [2024-11-17 02:55:09.744399] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:35:01.396 [2024-11-17 02:55:09.745404] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:35:01.396 [2024-11-17 02:55:09.745438] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:35:01.396 02:55:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:01.396 02:55:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:01.396 02:55:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:01.396 02:55:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:35:01.396 02:55:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:01.396 02:55:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:01.396 02:55:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:01.396 02:55:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.396 02:55:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:35:01.396 02:55:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:01.396 02:55:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:01.396 02:55:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:35:01.396 02:55:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:01.396 02:55:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:01.396 02:55:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:01.396 02:55:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.396 02:55:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:01.396 02:55:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:01.396 02:55:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:01.396 02:55:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:35:01.655 02:55:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:35:01.655 02:55:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:02.586 02:55:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:02.586 02:55:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:02.586 02:55:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:02.587 02:55:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.587 02:55:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:02.587 02:55:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:02.587 02:55:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:02.587 02:55:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.587 02:55:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:35:02.587 02:55:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:03.521 [2024-11-17 02:55:11.807346] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:35:03.521 [2024-11-17 02:55:11.807410] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:35:03.521 [2024-11-17 02:55:11.807473] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:35:03.521 [2024-11-17 02:55:11.893757] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:35:03.521 02:55:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:03.521 02:55:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:03.521 02:55:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:03.521 02:55:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.521 02:55:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:03.521 02:55:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:03.521 02:55:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:03.521 02:55:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.521 02:55:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:35:03.521 02:55:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:03.780 [2024-11-17 02:55:12.076441] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:35:03.780 [2024-11-17 02:55:12.078214] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x6150001f3900:1 started. 
00:35:03.780 [2024-11-17 02:55:12.080579] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:35:03.780 [2024-11-17 02:55:12.080657] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:35:03.780 [2024-11-17 02:55:12.080741] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:35:03.780 [2024-11-17 02:55:12.080783] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:35:03.780 [2024-11-17 02:55:12.080812] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:35:03.780 [2024-11-17 02:55:12.085068] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x6150001f3900 was disconnected and freed. delete nvme_qpair. 00:35:04.715 02:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:04.715 02:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:04.715 02:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:04.715 02:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.715 02:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:04.715 02:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:04.715 02:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:04.715 02:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.715 02:55:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:35:04.715 02:55:13 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:35:04.715 02:55:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3111661 00:35:04.715 02:55:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 3111661 ']' 00:35:04.715 02:55:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 3111661 00:35:04.715 02:55:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:35:04.715 02:55:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:04.715 02:55:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3111661 00:35:04.715 02:55:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:04.715 02:55:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:04.715 02:55:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3111661' 00:35:04.715 killing process with pid 3111661 00:35:04.715 02:55:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 3111661 00:35:04.715 02:55:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 3111661 00:35:05.652 02:55:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:35:05.652 02:55:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:05.652 02:55:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:35:05.652 02:55:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:05.652 
02:55:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:35:05.652 02:55:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:05.652 02:55:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:05.652 rmmod nvme_tcp 00:35:05.652 rmmod nvme_fabrics 00:35:05.652 rmmod nvme_keyring 00:35:05.652 02:55:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:05.652 02:55:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:35:05.652 02:55:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:35:05.652 02:55:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 3111513 ']' 00:35:05.652 02:55:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 3111513 00:35:05.652 02:55:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 3111513 ']' 00:35:05.652 02:55:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 3111513 00:35:05.652 02:55:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:35:05.652 02:55:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:05.652 02:55:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3111513 00:35:05.652 02:55:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:05.652 02:55:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:05.652 02:55:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3111513' 00:35:05.652 
killing process with pid 3111513 00:35:05.652 02:55:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 3111513 00:35:05.652 02:55:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 3111513 00:35:07.028 02:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:07.028 02:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:07.028 02:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:07.028 02:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:35:07.028 02:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:35:07.028 02:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:07.028 02:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:35:07.028 02:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:07.028 02:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:07.028 02:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:07.028 02:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:07.028 02:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:08.935 02:55:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:08.935 00:35:08.935 real 0m21.085s 00:35:08.935 user 0m30.740s 00:35:08.935 sys 0m3.439s 00:35:08.935 02:55:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:35:08.935 02:55:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:08.935 ************************************ 00:35:08.935 END TEST nvmf_discovery_remove_ifc 00:35:08.936 ************************************ 00:35:08.936 02:55:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:35:08.936 02:55:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:08.936 02:55:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:08.936 02:55:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.936 ************************************ 00:35:08.936 START TEST nvmf_identify_kernel_target 00:35:08.936 ************************************ 00:35:08.936 02:55:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:35:08.936 * Looking for test storage... 
00:35:08.936 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:08.936 02:55:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:08.936 02:55:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:35:08.936 02:55:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:08.936 02:55:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:08.936 02:55:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:08.936 02:55:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:08.936 02:55:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:08.936 02:55:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:35:08.936 02:55:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:35:08.936 02:55:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:35:08.936 02:55:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:35:08.936 02:55:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:35:08.936 02:55:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:35:08.936 02:55:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:35:08.936 02:55:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:08.936 02:55:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:35:08.936 02:55:17 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:35:08.936 02:55:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:08.936 02:55:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:08.936 02:55:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:35:08.936 02:55:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:35:08.936 02:55:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:08.936 02:55:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:35:08.936 02:55:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:35:08.936 02:55:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:35:08.936 02:55:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:35:08.936 02:55:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:08.936 02:55:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:35:08.936 02:55:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:35:08.936 02:55:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:08.936 02:55:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:08.936 02:55:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:35:08.936 02:55:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:08.936 02:55:17 
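The lcov version gate traced above goes through `cmp_versions` in scripts/common.sh, which splits each version string on `.`, `-`, and `:` and compares the fields numerically, padding the shorter version with zeros. A minimal sketch of that comparison logic (the `ver_lt` name is illustrative, not the harness's own):

```shell
#!/usr/bin/env bash
# Hypothetical re-implementation of the component-wise version compare
# traced above (cmp_versions in scripts/common.sh); "ver_lt" is an
# illustrative name, not the function the harness defines.
ver_lt() {
    local IFS=.-:                      # same field separators the harness uses
    local -a v1=($1) v2=($2)
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}   # missing components compare as 0
        (( 10#$a < 10#$b )) && return 0     # strictly less in this component
        (( 10#$a > 10#$b )) && return 1
    done
    return 1                                # all components equal: not less-than
}

ver_lt 1.15 2 && echo "lcov 1.15 predates 2.x"
ver_lt 2.0 1.15 || echo "2.0 is not older than 1.15"
```

This is why the run above takes the "lt 1.15 2" branch and enables the legacy `--rc lcov_branch_coverage=1` style options rather than the 2.x flag spelling.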
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:08.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:08.936 --rc genhtml_branch_coverage=1 00:35:08.936 --rc genhtml_function_coverage=1 00:35:08.936 --rc genhtml_legend=1 00:35:08.936 --rc geninfo_all_blocks=1 00:35:08.936 --rc geninfo_unexecuted_blocks=1 00:35:08.936 00:35:08.936 ' 00:35:08.936 02:55:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:08.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:08.936 --rc genhtml_branch_coverage=1 00:35:08.936 --rc genhtml_function_coverage=1 00:35:08.936 --rc genhtml_legend=1 00:35:08.936 --rc geninfo_all_blocks=1 00:35:08.936 --rc geninfo_unexecuted_blocks=1 00:35:08.936 00:35:08.936 ' 00:35:08.936 02:55:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:08.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:08.936 --rc genhtml_branch_coverage=1 00:35:08.936 --rc genhtml_function_coverage=1 00:35:08.936 --rc genhtml_legend=1 00:35:08.936 --rc geninfo_all_blocks=1 00:35:08.936 --rc geninfo_unexecuted_blocks=1 00:35:08.936 00:35:08.936 ' 00:35:08.936 02:55:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:08.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:08.936 --rc genhtml_branch_coverage=1 00:35:08.936 --rc genhtml_function_coverage=1 00:35:08.936 --rc genhtml_legend=1 00:35:08.936 --rc geninfo_all_blocks=1 00:35:08.936 --rc geninfo_unexecuted_blocks=1 00:35:08.936 00:35:08.936 ' 00:35:08.936 02:55:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:08.936 02:55:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 
00:35:08.936 02:55:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:08.936 02:55:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:08.936 02:55:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:08.936 02:55:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:08.936 02:55:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:08.936 02:55:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:08.936 02:55:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:08.936 02:55:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:08.936 02:55:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:08.936 02:55:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:08.936 02:55:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:08.936 02:55:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:08.936 02:55:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:08.936 02:55:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:08.936 02:55:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:08.936 02:55:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:08.936 02:55:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:08.936 02:55:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:35:08.936 02:55:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:08.936 02:55:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:08.936 02:55:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:08.936 02:55:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:08.936 02:55:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:08.936 02:55:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:08.936 02:55:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:35:08.937 02:55:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:08.937 02:55:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:35:08.937 02:55:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:08.937 02:55:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:08.937 02:55:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:08.937 02:55:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:08.937 02:55:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:08.937 02:55:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:08.937 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:08.937 02:55:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:08.937 02:55:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:08.937 02:55:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:08.937 02:55:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
00:35:08.937 02:55:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:08.937 02:55:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:08.937 02:55:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:08.937 02:55:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:08.937 02:55:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:08.937 02:55:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:08.937 02:55:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:08.937 02:55:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:08.937 02:55:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:08.937 02:55:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:08.937 02:55:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:35:08.937 02:55:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:35:11.470 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:11.470 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:35:11.470 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:11.470 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:11.470 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:11.470 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:11.470 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:11.470 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:35:11.470 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:11.470 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:35:11.470 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:35:11.470 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:35:11.470 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:35:11.470 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:35:11.470 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:35:11.470 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:11.470 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:11.471 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:11.471 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:11.471 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:11.471 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:11.471 02:55:19 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:11.471 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:11.471 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:11.471 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:11.471 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:11.471 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:11.471 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:11.471 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:11.471 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:11.471 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:11.471 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:11.471 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:11.471 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:11.471 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:11.471 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:11.471 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:11.471 02:55:19 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:11.471 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:11.471 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:11.471 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:11.471 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:11.471 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:11.471 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:11.471 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:11.471 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:11.471 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:11.471 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:11.471 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:11.471 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:11.471 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:11.471 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:11.471 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:11.471 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:11.471 02:55:19 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:11.471 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:11.471 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:11.471 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:11.471 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:11.471 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:11.471 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:11.471 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:11.471 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:11.471 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:11.471 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:11.471 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:11.471 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:11.471 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:11.471 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:11.471 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:11.471 Found net devices under 0000:0a:00.1: cvl_0_1 
00:35:11.471 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:11.471 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:11.471 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:35:11.471 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:11.471 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:11.471 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:11.471 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:11.471 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:11.471 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:11.471 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:11.471 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:11.471 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:11.471 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:11.471 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:11.471 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:11.471 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:11.471 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:11.471 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:11.471 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:11.471 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:11.471 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:11.471 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:11.471 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:11.471 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:11.471 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:11.471 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:11.471 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:11.471 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:11.471 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:11.471 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:35:11.471 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:35:11.471 00:35:11.471 --- 10.0.0.2 ping statistics --- 00:35:11.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:11.471 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:35:11.471 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:11.471 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:11.471 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:35:11.471 00:35:11.471 --- 10.0.0.1 ping statistics --- 00:35:11.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:11.471 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:35:11.471 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:11.471 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:35:11.471 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:11.471 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:11.471 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:11.471 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:11.471 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:11.471 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:11.471 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:11.472 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:35:11.472 
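The ping exchange above confirms the loopback topology `nvmf_tcp_init` builds: one port of the physical NIC pair is moved into a network namespace to play the target, while the other port stays in the root namespace as the initiator. Condensed from the `ip` commands in the trace (interface names and addresses follow the log; requires root):

```shell
# Sketch of the namespace-based test topology assembled above
# (nvmf/common.sh nvmf_tcp_init); run as root on the test host.
ip netns add cvl_0_0_ns_spdk                  # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move one NIC port into it
ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator IP in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
ping -c 1 10.0.0.2                            # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator
```

Both pings succeeding is the precondition for `nvmftestinit` to return 0 and for the TCP transport options to be set.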
02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:35:11.472 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:35:11.472 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:11.472 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:11.472 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:11.472 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:11.472 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:11.472 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:11.472 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:11.472 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:11.472 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:11.472 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:35:11.472 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:35:11.472 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:35:11.472 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:35:11.472 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:11.472 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:11.472 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:11.472 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:35:11.472 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:35:11.472 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:35:11.472 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:11.472 02:55:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:12.409 Waiting for block devices as requested 00:35:12.409 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:35:12.409 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:12.409 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:12.668 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:12.668 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:12.668 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:12.668 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:12.927 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:12.927 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:12.927 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:12.927 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:13.186 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:13.186 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:13.186 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:13.445 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 
00:35:13.445 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:13.445 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:13.703 02:55:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:35:13.703 02:55:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:13.704 02:55:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:35:13.704 02:55:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:35:13.704 02:55:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:35:13.704 02:55:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:35:13.704 02:55:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:35:13.704 02:55:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:35:13.704 02:55:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:13.704 No valid GPT data, bailing 00:35:13.704 02:55:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:13.704 02:55:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:35:13.704 02:55:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:35:13.704 02:55:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:35:13.704 02:55:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:35:13.704 02:55:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:13.704 02:55:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:13.704 02:55:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:13.704 02:55:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:35:13.704 02:55:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:35:13.704 02:55:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:35:13.704 02:55:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:35:13.704 02:55:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:35:13.704 02:55:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:35:13.704 02:55:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:35:13.704 02:55:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:35:13.704 02:55:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:13.704 02:55:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:35:13.704 00:35:13.704 Discovery Log Number of Records 2, Generation counter 2 00:35:13.704 =====Discovery Log Entry 0====== 00:35:13.704 trtype: tcp 00:35:13.704 adrfam: ipv4 00:35:13.704 subtype: current discovery subsystem 
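The configfs sequence traced above (nvmf/common.sh@686-705) builds a kernel NVMe-oF/TCP target by hand. A dry-run sketch of those steps follows: paths, NQN, and values come from the log, but the attribute file names (attr_model, attr_allow_any_host, device_path, enable, addr_*) are assumptions based on the standard nvmet configfs layout, since the xtrace shows only the bare `echo` values. Commands are collected and printed rather than executed, so no root or configfs mount is needed to review the sequence.

```shell
# Dry-run sketch of the kernel nvmet target setup traced above.
# Attribute file names are assumptions (see lead-in); nothing is executed.
NQN=nqn.2016-06.io.spdk:testnqn
SUBSYS=/sys/kernel/config/nvmet/subsystems/$NQN
PORT=/sys/kernel/config/nvmet/ports/1
PLAN=

plan() { PLAN="${PLAN}$*
"; }

plan "mkdir $SUBSYS"
plan "mkdir $SUBSYS/namespaces/1"
plan "mkdir $PORT"
plan "echo SPDK-$NQN > $SUBSYS/attr_model"
plan "echo 1 > $SUBSYS/attr_allow_any_host"
plan "echo /dev/nvme0n1 > $SUBSYS/namespaces/1/device_path"
plan "echo 1 > $SUBSYS/namespaces/1/enable"
plan "echo 10.0.0.1 > $PORT/addr_traddr"
plan "echo tcp > $PORT/addr_trtype"
plan "echo 4420 > $PORT/addr_trsvcid"
plan "echo ipv4 > $PORT/addr_adrfam"
plan "ln -s $SUBSYS $PORT/subsystems/"
printf '%s' "$PLAN"
```

The final `ln -s` is what exposes the subsystem on the port; everything before it only stages the subsystem and listener.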
00:35:13.704 treq: not specified, sq flow control disable supported 00:35:13.704 portid: 1 00:35:13.704 trsvcid: 4420 00:35:13.704 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:13.704 traddr: 10.0.0.1 00:35:13.704 eflags: none 00:35:13.704 sectype: none 00:35:13.704 =====Discovery Log Entry 1====== 00:35:13.704 trtype: tcp 00:35:13.704 adrfam: ipv4 00:35:13.704 subtype: nvme subsystem 00:35:13.704 treq: not specified, sq flow control disable supported 00:35:13.704 portid: 1 00:35:13.704 trsvcid: 4420 00:35:13.704 subnqn: nqn.2016-06.io.spdk:testnqn 00:35:13.704 traddr: 10.0.0.1 00:35:13.704 eflags: none 00:35:13.704 sectype: none 00:35:13.704 02:55:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:35:13.704 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:35:13.963 ===================================================== 00:35:13.963 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:35:13.963 ===================================================== 00:35:13.963 Controller Capabilities/Features 00:35:13.963 ================================ 00:35:13.963 Vendor ID: 0000 00:35:13.963 Subsystem Vendor ID: 0000 00:35:13.963 Serial Number: 64015f2baeb70df5fcbf 00:35:13.963 Model Number: Linux 00:35:13.963 Firmware Version: 6.8.9-20 00:35:13.963 Recommended Arb Burst: 0 00:35:13.963 IEEE OUI Identifier: 00 00 00 00:35:13.963 Multi-path I/O 00:35:13.963 May have multiple subsystem ports: No 00:35:13.963 May have multiple controllers: No 00:35:13.963 Associated with SR-IOV VF: No 00:35:13.963 Max Data Transfer Size: Unlimited 00:35:13.963 Max Number of Namespaces: 0 00:35:13.963 Max Number of I/O Queues: 1024 00:35:13.963 NVMe Specification Version (VS): 1.3 00:35:13.963 NVMe Specification Version (Identify): 1.3 00:35:13.963 Maximum Queue Entries: 1024 
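The `nvme discover` records above are flat `key: value` text, one block per `=====Discovery Log Entry N======` header. A minimal awk sketch that pulls subnqn/traddr/trsvcid out of each record (sample input copied from the log output above):

```shell
# Extract (subnqn, traddr, trsvcid) per discovery log record.
# The heredoc is sample text taken from the `nvme discover` output above.
records=$(awk '
  /^=====Discovery Log Entry/ { if (n) print subnqn, traddr, trsvcid; n++ }
  $1 == "subnqn:"  { subnqn = $2 }
  $1 == "traddr:"  { traddr = $2 }
  $1 == "trsvcid:" { trsvcid = $2 }
  END { if (n) print subnqn, traddr, trsvcid }   # flush the last record
' <<'EOF'
=====Discovery Log Entry 0======
trtype:  tcp
adrfam:  ipv4
subtype: current discovery subsystem
portid:  1
trsvcid: 4420
subnqn:  nqn.2014-08.org.nvmexpress.discovery
traddr:  10.0.0.1
=====Discovery Log Entry 1======
trtype:  tcp
adrfam:  ipv4
subtype: nvme subsystem
portid:  1
trsvcid: 4420
subnqn:  nqn.2016-06.io.spdk:testnqn
traddr:  10.0.0.1
EOF
)
printf '%s\n' "$records"
# → nqn.2014-08.org.nvmexpress.discovery 10.0.0.1 4420
# → nqn.2016-06.io.spdk:testnqn 10.0.0.1 4420
```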
00:35:13.963 Contiguous Queues Required: No 00:35:13.963 Arbitration Mechanisms Supported 00:35:13.963 Weighted Round Robin: Not Supported 00:35:13.963 Vendor Specific: Not Supported 00:35:13.963 Reset Timeout: 7500 ms 00:35:13.963 Doorbell Stride: 4 bytes 00:35:13.963 NVM Subsystem Reset: Not Supported 00:35:13.963 Command Sets Supported 00:35:13.963 NVM Command Set: Supported 00:35:13.963 Boot Partition: Not Supported 00:35:13.963 Memory Page Size Minimum: 4096 bytes 00:35:13.963 Memory Page Size Maximum: 4096 bytes 00:35:13.963 Persistent Memory Region: Not Supported 00:35:13.963 Optional Asynchronous Events Supported 00:35:13.963 Namespace Attribute Notices: Not Supported 00:35:13.963 Firmware Activation Notices: Not Supported 00:35:13.963 ANA Change Notices: Not Supported 00:35:13.963 PLE Aggregate Log Change Notices: Not Supported 00:35:13.963 LBA Status Info Alert Notices: Not Supported 00:35:13.963 EGE Aggregate Log Change Notices: Not Supported 00:35:13.963 Normal NVM Subsystem Shutdown event: Not Supported 00:35:13.963 Zone Descriptor Change Notices: Not Supported 00:35:13.963 Discovery Log Change Notices: Supported 00:35:13.963 Controller Attributes 00:35:13.963 128-bit Host Identifier: Not Supported 00:35:13.963 Non-Operational Permissive Mode: Not Supported 00:35:13.963 NVM Sets: Not Supported 00:35:13.963 Read Recovery Levels: Not Supported 00:35:13.963 Endurance Groups: Not Supported 00:35:13.963 Predictable Latency Mode: Not Supported 00:35:13.963 Traffic Based Keep ALive: Not Supported 00:35:13.963 Namespace Granularity: Not Supported 00:35:13.963 SQ Associations: Not Supported 00:35:13.963 UUID List: Not Supported 00:35:13.963 Multi-Domain Subsystem: Not Supported 00:35:13.963 Fixed Capacity Management: Not Supported 00:35:13.963 Variable Capacity Management: Not Supported 00:35:13.963 Delete Endurance Group: Not Supported 00:35:13.963 Delete NVM Set: Not Supported 00:35:13.963 Extended LBA Formats Supported: Not Supported 00:35:13.963 Flexible 
Data Placement Supported: Not Supported 00:35:13.963 00:35:13.963 Controller Memory Buffer Support 00:35:13.963 ================================ 00:35:13.963 Supported: No 00:35:13.963 00:35:13.963 Persistent Memory Region Support 00:35:13.964 ================================ 00:35:13.964 Supported: No 00:35:13.964 00:35:13.964 Admin Command Set Attributes 00:35:13.964 ============================ 00:35:13.964 Security Send/Receive: Not Supported 00:35:13.964 Format NVM: Not Supported 00:35:13.964 Firmware Activate/Download: Not Supported 00:35:13.964 Namespace Management: Not Supported 00:35:13.964 Device Self-Test: Not Supported 00:35:13.964 Directives: Not Supported 00:35:13.964 NVMe-MI: Not Supported 00:35:13.964 Virtualization Management: Not Supported 00:35:13.964 Doorbell Buffer Config: Not Supported 00:35:13.964 Get LBA Status Capability: Not Supported 00:35:13.964 Command & Feature Lockdown Capability: Not Supported 00:35:13.964 Abort Command Limit: 1 00:35:13.964 Async Event Request Limit: 1 00:35:13.964 Number of Firmware Slots: N/A 00:35:13.964 Firmware Slot 1 Read-Only: N/A 00:35:13.964 Firmware Activation Without Reset: N/A 00:35:13.964 Multiple Update Detection Support: N/A 00:35:13.964 Firmware Update Granularity: No Information Provided 00:35:13.964 Per-Namespace SMART Log: No 00:35:13.964 Asymmetric Namespace Access Log Page: Not Supported 00:35:13.964 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:35:13.964 Command Effects Log Page: Not Supported 00:35:13.964 Get Log Page Extended Data: Supported 00:35:13.964 Telemetry Log Pages: Not Supported 00:35:13.964 Persistent Event Log Pages: Not Supported 00:35:13.964 Supported Log Pages Log Page: May Support 00:35:13.964 Commands Supported & Effects Log Page: Not Supported 00:35:13.964 Feature Identifiers & Effects Log Page:May Support 00:35:13.964 NVMe-MI Commands & Effects Log Page: May Support 00:35:13.964 Data Area 4 for Telemetry Log: Not Supported 00:35:13.964 Error Log Page Entries 
Supported: 1 00:35:13.964 Keep Alive: Not Supported 00:35:13.964 00:35:13.964 NVM Command Set Attributes 00:35:13.964 ========================== 00:35:13.964 Submission Queue Entry Size 00:35:13.964 Max: 1 00:35:13.964 Min: 1 00:35:13.964 Completion Queue Entry Size 00:35:13.964 Max: 1 00:35:13.964 Min: 1 00:35:13.964 Number of Namespaces: 0 00:35:13.964 Compare Command: Not Supported 00:35:13.964 Write Uncorrectable Command: Not Supported 00:35:13.964 Dataset Management Command: Not Supported 00:35:13.964 Write Zeroes Command: Not Supported 00:35:13.964 Set Features Save Field: Not Supported 00:35:13.964 Reservations: Not Supported 00:35:13.964 Timestamp: Not Supported 00:35:13.964 Copy: Not Supported 00:35:13.964 Volatile Write Cache: Not Present 00:35:13.964 Atomic Write Unit (Normal): 1 00:35:13.964 Atomic Write Unit (PFail): 1 00:35:13.964 Atomic Compare & Write Unit: 1 00:35:13.964 Fused Compare & Write: Not Supported 00:35:13.964 Scatter-Gather List 00:35:13.964 SGL Command Set: Supported 00:35:13.964 SGL Keyed: Not Supported 00:35:13.964 SGL Bit Bucket Descriptor: Not Supported 00:35:13.964 SGL Metadata Pointer: Not Supported 00:35:13.964 Oversized SGL: Not Supported 00:35:13.964 SGL Metadata Address: Not Supported 00:35:13.964 SGL Offset: Supported 00:35:13.964 Transport SGL Data Block: Not Supported 00:35:13.964 Replay Protected Memory Block: Not Supported 00:35:13.964 00:35:13.964 Firmware Slot Information 00:35:13.964 ========================= 00:35:13.964 Active slot: 0 00:35:13.964 00:35:13.964 00:35:13.964 Error Log 00:35:13.964 ========= 00:35:13.964 00:35:13.964 Active Namespaces 00:35:13.964 ================= 00:35:13.964 Discovery Log Page 00:35:13.964 ================== 00:35:13.964 Generation Counter: 2 00:35:13.964 Number of Records: 2 00:35:13.964 Record Format: 0 00:35:13.964 00:35:13.964 Discovery Log Entry 0 00:35:13.964 ---------------------- 00:35:13.964 Transport Type: 3 (TCP) 00:35:13.964 Address Family: 1 (IPv4) 00:35:13.964 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:35:13.964 Entry Flags: 00:35:13.964 Duplicate Returned Information: 0 00:35:13.964 Explicit Persistent Connection Support for Discovery: 0 00:35:13.964 Transport Requirements: 00:35:13.964 Secure Channel: Not Specified 00:35:13.964 Port ID: 1 (0x0001) 00:35:13.964 Controller ID: 65535 (0xffff) 00:35:13.964 Admin Max SQ Size: 32 00:35:13.964 Transport Service Identifier: 4420 00:35:13.964 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:35:13.964 Transport Address: 10.0.0.1 00:35:13.964 Discovery Log Entry 1 00:35:13.964 ---------------------- 00:35:13.964 Transport Type: 3 (TCP) 00:35:13.964 Address Family: 1 (IPv4) 00:35:13.964 Subsystem Type: 2 (NVM Subsystem) 00:35:13.964 Entry Flags: 00:35:13.964 Duplicate Returned Information: 0 00:35:13.964 Explicit Persistent Connection Support for Discovery: 0 00:35:13.964 Transport Requirements: 00:35:13.964 Secure Channel: Not Specified 00:35:13.964 Port ID: 1 (0x0001) 00:35:13.964 Controller ID: 65535 (0xffff) 00:35:13.964 Admin Max SQ Size: 32 00:35:13.964 Transport Service Identifier: 4420 00:35:13.964 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:35:13.964 Transport Address: 10.0.0.1 00:35:13.964 02:55:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:14.223 get_feature(0x01) failed 00:35:14.223 get_feature(0x02) failed 00:35:14.223 get_feature(0x04) failed 00:35:14.223 ===================================================== 00:35:14.223 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:14.223 ===================================================== 00:35:14.223 Controller Capabilities/Features 00:35:14.223 ================================ 00:35:14.223 Vendor ID: 0000 00:35:14.223 Subsystem Vendor ID: 
0000 00:35:14.223 Serial Number: 6d52feb09724a8c1593d 00:35:14.223 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:35:14.223 Firmware Version: 6.8.9-20 00:35:14.223 Recommended Arb Burst: 6 00:35:14.223 IEEE OUI Identifier: 00 00 00 00:35:14.223 Multi-path I/O 00:35:14.223 May have multiple subsystem ports: Yes 00:35:14.223 May have multiple controllers: Yes 00:35:14.223 Associated with SR-IOV VF: No 00:35:14.223 Max Data Transfer Size: Unlimited 00:35:14.223 Max Number of Namespaces: 1024 00:35:14.223 Max Number of I/O Queues: 128 00:35:14.223 NVMe Specification Version (VS): 1.3 00:35:14.223 NVMe Specification Version (Identify): 1.3 00:35:14.223 Maximum Queue Entries: 1024 00:35:14.223 Contiguous Queues Required: No 00:35:14.223 Arbitration Mechanisms Supported 00:35:14.223 Weighted Round Robin: Not Supported 00:35:14.223 Vendor Specific: Not Supported 00:35:14.223 Reset Timeout: 7500 ms 00:35:14.223 Doorbell Stride: 4 bytes 00:35:14.223 NVM Subsystem Reset: Not Supported 00:35:14.223 Command Sets Supported 00:35:14.223 NVM Command Set: Supported 00:35:14.223 Boot Partition: Not Supported 00:35:14.223 Memory Page Size Minimum: 4096 bytes 00:35:14.223 Memory Page Size Maximum: 4096 bytes 00:35:14.223 Persistent Memory Region: Not Supported 00:35:14.223 Optional Asynchronous Events Supported 00:35:14.223 Namespace Attribute Notices: Supported 00:35:14.223 Firmware Activation Notices: Not Supported 00:35:14.223 ANA Change Notices: Supported 00:35:14.223 PLE Aggregate Log Change Notices: Not Supported 00:35:14.223 LBA Status Info Alert Notices: Not Supported 00:35:14.223 EGE Aggregate Log Change Notices: Not Supported 00:35:14.223 Normal NVM Subsystem Shutdown event: Not Supported 00:35:14.223 Zone Descriptor Change Notices: Not Supported 00:35:14.223 Discovery Log Change Notices: Not Supported 00:35:14.223 Controller Attributes 00:35:14.223 128-bit Host Identifier: Supported 00:35:14.223 Non-Operational Permissive Mode: Not Supported 00:35:14.223 NVM Sets: Not 
Supported 00:35:14.223 Read Recovery Levels: Not Supported 00:35:14.223 Endurance Groups: Not Supported 00:35:14.223 Predictable Latency Mode: Not Supported 00:35:14.223 Traffic Based Keep ALive: Supported 00:35:14.223 Namespace Granularity: Not Supported 00:35:14.223 SQ Associations: Not Supported 00:35:14.223 UUID List: Not Supported 00:35:14.223 Multi-Domain Subsystem: Not Supported 00:35:14.223 Fixed Capacity Management: Not Supported 00:35:14.223 Variable Capacity Management: Not Supported 00:35:14.223 Delete Endurance Group: Not Supported 00:35:14.223 Delete NVM Set: Not Supported 00:35:14.223 Extended LBA Formats Supported: Not Supported 00:35:14.223 Flexible Data Placement Supported: Not Supported 00:35:14.223 00:35:14.223 Controller Memory Buffer Support 00:35:14.223 ================================ 00:35:14.223 Supported: No 00:35:14.223 00:35:14.223 Persistent Memory Region Support 00:35:14.223 ================================ 00:35:14.223 Supported: No 00:35:14.223 00:35:14.223 Admin Command Set Attributes 00:35:14.223 ============================ 00:35:14.223 Security Send/Receive: Not Supported 00:35:14.223 Format NVM: Not Supported 00:35:14.223 Firmware Activate/Download: Not Supported 00:35:14.223 Namespace Management: Not Supported 00:35:14.223 Device Self-Test: Not Supported 00:35:14.223 Directives: Not Supported 00:35:14.223 NVMe-MI: Not Supported 00:35:14.223 Virtualization Management: Not Supported 00:35:14.223 Doorbell Buffer Config: Not Supported 00:35:14.223 Get LBA Status Capability: Not Supported 00:35:14.223 Command & Feature Lockdown Capability: Not Supported 00:35:14.223 Abort Command Limit: 4 00:35:14.223 Async Event Request Limit: 4 00:35:14.223 Number of Firmware Slots: N/A 00:35:14.223 Firmware Slot 1 Read-Only: N/A 00:35:14.223 Firmware Activation Without Reset: N/A 00:35:14.223 Multiple Update Detection Support: N/A 00:35:14.223 Firmware Update Granularity: No Information Provided 00:35:14.223 Per-Namespace SMART Log: Yes 
00:35:14.223 Asymmetric Namespace Access Log Page: Supported 00:35:14.223 ANA Transition Time : 10 sec 00:35:14.223 00:35:14.223 Asymmetric Namespace Access Capabilities 00:35:14.223 ANA Optimized State : Supported 00:35:14.223 ANA Non-Optimized State : Supported 00:35:14.223 ANA Inaccessible State : Supported 00:35:14.223 ANA Persistent Loss State : Supported 00:35:14.223 ANA Change State : Supported 00:35:14.223 ANAGRPID is not changed : No 00:35:14.223 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:35:14.223 00:35:14.223 ANA Group Identifier Maximum : 128 00:35:14.223 Number of ANA Group Identifiers : 128 00:35:14.223 Max Number of Allowed Namespaces : 1024 00:35:14.223 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:35:14.223 Command Effects Log Page: Supported 00:35:14.223 Get Log Page Extended Data: Supported 00:35:14.223 Telemetry Log Pages: Not Supported 00:35:14.223 Persistent Event Log Pages: Not Supported 00:35:14.223 Supported Log Pages Log Page: May Support 00:35:14.223 Commands Supported & Effects Log Page: Not Supported 00:35:14.223 Feature Identifiers & Effects Log Page:May Support 00:35:14.223 NVMe-MI Commands & Effects Log Page: May Support 00:35:14.223 Data Area 4 for Telemetry Log: Not Supported 00:35:14.223 Error Log Page Entries Supported: 128 00:35:14.223 Keep Alive: Supported 00:35:14.223 Keep Alive Granularity: 1000 ms 00:35:14.223 00:35:14.223 NVM Command Set Attributes 00:35:14.223 ========================== 00:35:14.223 Submission Queue Entry Size 00:35:14.223 Max: 64 00:35:14.223 Min: 64 00:35:14.223 Completion Queue Entry Size 00:35:14.223 Max: 16 00:35:14.223 Min: 16 00:35:14.223 Number of Namespaces: 1024 00:35:14.223 Compare Command: Not Supported 00:35:14.223 Write Uncorrectable Command: Not Supported 00:35:14.223 Dataset Management Command: Supported 00:35:14.223 Write Zeroes Command: Supported 00:35:14.223 Set Features Save Field: Not Supported 00:35:14.223 Reservations: Not Supported 00:35:14.223 Timestamp: Not Supported 
00:35:14.223 Copy: Not Supported 00:35:14.223 Volatile Write Cache: Present 00:35:14.223 Atomic Write Unit (Normal): 1 00:35:14.223 Atomic Write Unit (PFail): 1 00:35:14.223 Atomic Compare & Write Unit: 1 00:35:14.223 Fused Compare & Write: Not Supported 00:35:14.223 Scatter-Gather List 00:35:14.223 SGL Command Set: Supported 00:35:14.223 SGL Keyed: Not Supported 00:35:14.223 SGL Bit Bucket Descriptor: Not Supported 00:35:14.223 SGL Metadata Pointer: Not Supported 00:35:14.223 Oversized SGL: Not Supported 00:35:14.223 SGL Metadata Address: Not Supported 00:35:14.223 SGL Offset: Supported 00:35:14.223 Transport SGL Data Block: Not Supported 00:35:14.223 Replay Protected Memory Block: Not Supported 00:35:14.223 00:35:14.223 Firmware Slot Information 00:35:14.224 ========================= 00:35:14.224 Active slot: 0 00:35:14.224 00:35:14.224 Asymmetric Namespace Access 00:35:14.224 =========================== 00:35:14.224 Change Count : 0 00:35:14.224 Number of ANA Group Descriptors : 1 00:35:14.224 ANA Group Descriptor : 0 00:35:14.224 ANA Group ID : 1 00:35:14.224 Number of NSID Values : 1 00:35:14.224 Change Count : 0 00:35:14.224 ANA State : 1 00:35:14.224 Namespace Identifier : 1 00:35:14.224 00:35:14.224 Commands Supported and Effects 00:35:14.224 ============================== 00:35:14.224 Admin Commands 00:35:14.224 -------------- 00:35:14.224 Get Log Page (02h): Supported 00:35:14.224 Identify (06h): Supported 00:35:14.224 Abort (08h): Supported 00:35:14.224 Set Features (09h): Supported 00:35:14.224 Get Features (0Ah): Supported 00:35:14.224 Asynchronous Event Request (0Ch): Supported 00:35:14.224 Keep Alive (18h): Supported 00:35:14.224 I/O Commands 00:35:14.224 ------------ 00:35:14.224 Flush (00h): Supported 00:35:14.224 Write (01h): Supported LBA-Change 00:35:14.224 Read (02h): Supported 00:35:14.224 Write Zeroes (08h): Supported LBA-Change 00:35:14.224 Dataset Management (09h): Supported 00:35:14.224 00:35:14.224 Error Log 00:35:14.224 ========= 
00:35:14.224 Entry: 0 00:35:14.224 Error Count: 0x3 00:35:14.224 Submission Queue Id: 0x0 00:35:14.224 Command Id: 0x5 00:35:14.224 Phase Bit: 0 00:35:14.224 Status Code: 0x2 00:35:14.224 Status Code Type: 0x0 00:35:14.224 Do Not Retry: 1 00:35:14.224 Error Location: 0x28 00:35:14.224 LBA: 0x0 00:35:14.224 Namespace: 0x0 00:35:14.224 Vendor Log Page: 0x0 00:35:14.224 ----------- 00:35:14.224 Entry: 1 00:35:14.224 Error Count: 0x2 00:35:14.224 Submission Queue Id: 0x0 00:35:14.224 Command Id: 0x5 00:35:14.224 Phase Bit: 0 00:35:14.224 Status Code: 0x2 00:35:14.224 Status Code Type: 0x0 00:35:14.224 Do Not Retry: 1 00:35:14.224 Error Location: 0x28 00:35:14.224 LBA: 0x0 00:35:14.224 Namespace: 0x0 00:35:14.224 Vendor Log Page: 0x0 00:35:14.224 ----------- 00:35:14.224 Entry: 2 00:35:14.224 Error Count: 0x1 00:35:14.224 Submission Queue Id: 0x0 00:35:14.224 Command Id: 0x4 00:35:14.224 Phase Bit: 0 00:35:14.224 Status Code: 0x2 00:35:14.224 Status Code Type: 0x0 00:35:14.224 Do Not Retry: 1 00:35:14.224 Error Location: 0x28 00:35:14.224 LBA: 0x0 00:35:14.224 Namespace: 0x0 00:35:14.224 Vendor Log Page: 0x0 00:35:14.224 00:35:14.224 Number of Queues 00:35:14.224 ================ 00:35:14.224 Number of I/O Submission Queues: 128 00:35:14.224 Number of I/O Completion Queues: 128 00:35:14.224 00:35:14.224 ZNS Specific Controller Data 00:35:14.224 ============================ 00:35:14.224 Zone Append Size Limit: 0 00:35:14.224 00:35:14.224 00:35:14.224 Active Namespaces 00:35:14.224 ================= 00:35:14.224 get_feature(0x05) failed 00:35:14.224 Namespace ID:1 00:35:14.224 Command Set Identifier: NVM (00h) 00:35:14.224 Deallocate: Supported 00:35:14.224 Deallocated/Unwritten Error: Not Supported 00:35:14.224 Deallocated Read Value: Unknown 00:35:14.224 Deallocate in Write Zeroes: Not Supported 00:35:14.224 Deallocated Guard Field: 0xFFFF 00:35:14.224 Flush: Supported 00:35:14.224 Reservation: Not Supported 00:35:14.224 Namespace Sharing Capabilities: Multiple 
Controllers 00:35:14.224 Size (in LBAs): 1953525168 (931GiB) 00:35:14.224 Capacity (in LBAs): 1953525168 (931GiB) 00:35:14.224 Utilization (in LBAs): 1953525168 (931GiB) 00:35:14.224 UUID: e9b5df88-dc7a-45e0-926d-c37c9ada00d5 00:35:14.224 Thin Provisioning: Not Supported 00:35:14.224 Per-NS Atomic Units: Yes 00:35:14.224 Atomic Boundary Size (Normal): 0 00:35:14.224 Atomic Boundary Size (PFail): 0 00:35:14.224 Atomic Boundary Offset: 0 00:35:14.224 NGUID/EUI64 Never Reused: No 00:35:14.224 ANA group ID: 1 00:35:14.224 Namespace Write Protected: No 00:35:14.224 Number of LBA Formats: 1 00:35:14.224 Current LBA Format: LBA Format #00 00:35:14.224 LBA Format #00: Data Size: 512 Metadata Size: 0 00:35:14.224 00:35:14.224 02:55:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:35:14.224 02:55:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:14.224 02:55:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:35:14.224 02:55:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:14.224 02:55:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:35:14.224 02:55:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:14.224 02:55:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:14.224 rmmod nvme_tcp 00:35:14.224 rmmod nvme_fabrics 00:35:14.224 02:55:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:14.224 02:55:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:35:14.224 02:55:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:35:14.224 02:55:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 
00:35:14.224 02:55:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:14.224 02:55:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:14.224 02:55:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:14.224 02:55:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:35:14.224 02:55:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:35:14.224 02:55:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:14.224 02:55:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:35:14.224 02:55:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:14.224 02:55:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:14.224 02:55:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:14.224 02:55:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:14.224 02:55:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:16.126 02:55:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:16.126 02:55:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:35:16.126 02:55:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:35:16.126 02:55:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:35:16.383 02:55:24 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:16.383 02:55:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:16.383 02:55:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:16.383 02:55:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:16.383 02:55:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:35:16.383 02:55:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:35:16.383 02:55:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:17.318 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:17.318 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:17.318 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:17.318 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:17.318 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:17.318 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:17.319 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:17.319 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:17.319 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:17.319 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:17.577 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:17.577 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:17.577 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:17.577 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:17.577 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:17.577 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 
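The clean_kernel_target trace above (nvmf/common.sh@712-723) tears the target down in reverse order: disable the namespace, unlink the port, remove the configfs directories innermost-first, then unload the modules. A dry-run sketch mirroring that order; as with the setup, the `enable` attribute name is an assumption (the xtrace shows only `echo 0`) and commands are printed, not executed.

```shell
# Dry-run sketch of clean_kernel_target as traced above; nothing is executed.
NQN=nqn.2016-06.io.spdk:testnqn
SUBSYS=/sys/kernel/config/nvmet/subsystems/$NQN
PORT=/sys/kernel/config/nvmet/ports/1
PLAN=

plan() { PLAN="${PLAN}$*
"; }

plan "echo 0 > $SUBSYS/namespaces/1/enable"   # assumed attribute name
plan "rm -f $PORT/subsystems/$NQN"            # unlink port -> subsystem
plan "rmdir $SUBSYS/namespaces/1"             # innermost directory first
plan "rmdir $PORT"
plan "rmdir $SUBSYS"
plan "modprobe -r nvmet_tcp nvmet"
printf '%s' "$PLAN"
```

The ordering matters: configfs refuses `rmdir` on a subsystem that still has namespaces or is still linked into a port, which is why the unlink and namespace removal come first.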
00:35:18.511 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:35:18.511 00:35:18.511 real 0m9.662s 00:35:18.511 user 0m2.184s 00:35:18.511 sys 0m3.509s 00:35:18.511 02:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:18.511 02:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:35:18.511 ************************************ 00:35:18.511 END TEST nvmf_identify_kernel_target 00:35:18.511 ************************************ 00:35:18.512 02:55:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:35:18.512 02:55:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:18.512 02:55:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:18.512 02:55:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.512 ************************************ 00:35:18.512 START TEST nvmf_auth_host 00:35:18.512 ************************************ 00:35:18.512 02:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:35:18.512 * Looking for test storage... 
00:35:18.512 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:18.512 02:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:18.512 02:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:35:18.512 02:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:18.771 02:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:18.771 02:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:18.771 02:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:18.771 02:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:18.771 02:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:35:18.771 02:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:35:18.771 02:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:35:18.771 02:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:35:18.771 02:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:35:18.771 02:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:35:18.771 02:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:35:18.771 02:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:18.771 02:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:35:18.771 02:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:35:18.771 02:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:18.771 02:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:18.771 02:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:35:18.771 02:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:35:18.771 02:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:18.771 02:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:35:18.771 02:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:35:18.771 02:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:35:18.771 02:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:35:18.771 02:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:18.771 02:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:35:18.771 02:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:35:18.771 02:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:18.771 02:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:18.771 02:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:35:18.771 02:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:18.771 02:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:18.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:18.771 --rc genhtml_branch_coverage=1 00:35:18.771 --rc genhtml_function_coverage=1 00:35:18.771 --rc genhtml_legend=1 00:35:18.771 --rc geninfo_all_blocks=1 00:35:18.771 --rc geninfo_unexecuted_blocks=1 00:35:18.771 00:35:18.771 ' 00:35:18.772 02:55:26 
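The lt/cmp_versions trace above (scripts/common.sh@333-368) splits two dot-separated versions on `.` and compares them field by field, treating a missing field as 0. A simplified sketch of that logic, assuming purely numeric components and at most three fields (the real helper handles arbitrary lengths):

```shell
# ver_lt A B: true (exit 0) when version A sorts before version B.
# Simplified sketch of the cmp_versions logic traced above.
ver_lt() {
    IFS=. read -r a1 a2 a3 <<EOF
$1
EOF
    IFS=. read -r b1 b2 b3 <<EOF
$2
EOF
    for i in 1 2 3; do
        # missing components compare as 0, so "2" == "2.0.0"
        eval "a=\${a$i:-0} b=\${b$i:-0}"
        [ "$a" -lt "$b" ] && return 0
        [ "$a" -gt "$b" ] && return 1
    done
    return 1   # equal is not less-than
}

ver_lt 1.15 2     && echo "1.15 < 2"
ver_lt 2.39.2 2.40 && echo "2.39.2 < 2.40"
```

This is why `lt 1.15 2` in the trace succeeds and the lcov branch is taken: 1 < 2 decides the comparison on the first field, so the 15 never masks it the way a plain string compare would.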
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:18.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:18.772 --rc genhtml_branch_coverage=1 00:35:18.772 --rc genhtml_function_coverage=1 00:35:18.772 --rc genhtml_legend=1 00:35:18.772 --rc geninfo_all_blocks=1 00:35:18.772 --rc geninfo_unexecuted_blocks=1 00:35:18.772 00:35:18.772 ' 00:35:18.772 02:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:18.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:18.772 --rc genhtml_branch_coverage=1 00:35:18.772 --rc genhtml_function_coverage=1 00:35:18.772 --rc genhtml_legend=1 00:35:18.772 --rc geninfo_all_blocks=1 00:35:18.772 --rc geninfo_unexecuted_blocks=1 00:35:18.772 00:35:18.772 ' 00:35:18.772 02:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:18.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:18.772 --rc genhtml_branch_coverage=1 00:35:18.772 --rc genhtml_function_coverage=1 00:35:18.772 --rc genhtml_legend=1 00:35:18.772 --rc geninfo_all_blocks=1 00:35:18.772 --rc geninfo_unexecuted_blocks=1 00:35:18.772 00:35:18.772 ' 00:35:18.772 02:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:18.772 02:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:35:18.772 02:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:18.772 02:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:18.772 02:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:18.772 02:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:18.772 02:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:35:18.772 02:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:18.772 02:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:18.772 02:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:18.772 02:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:18.772 02:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:18.772 02:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:18.772 02:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:18.772 02:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:18.772 02:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:18.772 02:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:18.772 02:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:18.772 02:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:18.772 02:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:35:18.772 02:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:18.772 02:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:18.772 02:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:18.772 02:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:18.772 02:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:18.772 02:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:18.772 02:55:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:35:18.772 02:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:18.772 02:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:35:18.772 02:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:18.772 02:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:18.772 02:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:18.772 02:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:18.772 02:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:18.772 02:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:18.772 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:18.772 02:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:18.772 02:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:18.772 02:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:18.772 02:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:35:18.772 02:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:35:18.772 02:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:35:18.772 02:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:35:18.772 02:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:18.772 02:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:35:18.772 02:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:35:18.772 02:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:35:18.772 02:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:35:18.772 02:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:18.772 02:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:18.772 02:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:18.772 02:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:18.772 02:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:18.772 02:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:18.772 02:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:18.772 02:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:18.772 02:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:18.772 02:55:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:18.772 02:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:35:18.772 02:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.674 02:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:20.674 02:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:35:20.674 02:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:20.674 02:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:20.674 02:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:20.674 02:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:20.674 02:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:20.674 02:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:35:20.674 02:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:20.674 02:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:35:20.674 02:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:35:20.674 02:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:35:20.674 02:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:35:20.674 02:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:35:20.674 02:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:35:20.674 02:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:20.674 02:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:20.674 02:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:20.674 02:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:20.674 02:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:20.674 02:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:20.674 02:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:20.674 02:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:20.674 02:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:20.674 02:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:20.674 02:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:20.674 02:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:20.674 02:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:20.674 02:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:20.674 02:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:20.674 02:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:20.674 02:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:20.674 02:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:20.674 02:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:20.674 02:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:20.674 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:20.674 02:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:20.674 02:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:20.674 02:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:20.674 02:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:20.674 02:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:20.674 02:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:20.675 02:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:20.675 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:20.675 02:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:20.675 02:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:20.675 02:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:20.675 02:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:20.675 02:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:20.675 02:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:20.675 02:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:20.675 02:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:20.675 02:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:35:20.675 02:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:20.675 02:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:20.675 02:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:20.675 02:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:20.675 02:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:20.675 02:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:20.675 02:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:20.675 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:20.675 02:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:20.675 02:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:20.675 02:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:20.675 02:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:20.675 02:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:20.675 02:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:20.675 02:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:20.675 02:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:20.675 02:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:20.675 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:20.675 02:55:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:20.675 02:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:20.675 02:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:35:20.675 02:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:20.675 02:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:20.675 02:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:20.675 02:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:20.675 02:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:20.675 02:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:20.675 02:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:20.675 02:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:20.675 02:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:20.675 02:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:20.675 02:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:20.675 02:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:20.675 02:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:20.675 02:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:20.675 02:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:20.675 02:55:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:20.675 02:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:20.675 02:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:20.675 02:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:20.933 02:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:20.933 02:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:20.933 02:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:20.933 02:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:20.933 02:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:20.933 02:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:20.933 02:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:20.933 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:20.933 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:35:20.933 00:35:20.933 --- 10.0.0.2 ping statistics --- 00:35:20.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:20.933 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:35:20.933 02:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:20.933 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:20.933 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:35:20.933 00:35:20.933 --- 10.0.0.1 ping statistics --- 00:35:20.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:20.933 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:35:20.933 02:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:20.933 02:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:35:20.933 02:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:20.933 02:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:20.933 02:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:20.933 02:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:20.933 02:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:20.933 02:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:20.933 02:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:20.933 02:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:35:20.933 02:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:20.933 02:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:20.933 02:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.933 02:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=3119150 00:35:20.933 02:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:35:20.933 02:55:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 3119150 00:35:20.933 02:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 3119150 ']' 00:35:20.933 02:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:20.933 02:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:20.933 02:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:20.933 02:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:20.933 02:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.867 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:21.867 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:35:21.867 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:21.867 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:21.867 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.867 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:21.867 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:35:21.867 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:35:21.867 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:21.867 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:21.867 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:21.867 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:35:21.867 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:35:21.867 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:21.867 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b13980134a2d8a6d2df3365c9d3c8b0f 00:35:21.867 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:35:21.867 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.N97 00:35:21.867 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b13980134a2d8a6d2df3365c9d3c8b0f 0 00:35:21.867 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b13980134a2d8a6d2df3365c9d3c8b0f 0 00:35:21.867 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:21.867 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:21.867 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b13980134a2d8a6d2df3365c9d3c8b0f 00:35:21.867 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:35:21.867 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:35:22.126 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.N97 00:35:22.126 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.N97 00:35:22.126 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.N97 00:35:22.126 02:55:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:35:22.126 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:22.126 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:22.126 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:22.126 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:35:22.126 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:35:22.126 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:35:22.126 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=57f3ed0064bbb228dee43aa9aa56f8bbd444d1b432f48bae224cb72108e37fbe 00:35:22.126 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:35:22.126 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.SQa 00:35:22.126 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 57f3ed0064bbb228dee43aa9aa56f8bbd444d1b432f48bae224cb72108e37fbe 3 00:35:22.126 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 57f3ed0064bbb228dee43aa9aa56f8bbd444d1b432f48bae224cb72108e37fbe 3 00:35:22.126 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:22.126 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:22.126 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=57f3ed0064bbb228dee43aa9aa56f8bbd444d1b432f48bae224cb72108e37fbe 00:35:22.126 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:35:22.126 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
00:35:22.126 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.SQa 00:35:22.126 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.SQa 00:35:22.126 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.SQa 00:35:22.126 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:35:22.126 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:22.126 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:22.126 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:22.126 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:35:22.126 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:35:22.126 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:35:22.126 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=10819dbbeb1ec28477547119b0fd4ca828e5bcfcf89d98a6 00:35:22.126 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:35:22.126 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.R7A 00:35:22.126 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 10819dbbeb1ec28477547119b0fd4ca828e5bcfcf89d98a6 0 00:35:22.126 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 10819dbbeb1ec28477547119b0fd4ca828e5bcfcf89d98a6 0 00:35:22.126 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:22.126 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:22.126 02:55:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=10819dbbeb1ec28477547119b0fd4ca828e5bcfcf89d98a6 00:35:22.126 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:35:22.126 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:35:22.126 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.R7A 00:35:22.126 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.R7A 00:35:22.126 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.R7A 00:35:22.126 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:35:22.126 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:22.126 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:22.126 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:22.126 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:35:22.126 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:35:22.126 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:35:22.126 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=00ef145db3de821c6f348766f7f117378c10c43d50cdaed5 00:35:22.126 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:35:22.126 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.7Ok 00:35:22.126 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 00ef145db3de821c6f348766f7f117378c10c43d50cdaed5 2 00:35:22.126 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # 
format_key DHHC-1 00ef145db3de821c6f348766f7f117378c10c43d50cdaed5 2 00:35:22.126 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:22.127 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:22.127 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=00ef145db3de821c6f348766f7f117378c10c43d50cdaed5 00:35:22.127 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:35:22.127 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:35:22.127 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.7Ok 00:35:22.127 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.7Ok 00:35:22.127 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.7Ok 00:35:22.127 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:35:22.127 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:22.127 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:22.127 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:22.127 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:35:22.127 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:35:22.127 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:22.127 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=8da2e2d2c315abd35fada03043543572 00:35:22.127 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:35:22.127 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Zxs 00:35:22.127 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 8da2e2d2c315abd35fada03043543572 1 00:35:22.127 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 8da2e2d2c315abd35fada03043543572 1 00:35:22.127 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:22.127 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:22.127 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=8da2e2d2c315abd35fada03043543572 00:35:22.127 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:35:22.127 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:35:22.127 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Zxs 00:35:22.127 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Zxs 00:35:22.127 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.Zxs 00:35:22.127 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:35:22.127 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:22.127 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:22.127 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:22.127 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:35:22.127 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:35:22.127 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:22.127 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@755 -- # key=823ed16363a8da05770c8b08c79cbb8a 00:35:22.127 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:35:22.127 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.QXg 00:35:22.127 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 823ed16363a8da05770c8b08c79cbb8a 1 00:35:22.127 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 823ed16363a8da05770c8b08c79cbb8a 1 00:35:22.127 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:22.127 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:22.127 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=823ed16363a8da05770c8b08c79cbb8a 00:35:22.127 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:35:22.127 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:35:22.386 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.QXg 00:35:22.386 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.QXg 00:35:22.386 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.QXg 00:35:22.386 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:35:22.386 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:22.386 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:22.386 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:22.386 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:35:22.386 02:55:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:35:22.386 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:35:22.386 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=08d4a1b868020fa9e181e8226884609fcf5148244855e3e8 00:35:22.386 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:35:22.386 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.daZ 00:35:22.386 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 08d4a1b868020fa9e181e8226884609fcf5148244855e3e8 2 00:35:22.386 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 08d4a1b868020fa9e181e8226884609fcf5148244855e3e8 2 00:35:22.386 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:22.386 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:22.386 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=08d4a1b868020fa9e181e8226884609fcf5148244855e3e8 00:35:22.386 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:35:22.386 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:35:22.386 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.daZ 00:35:22.386 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.daZ 00:35:22.386 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.daZ 00:35:22.386 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:35:22.386 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:22.386 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:22.386 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:22.386 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:35:22.386 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:35:22.386 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:22.386 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=68cfbe9a73569a00cd6959e376328774 00:35:22.386 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:35:22.386 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.29i 00:35:22.386 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 68cfbe9a73569a00cd6959e376328774 0 00:35:22.386 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 68cfbe9a73569a00cd6959e376328774 0 00:35:22.386 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:22.386 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:22.386 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=68cfbe9a73569a00cd6959e376328774 00:35:22.386 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:35:22.386 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:35:22.386 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.29i 00:35:22.387 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.29i 00:35:22.387 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.29i 00:35:22.387 02:55:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:35:22.387 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:22.387 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:22.387 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:22.387 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:35:22.387 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:35:22.387 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:35:22.387 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=622f705bda72c40995dec71e198d2752e45f970a481da2c3679df1d1a3641257 00:35:22.387 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:35:22.387 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Txy 00:35:22.387 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 622f705bda72c40995dec71e198d2752e45f970a481da2c3679df1d1a3641257 3 00:35:22.387 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 622f705bda72c40995dec71e198d2752e45f970a481da2c3679df1d1a3641257 3 00:35:22.387 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:22.387 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:22.387 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=622f705bda72c40995dec71e198d2752e45f970a481da2c3679df1d1a3641257 00:35:22.387 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:35:22.387 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
00:35:22.387 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Txy 00:35:22.387 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Txy 00:35:22.387 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.Txy 00:35:22.387 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:35:22.387 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3119150 00:35:22.387 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 3119150 ']' 00:35:22.387 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:22.387 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:22.387 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:22.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:35:22.387 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:22.387 02:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.646 02:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:22.646 02:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:35:22.646 02:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:22.646 02:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.N97 00:35:22.646 02:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.646 02:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.646 02:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.646 02:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.SQa ]] 00:35:22.646 02:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.SQa 00:35:22.646 02:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.646 02:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.646 02:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.646 02:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:22.646 02:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.R7A 00:35:22.646 02:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.646 02:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:35:22.646 02:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.646 02:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.7Ok ]] 00:35:22.646 02:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.7Ok 00:35:22.646 02:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.646 02:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.646 02:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.646 02:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:22.646 02:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.Zxs 00:35:22.646 02:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.646 02:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.646 02:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.646 02:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.QXg ]] 00:35:22.646 02:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.QXg 00:35:22.646 02:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.646 02:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.646 02:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.646 02:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:22.646 02:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.daZ 00:35:22.646 02:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.646 02:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.646 02:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.646 02:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.29i ]] 00:35:22.646 02:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.29i 00:35:22.646 02:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.646 02:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.646 02:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.646 02:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:22.646 02:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.Txy 00:35:22.646 02:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.646 02:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.905 02:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.905 02:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:35:22.905 02:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:35:22.905 02:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:35:22.905 02:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:22.905 02:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:22.905 02:55:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:22.905 02:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:22.905 02:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:22.905 02:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:22.905 02:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:22.905 02:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:22.905 02:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:22.905 02:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:22.905 02:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:35:22.905 02:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:35:22.905 02:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:35:22.905 02:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:22.905 02:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:35:22.905 02:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:22.905 02:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:35:22.905 02:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:35:22.905 02:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:35:22.905 02:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:22.905 02:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:23.839 Waiting for block devices as requested 00:35:23.839 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:35:23.839 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:24.098 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:24.098 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:24.098 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:24.356 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:24.356 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:24.356 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:24.356 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:24.614 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:24.614 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:24.614 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:24.614 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:24.872 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:24.872 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:24.872 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:25.129 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:25.388 02:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:35:25.388 02:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:25.388 02:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:35:25.388 02:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:35:25.388 02:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:35:25.388 02:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:35:25.388 02:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:35:25.388 02:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:35:25.388 02:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:25.388 No valid GPT data, bailing 00:35:25.388 02:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:25.388 02:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:35:25.388 02:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:35:25.388 02:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:35:25.388 02:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:35:25.388 02:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:25.388 02:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:35:25.388 02:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:25.388 02:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:35:25.388 02:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:35:25.388 02:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:35:25.388 02:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:35:25.388 02:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 
-- # echo 10.0.0.1 00:35:25.389 02:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:35:25.389 02:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:35:25.389 02:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:35:25.389 02:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:25.389 02:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:35:25.647 00:35:25.647 Discovery Log Number of Records 2, Generation counter 2 00:35:25.647 =====Discovery Log Entry 0====== 00:35:25.647 trtype: tcp 00:35:25.647 adrfam: ipv4 00:35:25.647 subtype: current discovery subsystem 00:35:25.647 treq: not specified, sq flow control disable supported 00:35:25.647 portid: 1 00:35:25.647 trsvcid: 4420 00:35:25.647 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:25.647 traddr: 10.0.0.1 00:35:25.647 eflags: none 00:35:25.647 sectype: none 00:35:25.647 =====Discovery Log Entry 1====== 00:35:25.647 trtype: tcp 00:35:25.647 adrfam: ipv4 00:35:25.647 subtype: nvme subsystem 00:35:25.647 treq: not specified, sq flow control disable supported 00:35:25.647 portid: 1 00:35:25.647 trsvcid: 4420 00:35:25.647 subnqn: nqn.2024-02.io.spdk:cnode0 00:35:25.647 traddr: 10.0.0.1 00:35:25.647 eflags: none 00:35:25.647 sectype: none 00:35:25.647 02:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:35:25.647 02:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:35:25.647 02:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:35:25.647 02:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:35:25.647 02:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:25.647 02:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:25.647 02:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:25.648 02:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:25.648 02:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTA4MTlkYmJlYjFlYzI4NDc3NTQ3MTE5YjBmZDRjYTgyOGU1YmNmY2Y4OWQ5OGE2qB6Wqw==: 00:35:25.648 02:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDBlZjE0NWRiM2RlODIxYzZmMzQ4NzY2ZjdmMTE3Mzc4YzEwYzQzZDUwY2RhZWQ15VD1Dw==: 00:35:25.648 02:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:25.648 02:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:25.648 02:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTA4MTlkYmJlYjFlYzI4NDc3NTQ3MTE5YjBmZDRjYTgyOGU1YmNmY2Y4OWQ5OGE2qB6Wqw==: 00:35:25.648 02:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDBlZjE0NWRiM2RlODIxYzZmMzQ4NzY2ZjdmMTE3Mzc4YzEwYzQzZDUwY2RhZWQ15VD1Dw==: ]] 00:35:25.648 02:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDBlZjE0NWRiM2RlODIxYzZmMzQ4NzY2ZjdmMTE3Mzc4YzEwYzQzZDUwY2RhZWQ15VD1Dw==: 00:35:25.648 02:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:35:25.648 02:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:35:25.648 02:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:35:25.648 02:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:35:25.648 02:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:35:25.648 02:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:25.648 02:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:35:25.648 02:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:35:25.648 02:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:25.648 02:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:25.648 02:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:35:25.648 02:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.648 02:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.648 02:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.648 02:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:25.648 02:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:25.648 02:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:25.648 02:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:25.648 02:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:25.648 02:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:25.648 02:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:25.648 02:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:25.648 02:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:25.648 02:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:25.648 02:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:25.648 02:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:25.648 02:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.648 02:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.648 nvme0n1 00:35:25.648 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.648 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:25.648 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:25.648 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.648 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.648 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.648 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:25.648 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:25.648 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable
00:35:25.648 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:25.908 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:25.908 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:35:25.908 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:35:25.908 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:25.908 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0
00:35:25.908 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:25.908 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:35:25.908 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:35:25.908 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:35:25.908 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjEzOTgwMTM0YTJkOGE2ZDJkZjMzNjVjOWQzYzhiMGZoLVU0:
00:35:25.908 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTdmM2VkMDA2NGJiYjIyOGRlZTQzYWE5YWE1NmY4YmJkNDQ0ZDFiNDMyZjQ4YmFlMjI0Y2I3MjEwOGUzN2ZiZZEqtHQ=:
00:35:25.908 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:35:25.908 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:35:25.908 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjEzOTgwMTM0YTJkOGE2ZDJkZjMzNjVjOWQzYzhiMGZoLVU0:
00:35:25.908 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTdmM2VkMDA2NGJiYjIyOGRlZTQzYWE5YWE1NmY4YmJkNDQ0ZDFiNDMyZjQ4YmFlMjI0Y2I3MjEwOGUzN2ZiZZEqtHQ=: ]]
00:35:25.908 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NTdmM2VkMDA2NGJiYjIyOGRlZTQzYWE5YWE1NmY4YmJkNDQ0ZDFiNDMyZjQ4YmFlMjI0Y2I3MjEwOGUzN2ZiZZEqtHQ=:
00:35:25.908 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0
00:35:25.908 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:25.908 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:35:25.908 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:35:25.908 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:35:25.908 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:25.908 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:35:25.908 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:25.908 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:25.908 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:25.908 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:25.908 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:25.908 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:25.908 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:25.908 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:25.908 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:25.908 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
00:35:25.908 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:25.908 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:25.908 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:25.908 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:25.908 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:35:25.908 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:25.908 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:25.908 nvme0n1
00:35:25.908 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:25.908 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:25.908 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:25.908 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:25.908 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:25.908 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:25.908 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:25.908 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:25.908 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:25.908 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:25.908 02:55:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:25.908 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:25.908 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:35:25.908 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:25.908 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:35:25.908 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:35:25.908 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:35:25.908 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTA4MTlkYmJlYjFlYzI4NDc3NTQ3MTE5YjBmZDRjYTgyOGU1YmNmY2Y4OWQ5OGE2qB6Wqw==:
00:35:25.908 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDBlZjE0NWRiM2RlODIxYzZmMzQ4NzY2ZjdmMTE3Mzc4YzEwYzQzZDUwY2RhZWQ15VD1Dw==:
00:35:25.908 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:35:25.908 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:35:25.908 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTA4MTlkYmJlYjFlYzI4NDc3NTQ3MTE5YjBmZDRjYTgyOGU1YmNmY2Y4OWQ5OGE2qB6Wqw==:
00:35:25.908 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDBlZjE0NWRiM2RlODIxYzZmMzQ4NzY2ZjdmMTE3Mzc4YzEwYzQzZDUwY2RhZWQ15VD1Dw==: ]]
00:35:25.908 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDBlZjE0NWRiM2RlODIxYzZmMzQ4NzY2ZjdmMTE3Mzc4YzEwYzQzZDUwY2RhZWQ15VD1Dw==:
00:35:25.908 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1
00:35:25.908 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:25.908 
02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:35:25.908 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:35:25.908 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:35:25.908 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:25.908 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:35:25.908 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:25.908 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:25.908 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:25.908 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:25.908 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:25.908 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:25.908 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:25.908 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:25.908 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:25.908 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:25.908 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:25.908 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:25.908 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:25.908 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:25.908 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:35:25.908 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:25.908 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:26.167 nvme0n1
00:35:26.167 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:26.167 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:26.167 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:26.167 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:26.167 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:26.167 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:26.167 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:26.167 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:26.167 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:26.167 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:26.167 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:26.167 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:26.167 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2
00:35:26.167 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:26.167 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:35:26.167 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:35:26.167 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:35:26.167 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGRhMmUyZDJjMzE1YWJkMzVmYWRhMDMwNDM1NDM1NzLeN7uE:
00:35:26.167 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODIzZWQxNjM2M2E4ZGEwNTc3MGM4YjA4Yzc5Y2JiOGGpmDto:
00:35:26.167 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:35:26.167 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:35:26.167 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGRhMmUyZDJjMzE1YWJkMzVmYWRhMDMwNDM1NDM1NzLeN7uE:
00:35:26.167 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODIzZWQxNjM2M2E4ZGEwNTc3MGM4YjA4Yzc5Y2JiOGGpmDto: ]]
00:35:26.167 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODIzZWQxNjM2M2E4ZGEwNTc3MGM4YjA4Yzc5Y2JiOGGpmDto:
00:35:26.167 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2
00:35:26.167 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:26.167 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:35:26.167 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:35:26.167 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:35:26.167 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:26.167 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:35:26.167 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:26.167 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:26.167 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:26.167 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:26.167 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:26.167 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:26.167 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:26.167 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:26.167 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:26.167 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:26.167 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:26.167 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:26.167 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:26.167 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:26.167 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:35:26.167 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:26.167 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x
00:35:26.426 nvme0n1
00:35:26.426 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:26.426 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:26.426 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:26.426 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:26.426 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:26.426 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:26.426 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:26.426 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:26.426 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:26.426 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:26.426 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:26.426 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:26.426 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3
00:35:26.426 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:26.426 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:35:26.426 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:35:26.426 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:35:26.426 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MDhkNGExYjg2ODAyMGZhOWUxODFlODIyNjg4NDYwOWZjZjUxNDgyNDQ4NTVlM2U4tlD2ng==:
00:35:26.426 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjhjZmJlOWE3MzU2OWEwMGNkNjk1OWUzNzYzMjg3NzR9R3Qo:
00:35:26.426 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:35:26.426 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:35:26.426 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDhkNGExYjg2ODAyMGZhOWUxODFlODIyNjg4NDYwOWZjZjUxNDgyNDQ4NTVlM2U4tlD2ng==:
00:35:26.426 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjhjZmJlOWE3MzU2OWEwMGNkNjk1OWUzNzYzMjg3NzR9R3Qo: ]]
00:35:26.426 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjhjZmJlOWE3MzU2OWEwMGNkNjk1OWUzNzYzMjg3NzR9R3Qo:
00:35:26.426 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3
00:35:26.426 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:26.426 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:35:26.426 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:35:26.426 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:35:26.426 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:26.426 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:35:26.426 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:26.426 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:26.426 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:26.426 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:26.426 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:26.426 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:26.426 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:26.426 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:26.426 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:26.426 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:26.426 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:26.426 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:26.426 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:26.426 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:26.426 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:35:26.426 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:26.426 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:26.685 nvme0n1
00:35:26.685 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:26.685 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:26.685 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 
-- # xtrace_disable
00:35:26.685 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:26.685 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:26.685 02:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:26.685 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:26.685 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:26.685 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:26.685 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:26.685 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:26.685 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:26.685 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4
00:35:26.685 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:26.685 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:35:26.685 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:35:26.685 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:35:26.685 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjIyZjcwNWJkYTcyYzQwOTk1ZGVjNzFlMTk4ZDI3NTJlNDVmOTcwYTQ4MWRhMmMzNjc5ZGYxZDFhMzY0MTI1NwmpRBo=:
00:35:26.685 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:35:26.685 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:35:26.685 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:35:26.685 02:55:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjIyZjcwNWJkYTcyYzQwOTk1ZGVjNzFlMTk4ZDI3NTJlNDVmOTcwYTQ4MWRhMmMzNjc5ZGYxZDFhMzY0MTI1NwmpRBo=:
00:35:26.685 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:35:26.685 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4
00:35:26.685 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:26.685 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:35:26.685 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:35:26.685 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:35:26.685 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:26.685 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:35:26.685 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:26.685 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:26.685 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:26.685 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:26.685 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:26.685 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:26.685 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:26.685 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:26.685 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:26.685 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:26.685 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:26.685 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:26.685 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:26.685 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:26.685 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:35:26.685 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:26.685 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:26.943 nvme0n1
00:35:26.943 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:26.943 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:26.943 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:26.943 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:26.943 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:26.943 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:26.943 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:26.943 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:26.944 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:26.944 
02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:26.944 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:26.944 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:35:26.944 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:26.944 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0
00:35:26.944 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:26.944 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:35:26.944 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:35:26.944 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:35:26.944 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjEzOTgwMTM0YTJkOGE2ZDJkZjMzNjVjOWQzYzhiMGZoLVU0:
00:35:26.944 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTdmM2VkMDA2NGJiYjIyOGRlZTQzYWE5YWE1NmY4YmJkNDQ0ZDFiNDMyZjQ4YmFlMjI0Y2I3MjEwOGUzN2ZiZZEqtHQ=:
00:35:26.944 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:35:26.944 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:35:26.944 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjEzOTgwMTM0YTJkOGE2ZDJkZjMzNjVjOWQzYzhiMGZoLVU0:
00:35:26.944 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTdmM2VkMDA2NGJiYjIyOGRlZTQzYWE5YWE1NmY4YmJkNDQ0ZDFiNDMyZjQ4YmFlMjI0Y2I3MjEwOGUzN2ZiZZEqtHQ=: ]]
00:35:26.944 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTdmM2VkMDA2NGJiYjIyOGRlZTQzYWE5YWE1NmY4YmJkNDQ0ZDFiNDMyZjQ4YmFlMjI0Y2I3MjEwOGUzN2ZiZZEqtHQ=:
00:35:26.944 
02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0
00:35:26.944 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:26.944 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:35:26.944 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:35:26.944 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:35:26.944 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:26.944 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:35:26.944 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:26.944 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:26.944 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:26.944 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:26.944 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:26.944 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:26.944 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:26.944 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:26.944 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:26.944 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:26.944 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:26.944 02:55:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:26.944 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:26.944 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:26.944 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:35:26.944 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:26.944 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:27.203 nvme0n1
00:35:27.203 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:27.203 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:27.203 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:27.203 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:27.203 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:27.203 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:27.203 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:27.203 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:27.203 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:27.203 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:27.203 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:27.203 02:55:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:27.203 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:35:27.203 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:27.203 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:27.203 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:27.203 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:27.203 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTA4MTlkYmJlYjFlYzI4NDc3NTQ3MTE5YjBmZDRjYTgyOGU1YmNmY2Y4OWQ5OGE2qB6Wqw==: 00:35:27.203 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDBlZjE0NWRiM2RlODIxYzZmMzQ4NzY2ZjdmMTE3Mzc4YzEwYzQzZDUwY2RhZWQ15VD1Dw==: 00:35:27.203 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:27.203 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:27.203 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTA4MTlkYmJlYjFlYzI4NDc3NTQ3MTE5YjBmZDRjYTgyOGU1YmNmY2Y4OWQ5OGE2qB6Wqw==: 00:35:27.203 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDBlZjE0NWRiM2RlODIxYzZmMzQ4NzY2ZjdmMTE3Mzc4YzEwYzQzZDUwY2RhZWQ15VD1Dw==: ]] 00:35:27.203 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDBlZjE0NWRiM2RlODIxYzZmMzQ4NzY2ZjdmMTE3Mzc4YzEwYzQzZDUwY2RhZWQ15VD1Dw==: 00:35:27.203 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:35:27.203 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:27.203 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:27.203 02:55:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:27.203 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:27.203 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:27.203 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:27.203 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.203 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.203 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.203 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:27.203 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:27.203 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:27.203 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:27.203 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:27.203 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:27.203 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:27.203 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:27.203 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:27.203 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:27.203 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:27.203 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:27.203 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.203 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.462 nvme0n1 00:35:27.462 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.462 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:27.462 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:27.462 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.462 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.462 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.462 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:27.462 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:27.462 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.462 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.462 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.462 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:27.462 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:35:27.462 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:27.462 02:55:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:27.462 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:27.462 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:27.462 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGRhMmUyZDJjMzE1YWJkMzVmYWRhMDMwNDM1NDM1NzLeN7uE: 00:35:27.462 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODIzZWQxNjM2M2E4ZGEwNTc3MGM4YjA4Yzc5Y2JiOGGpmDto: 00:35:27.462 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:27.462 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:27.462 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGRhMmUyZDJjMzE1YWJkMzVmYWRhMDMwNDM1NDM1NzLeN7uE: 00:35:27.462 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODIzZWQxNjM2M2E4ZGEwNTc3MGM4YjA4Yzc5Y2JiOGGpmDto: ]] 00:35:27.462 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODIzZWQxNjM2M2E4ZGEwNTc3MGM4YjA4Yzc5Y2JiOGGpmDto: 00:35:27.462 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:35:27.462 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:27.462 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:27.462 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:27.462 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:27.462 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:27.462 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:35:27.462 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.462 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.462 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.462 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:27.462 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:27.462 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:27.462 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:27.462 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:27.462 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:27.462 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:27.462 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:27.462 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:27.462 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:27.462 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:27.462 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:27.462 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.462 02:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.721 nvme0n1 00:35:27.721 02:55:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.721 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:27.721 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:27.721 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.721 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.721 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.721 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:27.721 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:27.721 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.721 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.721 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.721 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:27.721 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:35:27.721 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:27.721 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:27.721 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:27.721 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:27.721 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDhkNGExYjg2ODAyMGZhOWUxODFlODIyNjg4NDYwOWZjZjUxNDgyNDQ4NTVlM2U4tlD2ng==: 00:35:27.721 02:55:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjhjZmJlOWE3MzU2OWEwMGNkNjk1OWUzNzYzMjg3NzR9R3Qo: 00:35:27.721 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:27.721 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:27.721 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDhkNGExYjg2ODAyMGZhOWUxODFlODIyNjg4NDYwOWZjZjUxNDgyNDQ4NTVlM2U4tlD2ng==: 00:35:27.721 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjhjZmJlOWE3MzU2OWEwMGNkNjk1OWUzNzYzMjg3NzR9R3Qo: ]] 00:35:27.721 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjhjZmJlOWE3MzU2OWEwMGNkNjk1OWUzNzYzMjg3NzR9R3Qo: 00:35:27.721 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:35:27.721 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:27.721 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:27.721 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:27.721 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:27.721 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:27.721 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:27.721 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.721 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.721 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.721 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:35:27.721 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:27.721 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:27.721 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:27.721 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:27.721 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:27.721 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:27.721 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:27.721 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:27.721 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:27.721 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:27.722 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:27.722 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.722 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.981 nvme0n1 00:35:27.981 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.981 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:27.981 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:27.981 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:35:27.981 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.981 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.981 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:27.981 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:27.981 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.981 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.981 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.981 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:27.981 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:35:27.981 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:27.981 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:27.981 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:27.981 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:27.981 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjIyZjcwNWJkYTcyYzQwOTk1ZGVjNzFlMTk4ZDI3NTJlNDVmOTcwYTQ4MWRhMmMzNjc5ZGYxZDFhMzY0MTI1NwmpRBo=: 00:35:27.981 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:27.981 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:27.981 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:27.981 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NjIyZjcwNWJkYTcyYzQwOTk1ZGVjNzFlMTk4ZDI3NTJlNDVmOTcwYTQ4MWRhMmMzNjc5ZGYxZDFhMzY0MTI1NwmpRBo=: 00:35:27.981 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:27.981 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:35:27.981 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:27.981 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:27.981 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:27.981 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:27.981 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:27.981 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:27.981 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.981 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.981 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.981 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:27.981 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:27.981 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:27.981 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:27.981 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:27.981 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:27.981 02:55:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:27.981 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:27.981 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:27.981 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:27.981 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:27.981 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:27.981 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.981 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.299 nvme0n1 00:35:28.299 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.299 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:28.299 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:28.299 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.299 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.299 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.299 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:28.299 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:28.299 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.299 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:35:28.299 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.299 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:28.299 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:28.299 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:35:28.299 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:28.299 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:28.299 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:28.299 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:28.299 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjEzOTgwMTM0YTJkOGE2ZDJkZjMzNjVjOWQzYzhiMGZoLVU0: 00:35:28.299 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTdmM2VkMDA2NGJiYjIyOGRlZTQzYWE5YWE1NmY4YmJkNDQ0ZDFiNDMyZjQ4YmFlMjI0Y2I3MjEwOGUzN2ZiZZEqtHQ=: 00:35:28.299 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:28.299 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:28.299 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjEzOTgwMTM0YTJkOGE2ZDJkZjMzNjVjOWQzYzhiMGZoLVU0: 00:35:28.299 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTdmM2VkMDA2NGJiYjIyOGRlZTQzYWE5YWE1NmY4YmJkNDQ0ZDFiNDMyZjQ4YmFlMjI0Y2I3MjEwOGUzN2ZiZZEqtHQ=: ]] 00:35:28.299 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTdmM2VkMDA2NGJiYjIyOGRlZTQzYWE5YWE1NmY4YmJkNDQ0ZDFiNDMyZjQ4YmFlMjI0Y2I3MjEwOGUzN2ZiZZEqtHQ=: 00:35:28.299 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:35:28.299 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:28.299 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:28.299 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:28.299 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:28.299 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:28.299 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:28.299 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.299 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.299 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.299 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:28.299 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:28.299 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:28.299 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:28.299 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:28.299 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:28.299 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:28.299 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:28.299 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:35:28.299 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:28.299 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:28.299 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:28.299 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.299 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.591 nvme0n1 00:35:28.591 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.591 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:28.591 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:28.591 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.591 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.591 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.591 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:28.591 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:28.591 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.591 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.591 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.591 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
00:35:28.591 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:35:28.591 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:28.591 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:28.591 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:28.591 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:28.591 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTA4MTlkYmJlYjFlYzI4NDc3NTQ3MTE5YjBmZDRjYTgyOGU1YmNmY2Y4OWQ5OGE2qB6Wqw==: 00:35:28.591 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDBlZjE0NWRiM2RlODIxYzZmMzQ4NzY2ZjdmMTE3Mzc4YzEwYzQzZDUwY2RhZWQ15VD1Dw==: 00:35:28.591 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:28.591 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:28.591 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTA4MTlkYmJlYjFlYzI4NDc3NTQ3MTE5YjBmZDRjYTgyOGU1YmNmY2Y4OWQ5OGE2qB6Wqw==: 00:35:28.591 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDBlZjE0NWRiM2RlODIxYzZmMzQ4NzY2ZjdmMTE3Mzc4YzEwYzQzZDUwY2RhZWQ15VD1Dw==: ]] 00:35:28.591 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDBlZjE0NWRiM2RlODIxYzZmMzQ4NzY2ZjdmMTE3Mzc4YzEwYzQzZDUwY2RhZWQ15VD1Dw==: 00:35:28.591 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:35:28.591 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:28.591 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:28.591 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:28.591 
02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:28.591 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:28.591 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:28.591 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.591 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.591 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.591 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:28.591 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:28.591 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:28.591 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:28.591 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:28.591 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:28.591 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:28.591 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:28.591 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:28.591 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:28.591 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:28.591 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 
-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:28.591 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.591 02:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.849 nvme0n1 00:35:28.849 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.849 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:28.849 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:28.849 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.849 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.849 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.849 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:28.849 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:28.849 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.849 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.107 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:29.107 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:29.107 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:35:29.107 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:29.107 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:29.107 02:55:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:29.107 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:29.107 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGRhMmUyZDJjMzE1YWJkMzVmYWRhMDMwNDM1NDM1NzLeN7uE: 00:35:29.108 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODIzZWQxNjM2M2E4ZGEwNTc3MGM4YjA4Yzc5Y2JiOGGpmDto: 00:35:29.108 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:29.108 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:29.108 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGRhMmUyZDJjMzE1YWJkMzVmYWRhMDMwNDM1NDM1NzLeN7uE: 00:35:29.108 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODIzZWQxNjM2M2E4ZGEwNTc3MGM4YjA4Yzc5Y2JiOGGpmDto: ]] 00:35:29.108 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODIzZWQxNjM2M2E4ZGEwNTc3MGM4YjA4Yzc5Y2JiOGGpmDto: 00:35:29.108 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:35:29.108 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:29.108 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:29.108 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:29.108 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:29.108 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:29.108 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:29.108 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:35:29.108 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.108 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:29.108 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:29.108 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:29.108 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:29.108 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:29.108 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:29.108 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:29.108 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:29.108 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:29.108 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:29.108 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:29.108 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:29.108 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:29.108 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:29.108 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.366 nvme0n1 00:35:29.366 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:29.366 02:55:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:29.366 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:29.366 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.366 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:29.366 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:29.366 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:29.366 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:29.366 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:29.366 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.366 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:29.366 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:29.366 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:35:29.366 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:29.366 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:29.366 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:29.366 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:29.366 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDhkNGExYjg2ODAyMGZhOWUxODFlODIyNjg4NDYwOWZjZjUxNDgyNDQ4NTVlM2U4tlD2ng==: 00:35:29.367 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjhjZmJlOWE3MzU2OWEwMGNkNjk1OWUzNzYzMjg3NzR9R3Qo: 00:35:29.367 
02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:29.367 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:29.367 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDhkNGExYjg2ODAyMGZhOWUxODFlODIyNjg4NDYwOWZjZjUxNDgyNDQ4NTVlM2U4tlD2ng==: 00:35:29.367 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjhjZmJlOWE3MzU2OWEwMGNkNjk1OWUzNzYzMjg3NzR9R3Qo: ]] 00:35:29.367 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjhjZmJlOWE3MzU2OWEwMGNkNjk1OWUzNzYzMjg3NzR9R3Qo: 00:35:29.367 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:35:29.367 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:29.367 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:29.367 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:29.367 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:29.367 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:29.367 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:29.367 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:29.367 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.367 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:29.367 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:29.367 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:29.367 02:55:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:29.367 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:29.367 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:29.367 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:29.367 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:29.367 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:29.367 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:29.367 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:29.367 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:29.367 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:29.367 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:29.367 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.625 nvme0n1 00:35:29.625 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:29.625 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:29.625 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:29.625 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:29.625 02:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.625 02:55:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:29.625 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:29.625 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:29.625 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:29.625 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.625 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:29.625 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:29.625 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:35:29.626 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:29.626 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:29.626 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:29.626 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:29.626 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjIyZjcwNWJkYTcyYzQwOTk1ZGVjNzFlMTk4ZDI3NTJlNDVmOTcwYTQ4MWRhMmMzNjc5ZGYxZDFhMzY0MTI1NwmpRBo=: 00:35:29.626 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:29.626 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:29.626 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:29.626 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjIyZjcwNWJkYTcyYzQwOTk1ZGVjNzFlMTk4ZDI3NTJlNDVmOTcwYTQ4MWRhMmMzNjc5ZGYxZDFhMzY0MTI1NwmpRBo=: 00:35:29.626 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' 
]] 00:35:29.626 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:35:29.626 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:29.626 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:29.626 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:29.626 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:29.626 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:29.626 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:29.626 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:29.626 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.626 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:29.626 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:29.626 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:29.626 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:29.626 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:29.626 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:29.626 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:29.626 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:29.626 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:29.626 
02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:29.626 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:29.626 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:29.626 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:29.626 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:29.626 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.884 nvme0n1 00:35:29.884 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:29.884 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:29.884 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:29.884 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:29.884 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.884 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.143 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:30.143 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:30.143 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.143 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.143 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.143 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:30.143 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:30.143 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:35:30.143 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:30.143 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:30.143 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:30.143 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:30.143 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjEzOTgwMTM0YTJkOGE2ZDJkZjMzNjVjOWQzYzhiMGZoLVU0: 00:35:30.143 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTdmM2VkMDA2NGJiYjIyOGRlZTQzYWE5YWE1NmY4YmJkNDQ0ZDFiNDMyZjQ4YmFlMjI0Y2I3MjEwOGUzN2ZiZZEqtHQ=: 00:35:30.143 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:30.143 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:30.143 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjEzOTgwMTM0YTJkOGE2ZDJkZjMzNjVjOWQzYzhiMGZoLVU0: 00:35:30.143 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTdmM2VkMDA2NGJiYjIyOGRlZTQzYWE5YWE1NmY4YmJkNDQ0ZDFiNDMyZjQ4YmFlMjI0Y2I3MjEwOGUzN2ZiZZEqtHQ=: ]] 00:35:30.143 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTdmM2VkMDA2NGJiYjIyOGRlZTQzYWE5YWE1NmY4YmJkNDQ0ZDFiNDMyZjQ4YmFlMjI0Y2I3MjEwOGUzN2ZiZZEqtHQ=: 00:35:30.143 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:35:30.143 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:30.143 02:55:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:30.143 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:30.143 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:30.143 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:30.143 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:30.143 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.143 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.143 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.143 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:30.143 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:30.143 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:30.143 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:30.143 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:30.143 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:30.143 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:30.143 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:30.143 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:30.143 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:30.143 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:30.143 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:30.143 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.143 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.710 nvme0n1 00:35:30.710 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.710 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:30.710 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:30.710 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.710 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.710 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.710 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:30.710 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:30.710 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.710 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.710 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.710 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:30.710 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:35:30.710 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:30.710 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:30.710 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:30.710 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:30.710 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTA4MTlkYmJlYjFlYzI4NDc3NTQ3MTE5YjBmZDRjYTgyOGU1YmNmY2Y4OWQ5OGE2qB6Wqw==: 00:35:30.710 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDBlZjE0NWRiM2RlODIxYzZmMzQ4NzY2ZjdmMTE3Mzc4YzEwYzQzZDUwY2RhZWQ15VD1Dw==: 00:35:30.710 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:30.710 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:30.710 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTA4MTlkYmJlYjFlYzI4NDc3NTQ3MTE5YjBmZDRjYTgyOGU1YmNmY2Y4OWQ5OGE2qB6Wqw==: 00:35:30.710 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDBlZjE0NWRiM2RlODIxYzZmMzQ4NzY2ZjdmMTE3Mzc4YzEwYzQzZDUwY2RhZWQ15VD1Dw==: ]] 00:35:30.710 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDBlZjE0NWRiM2RlODIxYzZmMzQ4NzY2ZjdmMTE3Mzc4YzEwYzQzZDUwY2RhZWQ15VD1Dw==: 00:35:30.710 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:35:30.710 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:30.710 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:30.710 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:30.710 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:30.710 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:30.710 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:30.710 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.710 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.710 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.710 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:30.710 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:30.710 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:30.710 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:30.710 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:30.710 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:30.710 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:30.710 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:30.710 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:30.710 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:30.710 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:30.711 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:30.711 02:55:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.711 02:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.274 nvme0n1 00:35:31.274 02:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.275 02:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:31.275 02:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.275 02:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.275 02:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:31.275 02:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.275 02:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:31.275 02:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:31.275 02:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.275 02:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.275 02:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.275 02:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:31.275 02:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:35:31.275 02:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:31.275 02:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:31.275 02:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:31.275 02:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=2 00:35:31.275 02:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGRhMmUyZDJjMzE1YWJkMzVmYWRhMDMwNDM1NDM1NzLeN7uE: 00:35:31.275 02:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODIzZWQxNjM2M2E4ZGEwNTc3MGM4YjA4Yzc5Y2JiOGGpmDto: 00:35:31.275 02:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:31.275 02:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:31.275 02:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGRhMmUyZDJjMzE1YWJkMzVmYWRhMDMwNDM1NDM1NzLeN7uE: 00:35:31.275 02:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODIzZWQxNjM2M2E4ZGEwNTc3MGM4YjA4Yzc5Y2JiOGGpmDto: ]] 00:35:31.275 02:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODIzZWQxNjM2M2E4ZGEwNTc3MGM4YjA4Yzc5Y2JiOGGpmDto: 00:35:31.275 02:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:35:31.275 02:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:31.275 02:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:31.275 02:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:31.275 02:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:31.275 02:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:31.275 02:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:31.275 02:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.275 02:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.275 02:55:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.275 02:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:31.275 02:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:31.275 02:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:31.275 02:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:31.275 02:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:31.275 02:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:31.275 02:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:31.275 02:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:31.275 02:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:31.275 02:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:31.275 02:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:31.275 02:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:31.275 02:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.275 02:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.841 nvme0n1 00:35:31.841 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.841 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:31.841 02:55:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:31.841 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.841 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.841 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.841 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:31.841 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:31.841 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.841 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.841 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.841 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:31.841 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:35:31.841 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:31.841 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:31.841 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:31.841 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:31.841 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDhkNGExYjg2ODAyMGZhOWUxODFlODIyNjg4NDYwOWZjZjUxNDgyNDQ4NTVlM2U4tlD2ng==: 00:35:31.841 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjhjZmJlOWE3MzU2OWEwMGNkNjk1OWUzNzYzMjg3NzR9R3Qo: 00:35:31.841 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:31.841 02:55:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:31.841 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDhkNGExYjg2ODAyMGZhOWUxODFlODIyNjg4NDYwOWZjZjUxNDgyNDQ4NTVlM2U4tlD2ng==: 00:35:31.841 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjhjZmJlOWE3MzU2OWEwMGNkNjk1OWUzNzYzMjg3NzR9R3Qo: ]] 00:35:31.842 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjhjZmJlOWE3MzU2OWEwMGNkNjk1OWUzNzYzMjg3NzR9R3Qo: 00:35:31.842 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:35:31.842 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:31.842 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:31.842 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:31.842 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:31.842 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:31.842 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:31.842 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.842 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.842 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.842 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:31.842 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:31.842 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:31.842 02:55:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:31.842 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:31.842 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:31.842 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:31.842 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:31.842 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:31.842 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:31.842 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:31.842 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:31.842 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.842 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.409 nvme0n1 00:35:32.409 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.409 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:32.409 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:32.409 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.409 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.409 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.409 02:55:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:32.409 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:32.409 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.409 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.409 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.409 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:32.409 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:35:32.409 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:32.409 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:32.409 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:32.409 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:32.409 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjIyZjcwNWJkYTcyYzQwOTk1ZGVjNzFlMTk4ZDI3NTJlNDVmOTcwYTQ4MWRhMmMzNjc5ZGYxZDFhMzY0MTI1NwmpRBo=: 00:35:32.409 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:32.409 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:32.409 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:32.409 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjIyZjcwNWJkYTcyYzQwOTk1ZGVjNzFlMTk4ZDI3NTJlNDVmOTcwYTQ4MWRhMmMzNjc5ZGYxZDFhMzY0MTI1NwmpRBo=: 00:35:32.409 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:32.409 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe6144 4 00:35:32.409 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:32.409 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:32.409 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:32.409 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:32.409 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:32.410 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:32.410 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.410 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.410 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.410 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:32.410 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:32.410 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:32.410 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:32.410 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:32.410 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:32.410 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:32.410 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:32.410 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:32.410 02:55:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:32.410 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:32.410 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:32.410 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.410 02:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.976 nvme0n1 00:35:32.976 02:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.976 02:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:32.976 02:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:32.976 02:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.976 02:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.976 02:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.976 02:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:32.976 02:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:32.976 02:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.976 02:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.976 02:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.976 02:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:32.976 02:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:32.976 02:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:35:32.976 02:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:32.976 02:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:32.976 02:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:32.976 02:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:32.976 02:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjEzOTgwMTM0YTJkOGE2ZDJkZjMzNjVjOWQzYzhiMGZoLVU0: 00:35:32.976 02:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTdmM2VkMDA2NGJiYjIyOGRlZTQzYWE5YWE1NmY4YmJkNDQ0ZDFiNDMyZjQ4YmFlMjI0Y2I3MjEwOGUzN2ZiZZEqtHQ=: 00:35:32.976 02:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:32.976 02:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:32.976 02:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjEzOTgwMTM0YTJkOGE2ZDJkZjMzNjVjOWQzYzhiMGZoLVU0: 00:35:32.976 02:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTdmM2VkMDA2NGJiYjIyOGRlZTQzYWE5YWE1NmY4YmJkNDQ0ZDFiNDMyZjQ4YmFlMjI0Y2I3MjEwOGUzN2ZiZZEqtHQ=: ]] 00:35:32.976 02:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTdmM2VkMDA2NGJiYjIyOGRlZTQzYWE5YWE1NmY4YmJkNDQ0ZDFiNDMyZjQ4YmFlMjI0Y2I3MjEwOGUzN2ZiZZEqtHQ=: 00:35:32.977 02:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:35:32.977 02:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:32.977 02:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:32.977 02:55:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:32.977 02:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:32.977 02:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:32.977 02:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:32.977 02:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.977 02:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.977 02:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.977 02:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:32.977 02:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:32.977 02:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:32.977 02:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:32.977 02:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:32.977 02:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:32.977 02:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:32.977 02:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:32.977 02:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:32.977 02:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:32.977 02:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:32.977 02:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:32.977 02:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.977 02:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.911 nvme0n1 00:35:33.911 02:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.911 02:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:33.911 02:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.911 02:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.911 02:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:33.911 02:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.911 02:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:33.911 02:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:33.911 02:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.911 02:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.169 02:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:34.169 02:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:34.169 02:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:35:34.169 02:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:34.169 02:55:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:34.169 02:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:34.169 02:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:34.169 02:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTA4MTlkYmJlYjFlYzI4NDc3NTQ3MTE5YjBmZDRjYTgyOGU1YmNmY2Y4OWQ5OGE2qB6Wqw==: 00:35:34.169 02:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDBlZjE0NWRiM2RlODIxYzZmMzQ4NzY2ZjdmMTE3Mzc4YzEwYzQzZDUwY2RhZWQ15VD1Dw==: 00:35:34.169 02:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:34.169 02:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:34.169 02:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTA4MTlkYmJlYjFlYzI4NDc3NTQ3MTE5YjBmZDRjYTgyOGU1YmNmY2Y4OWQ5OGE2qB6Wqw==: 00:35:34.169 02:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDBlZjE0NWRiM2RlODIxYzZmMzQ4NzY2ZjdmMTE3Mzc4YzEwYzQzZDUwY2RhZWQ15VD1Dw==: ]] 00:35:34.169 02:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDBlZjE0NWRiM2RlODIxYzZmMzQ4NzY2ZjdmMTE3Mzc4YzEwYzQzZDUwY2RhZWQ15VD1Dw==: 00:35:34.169 02:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:35:34.169 02:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:34.169 02:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:34.169 02:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:34.169 02:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:34.169 02:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:34.169 02:55:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:34.169 02:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:34.169 02:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.169 02:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:34.169 02:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:34.169 02:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:34.169 02:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:34.169 02:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:34.169 02:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:34.169 02:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:34.169 02:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:34.169 02:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:34.169 02:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:34.169 02:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:34.169 02:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:34.169 02:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:34.169 02:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:34.169 02:55:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.103 nvme0n1 00:35:35.103 02:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:35.103 02:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:35.103 02:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.103 02:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.103 02:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:35.103 02:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:35.103 02:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:35.103 02:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:35.103 02:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.103 02:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.103 02:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:35.103 02:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:35.103 02:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:35:35.103 02:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:35.103 02:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:35.103 02:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:35.103 02:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:35.103 02:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:OGRhMmUyZDJjMzE1YWJkMzVmYWRhMDMwNDM1NDM1NzLeN7uE: 00:35:35.103 02:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODIzZWQxNjM2M2E4ZGEwNTc3MGM4YjA4Yzc5Y2JiOGGpmDto: 00:35:35.103 02:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:35.103 02:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:35.103 02:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGRhMmUyZDJjMzE1YWJkMzVmYWRhMDMwNDM1NDM1NzLeN7uE: 00:35:35.104 02:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODIzZWQxNjM2M2E4ZGEwNTc3MGM4YjA4Yzc5Y2JiOGGpmDto: ]] 00:35:35.104 02:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODIzZWQxNjM2M2E4ZGEwNTc3MGM4YjA4Yzc5Y2JiOGGpmDto: 00:35:35.104 02:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:35:35.104 02:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:35.104 02:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:35.104 02:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:35.104 02:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:35.104 02:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:35.104 02:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:35.104 02:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.104 02:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.104 02:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:35.104 02:55:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:35.104 02:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:35.104 02:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:35.104 02:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:35.104 02:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:35.104 02:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:35.104 02:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:35.104 02:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:35.104 02:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:35.104 02:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:35.104 02:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:35.104 02:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:35.104 02:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.104 02:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.038 nvme0n1 00:35:36.038 02:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.038 02:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:36.038 02:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.038 02:55:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.038 02:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:36.038 02:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.038 02:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:36.038 02:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:36.038 02:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.038 02:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.038 02:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.038 02:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:36.038 02:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:35:36.038 02:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:36.038 02:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:36.038 02:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:36.038 02:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:36.038 02:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDhkNGExYjg2ODAyMGZhOWUxODFlODIyNjg4NDYwOWZjZjUxNDgyNDQ4NTVlM2U4tlD2ng==: 00:35:36.038 02:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjhjZmJlOWE3MzU2OWEwMGNkNjk1OWUzNzYzMjg3NzR9R3Qo: 00:35:36.038 02:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:36.038 02:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:36.038 02:55:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDhkNGExYjg2ODAyMGZhOWUxODFlODIyNjg4NDYwOWZjZjUxNDgyNDQ4NTVlM2U4tlD2ng==: 00:35:36.038 02:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjhjZmJlOWE3MzU2OWEwMGNkNjk1OWUzNzYzMjg3NzR9R3Qo: ]] 00:35:36.038 02:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjhjZmJlOWE3MzU2OWEwMGNkNjk1OWUzNzYzMjg3NzR9R3Qo: 00:35:36.038 02:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:35:36.038 02:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:36.038 02:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:36.038 02:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:36.038 02:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:36.038 02:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:36.038 02:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:36.038 02:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.038 02:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.038 02:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.038 02:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:36.038 02:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:36.038 02:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:36.038 02:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:36.038 02:55:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:36.038 02:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:36.038 02:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:36.038 02:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:36.038 02:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:36.038 02:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:36.038 02:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:36.038 02:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:36.038 02:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.038 02:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.972 nvme0n1 00:35:36.972 02:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.230 02:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:37.230 02:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.230 02:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:37.230 02:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.230 02:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.230 02:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:37.230 02:55:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:37.230 02:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.230 02:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.230 02:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.230 02:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:37.230 02:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:35:37.230 02:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:37.230 02:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:37.230 02:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:37.230 02:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:37.230 02:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjIyZjcwNWJkYTcyYzQwOTk1ZGVjNzFlMTk4ZDI3NTJlNDVmOTcwYTQ4MWRhMmMzNjc5ZGYxZDFhMzY0MTI1NwmpRBo=: 00:35:37.230 02:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:37.230 02:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:37.230 02:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:37.231 02:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjIyZjcwNWJkYTcyYzQwOTk1ZGVjNzFlMTk4ZDI3NTJlNDVmOTcwYTQ4MWRhMmMzNjc5ZGYxZDFhMzY0MTI1NwmpRBo=: 00:35:37.231 02:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:37.231 02:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:35:37.231 02:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local 
digest dhgroup keyid ckey 00:35:37.231 02:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:37.231 02:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:37.231 02:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:37.231 02:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:37.231 02:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:37.231 02:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.231 02:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.231 02:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.231 02:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:37.231 02:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:37.231 02:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:37.231 02:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:37.231 02:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:37.231 02:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:37.231 02:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:37.231 02:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:37.231 02:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:37.231 02:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:37.231 02:55:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:37.231 02:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:37.231 02:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.231 02:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.165 nvme0n1 00:35:38.165 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.165 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:38.165 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.165 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.165 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:38.165 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.165 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:38.165 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:38.165 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.165 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.165 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.165 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:35:38.165 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:38.165 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:38.165 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:35:38.165 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:38.165 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:38.165 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:38.165 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:38.165 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjEzOTgwMTM0YTJkOGE2ZDJkZjMzNjVjOWQzYzhiMGZoLVU0: 00:35:38.165 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTdmM2VkMDA2NGJiYjIyOGRlZTQzYWE5YWE1NmY4YmJkNDQ0ZDFiNDMyZjQ4YmFlMjI0Y2I3MjEwOGUzN2ZiZZEqtHQ=: 00:35:38.165 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:38.165 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:38.165 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjEzOTgwMTM0YTJkOGE2ZDJkZjMzNjVjOWQzYzhiMGZoLVU0: 00:35:38.165 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTdmM2VkMDA2NGJiYjIyOGRlZTQzYWE5YWE1NmY4YmJkNDQ0ZDFiNDMyZjQ4YmFlMjI0Y2I3MjEwOGUzN2ZiZZEqtHQ=: ]] 00:35:38.165 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTdmM2VkMDA2NGJiYjIyOGRlZTQzYWE5YWE1NmY4YmJkNDQ0ZDFiNDMyZjQ4YmFlMjI0Y2I3MjEwOGUzN2ZiZZEqtHQ=: 00:35:38.165 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:35:38.165 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:38.165 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:38.165 02:55:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:38.165 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:38.165 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:38.165 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:38.165 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.165 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.165 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.165 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:38.165 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:38.165 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:38.165 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:38.165 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:38.165 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:38.165 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:38.165 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:38.165 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:38.165 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:38.165 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:38.165 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:38.166 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.166 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.424 nvme0n1 00:35:38.424 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.424 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:38.424 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.424 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.424 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:38.424 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.424 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:38.424 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:38.424 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.424 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.424 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.424 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:38.424 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:35:38.424 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:38.424 02:55:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:38.424 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:38.424 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:38.424 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTA4MTlkYmJlYjFlYzI4NDc3NTQ3MTE5YjBmZDRjYTgyOGU1YmNmY2Y4OWQ5OGE2qB6Wqw==: 00:35:38.424 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDBlZjE0NWRiM2RlODIxYzZmMzQ4NzY2ZjdmMTE3Mzc4YzEwYzQzZDUwY2RhZWQ15VD1Dw==: 00:35:38.424 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:38.424 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:38.424 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTA4MTlkYmJlYjFlYzI4NDc3NTQ3MTE5YjBmZDRjYTgyOGU1YmNmY2Y4OWQ5OGE2qB6Wqw==: 00:35:38.424 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDBlZjE0NWRiM2RlODIxYzZmMzQ4NzY2ZjdmMTE3Mzc4YzEwYzQzZDUwY2RhZWQ15VD1Dw==: ]] 00:35:38.424 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDBlZjE0NWRiM2RlODIxYzZmMzQ4NzY2ZjdmMTE3Mzc4YzEwYzQzZDUwY2RhZWQ15VD1Dw==: 00:35:38.424 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:35:38.424 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:38.424 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:38.424 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:38.424 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:38.424 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:38.424 02:55:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:38.424 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.424 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.424 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.424 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:38.424 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:38.424 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:38.424 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:38.424 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:38.424 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:38.424 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:38.424 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:38.424 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:38.424 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:38.424 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:38.424 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:38.424 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.424 02:55:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.683 nvme0n1 00:35:38.683 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.683 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:38.683 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:38.683 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.683 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.683 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.683 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:38.683 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:38.683 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.683 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.683 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.683 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:38.683 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:35:38.683 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:38.683 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:38.683 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:38.683 02:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:38.683 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:OGRhMmUyZDJjMzE1YWJkMzVmYWRhMDMwNDM1NDM1NzLeN7uE: 00:35:38.683 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODIzZWQxNjM2M2E4ZGEwNTc3MGM4YjA4Yzc5Y2JiOGGpmDto: 00:35:38.683 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:38.683 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:38.683 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGRhMmUyZDJjMzE1YWJkMzVmYWRhMDMwNDM1NDM1NzLeN7uE: 00:35:38.683 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODIzZWQxNjM2M2E4ZGEwNTc3MGM4YjA4Yzc5Y2JiOGGpmDto: ]] 00:35:38.683 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODIzZWQxNjM2M2E4ZGEwNTc3MGM4YjA4Yzc5Y2JiOGGpmDto: 00:35:38.683 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:35:38.683 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:38.683 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:38.683 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:38.683 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:38.683 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:38.683 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:38.683 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.683 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.683 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.683 02:55:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:38.683 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:38.683 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:38.683 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:38.683 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:38.683 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:38.683 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:38.683 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:38.684 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:38.684 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:38.684 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:38.684 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:38.684 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.684 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.942 nvme0n1 00:35:38.942 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.942 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:38.942 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:38.942 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.942 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.942 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.942 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:38.942 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:38.942 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.942 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.942 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.942 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:38.942 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:35:38.942 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:38.942 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:38.942 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:38.942 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:38.942 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDhkNGExYjg2ODAyMGZhOWUxODFlODIyNjg4NDYwOWZjZjUxNDgyNDQ4NTVlM2U4tlD2ng==: 00:35:38.942 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjhjZmJlOWE3MzU2OWEwMGNkNjk1OWUzNzYzMjg3NzR9R3Qo: 00:35:38.942 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:38.942 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:38.942 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:02:MDhkNGExYjg2ODAyMGZhOWUxODFlODIyNjg4NDYwOWZjZjUxNDgyNDQ4NTVlM2U4tlD2ng==: 00:35:38.943 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjhjZmJlOWE3MzU2OWEwMGNkNjk1OWUzNzYzMjg3NzR9R3Qo: ]] 00:35:38.943 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjhjZmJlOWE3MzU2OWEwMGNkNjk1OWUzNzYzMjg3NzR9R3Qo: 00:35:38.943 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:35:38.943 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:38.943 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:38.943 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:38.943 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:38.943 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:38.943 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:38.943 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.943 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.943 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.943 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:38.943 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:38.943 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:38.943 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:38.943 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:38.943 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:38.943 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:38.943 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:38.943 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:38.943 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:38.943 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:38.943 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:38.943 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.943 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.201 nvme0n1 00:35:39.202 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.202 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:39.202 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.202 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:39.202 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.202 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.202 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:39.202 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:39.202 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.202 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.202 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.202 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:39.202 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:35:39.202 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:39.202 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:39.202 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:39.202 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:39.202 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjIyZjcwNWJkYTcyYzQwOTk1ZGVjNzFlMTk4ZDI3NTJlNDVmOTcwYTQ4MWRhMmMzNjc5ZGYxZDFhMzY0MTI1NwmpRBo=: 00:35:39.202 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:39.202 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:39.202 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:39.202 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjIyZjcwNWJkYTcyYzQwOTk1ZGVjNzFlMTk4ZDI3NTJlNDVmOTcwYTQ4MWRhMmMzNjc5ZGYxZDFhMzY0MTI1NwmpRBo=: 00:35:39.202 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:39.202 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:35:39.202 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:39.202 
02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:39.202 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:39.202 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:39.202 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:39.202 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:39.202 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.202 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.202 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.202 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:39.202 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:39.202 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:39.202 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:39.202 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:39.202 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:39.202 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:39.202 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:39.202 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:39.202 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:39.202 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:39.202 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:39.202 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.202 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.202 nvme0n1 00:35:39.202 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.202 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:39.202 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.202 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:39.202 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.202 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.460 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:39.460 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:39.460 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.460 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.460 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.460 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:39.460 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:39.460 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe3072 0 00:35:39.460 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:39.460 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:39.460 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:39.460 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:39.460 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjEzOTgwMTM0YTJkOGE2ZDJkZjMzNjVjOWQzYzhiMGZoLVU0: 00:35:39.460 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTdmM2VkMDA2NGJiYjIyOGRlZTQzYWE5YWE1NmY4YmJkNDQ0ZDFiNDMyZjQ4YmFlMjI0Y2I3MjEwOGUzN2ZiZZEqtHQ=: 00:35:39.460 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:39.460 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:39.460 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjEzOTgwMTM0YTJkOGE2ZDJkZjMzNjVjOWQzYzhiMGZoLVU0: 00:35:39.460 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTdmM2VkMDA2NGJiYjIyOGRlZTQzYWE5YWE1NmY4YmJkNDQ0ZDFiNDMyZjQ4YmFlMjI0Y2I3MjEwOGUzN2ZiZZEqtHQ=: ]] 00:35:39.460 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTdmM2VkMDA2NGJiYjIyOGRlZTQzYWE5YWE1NmY4YmJkNDQ0ZDFiNDMyZjQ4YmFlMjI0Y2I3MjEwOGUzN2ZiZZEqtHQ=: 00:35:39.460 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:35:39.460 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:39.460 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:39.460 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:39.460 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
keyid=0 00:35:39.460 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:39.460 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:39.460 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.460 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.460 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.460 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:39.461 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:39.461 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:39.461 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:39.461 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:39.461 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:39.461 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:39.461 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:39.461 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:39.461 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:39.461 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:39.461 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:39.461 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.461 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.461 nvme0n1 00:35:39.461 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.461 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:39.461 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:39.461 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.461 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.719 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.719 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:39.719 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:39.719 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.719 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.719 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.719 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:39.719 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:35:39.719 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:39.719 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:39.719 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:39.719 
02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:39.719 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTA4MTlkYmJlYjFlYzI4NDc3NTQ3MTE5YjBmZDRjYTgyOGU1YmNmY2Y4OWQ5OGE2qB6Wqw==: 00:35:39.719 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDBlZjE0NWRiM2RlODIxYzZmMzQ4NzY2ZjdmMTE3Mzc4YzEwYzQzZDUwY2RhZWQ15VD1Dw==: 00:35:39.719 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:39.719 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:39.719 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTA4MTlkYmJlYjFlYzI4NDc3NTQ3MTE5YjBmZDRjYTgyOGU1YmNmY2Y4OWQ5OGE2qB6Wqw==: 00:35:39.719 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDBlZjE0NWRiM2RlODIxYzZmMzQ4NzY2ZjdmMTE3Mzc4YzEwYzQzZDUwY2RhZWQ15VD1Dw==: ]] 00:35:39.719 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDBlZjE0NWRiM2RlODIxYzZmMzQ4NzY2ZjdmMTE3Mzc4YzEwYzQzZDUwY2RhZWQ15VD1Dw==: 00:35:39.719 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:35:39.719 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:39.719 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:39.719 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:39.719 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:39.719 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:39.719 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:39.719 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.719 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.719 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.719 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:39.719 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:39.719 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:39.719 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:39.719 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:39.719 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:39.719 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:39.719 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:39.719 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:39.719 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:39.719 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:39.719 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:39.719 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.719 02:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.977 nvme0n1 00:35:39.977 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:35:39.977 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:39.977 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:39.977 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.977 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.977 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.977 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:39.977 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:39.977 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.977 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.977 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.977 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:39.977 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:35:39.977 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:39.977 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:39.977 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:39.977 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:39.977 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGRhMmUyZDJjMzE1YWJkMzVmYWRhMDMwNDM1NDM1NzLeN7uE: 00:35:39.977 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODIzZWQxNjM2M2E4ZGEwNTc3MGM4YjA4Yzc5Y2JiOGGpmDto: 
00:35:39.977 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:39.977 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:39.977 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGRhMmUyZDJjMzE1YWJkMzVmYWRhMDMwNDM1NDM1NzLeN7uE: 00:35:39.977 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODIzZWQxNjM2M2E4ZGEwNTc3MGM4YjA4Yzc5Y2JiOGGpmDto: ]] 00:35:39.977 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODIzZWQxNjM2M2E4ZGEwNTc3MGM4YjA4Yzc5Y2JiOGGpmDto: 00:35:39.977 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:35:39.977 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:39.977 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:39.977 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:39.977 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:39.977 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:39.977 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:39.977 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.977 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.977 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.977 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:39.977 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:39.977 02:55:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:39.977 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:39.977 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:39.977 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:39.977 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:39.977 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:39.977 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:39.977 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:39.977 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:39.977 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:39.977 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.977 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.235 nvme0n1 00:35:40.235 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.235 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:40.235 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.235 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.235 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:40.235 02:55:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.235 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:40.235 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:40.235 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.235 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.235 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.235 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:40.235 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:35:40.235 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:40.235 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:40.235 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:40.235 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:40.235 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDhkNGExYjg2ODAyMGZhOWUxODFlODIyNjg4NDYwOWZjZjUxNDgyNDQ4NTVlM2U4tlD2ng==: 00:35:40.235 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjhjZmJlOWE3MzU2OWEwMGNkNjk1OWUzNzYzMjg3NzR9R3Qo: 00:35:40.235 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:40.235 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:40.235 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDhkNGExYjg2ODAyMGZhOWUxODFlODIyNjg4NDYwOWZjZjUxNDgyNDQ4NTVlM2U4tlD2ng==: 00:35:40.235 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:00:NjhjZmJlOWE3MzU2OWEwMGNkNjk1OWUzNzYzMjg3NzR9R3Qo: ]] 00:35:40.235 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjhjZmJlOWE3MzU2OWEwMGNkNjk1OWUzNzYzMjg3NzR9R3Qo: 00:35:40.235 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:35:40.236 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:40.236 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:40.236 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:40.236 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:40.236 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:40.236 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:40.236 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.236 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.236 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.236 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:40.236 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:40.236 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:40.236 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:40.236 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:40.236 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:40.236 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:40.236 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:40.236 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:40.236 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:40.236 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:40.236 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:40.236 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.236 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.493 nvme0n1 00:35:40.493 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.494 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:40.494 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.494 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:40.494 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.494 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.494 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:40.494 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:40.494 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:35:40.494 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.494 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.494 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:40.494 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:35:40.494 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:40.494 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:40.494 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:40.494 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:40.494 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjIyZjcwNWJkYTcyYzQwOTk1ZGVjNzFlMTk4ZDI3NTJlNDVmOTcwYTQ4MWRhMmMzNjc5ZGYxZDFhMzY0MTI1NwmpRBo=: 00:35:40.494 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:40.494 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:40.494 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:40.494 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjIyZjcwNWJkYTcyYzQwOTk1ZGVjNzFlMTk4ZDI3NTJlNDVmOTcwYTQ4MWRhMmMzNjc5ZGYxZDFhMzY0MTI1NwmpRBo=: 00:35:40.494 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:40.494 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:35:40.494 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:40.494 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:40.494 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe3072 00:35:40.494 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:40.494 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:40.494 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:40.494 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.494 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.494 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.494 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:40.494 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:40.494 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:40.494 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:40.494 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:40.494 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:40.494 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:40.494 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:40.494 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:40.494 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:40.494 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:40.494 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:40.494 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.494 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.752 nvme0n1 00:35:40.752 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.752 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:40.752 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.752 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.752 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:40.752 02:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.752 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:40.752 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:40.752 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.752 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.752 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.752 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:40.752 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:40.752 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:35:40.752 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:40.752 02:55:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:40.752 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:40.752 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:40.752 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjEzOTgwMTM0YTJkOGE2ZDJkZjMzNjVjOWQzYzhiMGZoLVU0: 00:35:40.752 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTdmM2VkMDA2NGJiYjIyOGRlZTQzYWE5YWE1NmY4YmJkNDQ0ZDFiNDMyZjQ4YmFlMjI0Y2I3MjEwOGUzN2ZiZZEqtHQ=: 00:35:40.752 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:40.752 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:40.752 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjEzOTgwMTM0YTJkOGE2ZDJkZjMzNjVjOWQzYzhiMGZoLVU0: 00:35:40.752 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTdmM2VkMDA2NGJiYjIyOGRlZTQzYWE5YWE1NmY4YmJkNDQ0ZDFiNDMyZjQ4YmFlMjI0Y2I3MjEwOGUzN2ZiZZEqtHQ=: ]] 00:35:40.752 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTdmM2VkMDA2NGJiYjIyOGRlZTQzYWE5YWE1NmY4YmJkNDQ0ZDFiNDMyZjQ4YmFlMjI0Y2I3MjEwOGUzN2ZiZZEqtHQ=: 00:35:40.752 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:35:40.752 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:40.752 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:40.752 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:40.752 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:40.752 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:40.752 02:55:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:40.752 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.752 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.752 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.752 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:40.752 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:40.752 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:40.752 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:40.752 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:40.752 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:40.752 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:40.752 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:40.752 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:40.752 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:40.752 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:40.752 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:40.752 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.752 02:55:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.010 nvme0n1 00:35:41.010 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.010 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:41.010 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.010 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:41.010 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.010 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.010 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:41.010 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:41.010 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.010 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.010 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.010 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:41.010 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:35:41.010 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:41.010 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:41.010 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:41.010 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:41.010 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MTA4MTlkYmJlYjFlYzI4NDc3NTQ3MTE5YjBmZDRjYTgyOGU1YmNmY2Y4OWQ5OGE2qB6Wqw==: 00:35:41.010 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDBlZjE0NWRiM2RlODIxYzZmMzQ4NzY2ZjdmMTE3Mzc4YzEwYzQzZDUwY2RhZWQ15VD1Dw==: 00:35:41.010 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:41.010 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:41.010 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTA4MTlkYmJlYjFlYzI4NDc3NTQ3MTE5YjBmZDRjYTgyOGU1YmNmY2Y4OWQ5OGE2qB6Wqw==: 00:35:41.010 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDBlZjE0NWRiM2RlODIxYzZmMzQ4NzY2ZjdmMTE3Mzc4YzEwYzQzZDUwY2RhZWQ15VD1Dw==: ]] 00:35:41.010 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDBlZjE0NWRiM2RlODIxYzZmMzQ4NzY2ZjdmMTE3Mzc4YzEwYzQzZDUwY2RhZWQ15VD1Dw==: 00:35:41.010 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:35:41.010 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:41.010 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:41.010 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:41.010 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:41.010 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:41.010 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:41.010 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.010 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.010 
02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.010 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:41.010 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:41.010 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:41.010 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:41.010 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:41.010 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:41.010 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:41.010 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:41.010 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:41.010 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:41.010 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:41.010 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:41.010 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.010 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.576 nvme0n1 00:35:41.576 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.576 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:41.576 02:55:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.576 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:41.576 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.576 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.576 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:41.576 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:41.576 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.576 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.576 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.576 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:41.576 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:35:41.576 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:41.576 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:41.576 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:41.576 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:41.576 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGRhMmUyZDJjMzE1YWJkMzVmYWRhMDMwNDM1NDM1NzLeN7uE: 00:35:41.576 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODIzZWQxNjM2M2E4ZGEwNTc3MGM4YjA4Yzc5Y2JiOGGpmDto: 00:35:41.576 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:41.576 02:55:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:41.576 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGRhMmUyZDJjMzE1YWJkMzVmYWRhMDMwNDM1NDM1NzLeN7uE: 00:35:41.576 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODIzZWQxNjM2M2E4ZGEwNTc3MGM4YjA4Yzc5Y2JiOGGpmDto: ]] 00:35:41.576 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODIzZWQxNjM2M2E4ZGEwNTc3MGM4YjA4Yzc5Y2JiOGGpmDto: 00:35:41.576 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:35:41.576 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:41.576 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:41.576 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:41.576 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:41.576 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:41.576 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:41.576 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.576 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.576 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.576 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:41.576 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:41.576 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:41.576 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:35:41.576 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:41.576 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:41.576 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:41.576 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:41.576 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:41.576 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:41.576 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:41.576 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:41.576 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.577 02:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.835 nvme0n1 00:35:41.835 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.835 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:41.835 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.835 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:41.835 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.835 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.835 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:41.835 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:41.835 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.835 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.835 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.835 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:41.835 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:35:41.835 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:41.835 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:41.835 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:41.835 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:41.835 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDhkNGExYjg2ODAyMGZhOWUxODFlODIyNjg4NDYwOWZjZjUxNDgyNDQ4NTVlM2U4tlD2ng==: 00:35:41.835 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjhjZmJlOWE3MzU2OWEwMGNkNjk1OWUzNzYzMjg3NzR9R3Qo: 00:35:41.835 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:41.835 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:41.835 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDhkNGExYjg2ODAyMGZhOWUxODFlODIyNjg4NDYwOWZjZjUxNDgyNDQ4NTVlM2U4tlD2ng==: 00:35:41.835 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjhjZmJlOWE3MzU2OWEwMGNkNjk1OWUzNzYzMjg3NzR9R3Qo: ]] 00:35:41.835 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:00:NjhjZmJlOWE3MzU2OWEwMGNkNjk1OWUzNzYzMjg3NzR9R3Qo: 00:35:41.835 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:35:41.835 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:41.835 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:41.835 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:41.835 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:41.835 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:41.835 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:41.835 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.835 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.835 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.835 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:41.835 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:41.835 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:41.835 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:41.835 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:41.835 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:41.835 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:41.835 02:55:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:41.835 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:41.835 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:41.835 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:41.835 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:41.835 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.835 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.093 nvme0n1 00:35:42.093 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.093 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:42.093 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.093 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:42.093 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.094 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.094 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:42.094 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:42.094 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.094 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.094 02:55:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.094 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:42.094 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:35:42.094 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:42.094 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:42.094 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:42.094 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:42.094 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjIyZjcwNWJkYTcyYzQwOTk1ZGVjNzFlMTk4ZDI3NTJlNDVmOTcwYTQ4MWRhMmMzNjc5ZGYxZDFhMzY0MTI1NwmpRBo=: 00:35:42.094 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:42.094 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:42.094 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:42.094 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjIyZjcwNWJkYTcyYzQwOTk1ZGVjNzFlMTk4ZDI3NTJlNDVmOTcwYTQ4MWRhMmMzNjc5ZGYxZDFhMzY0MTI1NwmpRBo=: 00:35:42.094 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:42.094 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:35:42.094 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:42.094 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:42.094 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:42.094 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:42.094 02:55:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:42.094 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:42.094 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.094 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.094 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.094 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:42.094 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:42.094 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:42.094 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:42.094 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:42.094 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:42.094 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:42.094 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:42.094 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:42.094 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:42.094 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:42.094 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:42.094 
02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.094 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.660 nvme0n1 00:35:42.660 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.660 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:42.660 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.660 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.660 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:42.660 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.660 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:42.660 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:42.660 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.660 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.660 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.660 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:42.660 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:42.660 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:35:42.660 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:42.660 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:42.660 02:55:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:42.660 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:42.660 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjEzOTgwMTM0YTJkOGE2ZDJkZjMzNjVjOWQzYzhiMGZoLVU0: 00:35:42.660 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTdmM2VkMDA2NGJiYjIyOGRlZTQzYWE5YWE1NmY4YmJkNDQ0ZDFiNDMyZjQ4YmFlMjI0Y2I3MjEwOGUzN2ZiZZEqtHQ=: 00:35:42.660 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:42.660 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:42.660 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjEzOTgwMTM0YTJkOGE2ZDJkZjMzNjVjOWQzYzhiMGZoLVU0: 00:35:42.660 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTdmM2VkMDA2NGJiYjIyOGRlZTQzYWE5YWE1NmY4YmJkNDQ0ZDFiNDMyZjQ4YmFlMjI0Y2I3MjEwOGUzN2ZiZZEqtHQ=: ]] 00:35:42.660 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTdmM2VkMDA2NGJiYjIyOGRlZTQzYWE5YWE1NmY4YmJkNDQ0ZDFiNDMyZjQ4YmFlMjI0Y2I3MjEwOGUzN2ZiZZEqtHQ=: 00:35:42.660 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:35:42.660 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:42.660 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:42.660 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:42.660 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:42.660 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:42.660 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:35:42.660 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.660 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.660 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.660 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:42.660 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:42.660 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:42.660 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:42.660 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:42.660 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:42.660 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:42.660 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:42.660 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:42.660 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:42.660 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:42.660 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:42.660 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.660 02:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.227 nvme0n1 
00:35:43.227 02:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.227 02:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:43.227 02:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:43.227 02:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.227 02:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.227 02:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.227 02:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:43.227 02:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:43.227 02:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.227 02:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.227 02:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.227 02:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:43.227 02:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:35:43.227 02:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:43.227 02:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:43.227 02:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:43.227 02:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:43.227 02:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTA4MTlkYmJlYjFlYzI4NDc3NTQ3MTE5YjBmZDRjYTgyOGU1YmNmY2Y4OWQ5OGE2qB6Wqw==: 00:35:43.227 02:55:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDBlZjE0NWRiM2RlODIxYzZmMzQ4NzY2ZjdmMTE3Mzc4YzEwYzQzZDUwY2RhZWQ15VD1Dw==: 00:35:43.227 02:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:43.227 02:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:43.228 02:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTA4MTlkYmJlYjFlYzI4NDc3NTQ3MTE5YjBmZDRjYTgyOGU1YmNmY2Y4OWQ5OGE2qB6Wqw==: 00:35:43.228 02:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDBlZjE0NWRiM2RlODIxYzZmMzQ4NzY2ZjdmMTE3Mzc4YzEwYzQzZDUwY2RhZWQ15VD1Dw==: ]] 00:35:43.228 02:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDBlZjE0NWRiM2RlODIxYzZmMzQ4NzY2ZjdmMTE3Mzc4YzEwYzQzZDUwY2RhZWQ15VD1Dw==: 00:35:43.228 02:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:35:43.228 02:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:43.228 02:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:43.228 02:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:43.228 02:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:43.228 02:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:43.228 02:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:43.228 02:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.228 02:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.228 02:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.228 
02:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:43.228 02:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:43.228 02:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:43.228 02:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:43.228 02:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:43.228 02:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:43.228 02:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:43.228 02:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:43.228 02:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:43.228 02:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:43.228 02:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:43.228 02:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:43.228 02:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.228 02:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.794 nvme0n1 00:35:43.794 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.794 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:43.794 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:43.794 02:55:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.794 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.794 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.794 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:43.794 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:43.794 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.794 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.794 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.794 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:43.794 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:35:43.794 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:43.794 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:43.794 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:43.794 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:43.794 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGRhMmUyZDJjMzE1YWJkMzVmYWRhMDMwNDM1NDM1NzLeN7uE: 00:35:43.794 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODIzZWQxNjM2M2E4ZGEwNTc3MGM4YjA4Yzc5Y2JiOGGpmDto: 00:35:43.794 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:43.794 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:43.794 02:55:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGRhMmUyZDJjMzE1YWJkMzVmYWRhMDMwNDM1NDM1NzLeN7uE: 00:35:43.794 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODIzZWQxNjM2M2E4ZGEwNTc3MGM4YjA4Yzc5Y2JiOGGpmDto: ]] 00:35:43.794 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODIzZWQxNjM2M2E4ZGEwNTc3MGM4YjA4Yzc5Y2JiOGGpmDto: 00:35:43.794 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:35:43.794 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:43.794 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:43.794 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:43.794 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:43.794 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:43.794 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:43.794 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.794 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.794 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.794 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:43.794 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:43.794 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:43.795 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:43.795 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:43.795 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:43.795 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:43.795 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:43.795 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:43.795 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:43.795 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:43.795 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:43.795 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.795 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.375 nvme0n1 00:35:44.375 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.375 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:44.375 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.375 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:44.375 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.375 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.375 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:44.375 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:44.375 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.375 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.375 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.375 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:44.375 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:35:44.375 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:44.375 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:44.375 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:44.375 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:44.375 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDhkNGExYjg2ODAyMGZhOWUxODFlODIyNjg4NDYwOWZjZjUxNDgyNDQ4NTVlM2U4tlD2ng==: 00:35:44.375 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjhjZmJlOWE3MzU2OWEwMGNkNjk1OWUzNzYzMjg3NzR9R3Qo: 00:35:44.375 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:44.375 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:44.375 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDhkNGExYjg2ODAyMGZhOWUxODFlODIyNjg4NDYwOWZjZjUxNDgyNDQ4NTVlM2U4tlD2ng==: 00:35:44.375 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjhjZmJlOWE3MzU2OWEwMGNkNjk1OWUzNzYzMjg3NzR9R3Qo: ]] 00:35:44.375 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjhjZmJlOWE3MzU2OWEwMGNkNjk1OWUzNzYzMjg3NzR9R3Qo: 00:35:44.375 02:55:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:35:44.375 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:44.375 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:44.375 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:44.375 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:44.375 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:44.375 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:44.375 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.375 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.375 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.375 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:44.375 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:44.375 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:44.375 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:44.375 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:44.375 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:44.375 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:44.375 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:44.375 02:55:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:44.375 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:44.375 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:44.375 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:44.376 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.376 02:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.941 nvme0n1 00:35:44.941 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.941 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:44.941 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.941 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.941 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:44.941 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.941 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:44.941 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:44.941 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.941 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.941 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.941 02:55:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:44.941 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:35:44.941 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:44.941 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:44.941 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:44.941 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:44.941 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjIyZjcwNWJkYTcyYzQwOTk1ZGVjNzFlMTk4ZDI3NTJlNDVmOTcwYTQ4MWRhMmMzNjc5ZGYxZDFhMzY0MTI1NwmpRBo=: 00:35:44.941 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:44.941 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:44.941 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:44.941 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjIyZjcwNWJkYTcyYzQwOTk1ZGVjNzFlMTk4ZDI3NTJlNDVmOTcwYTQ4MWRhMmMzNjc5ZGYxZDFhMzY0MTI1NwmpRBo=: 00:35:44.941 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:44.941 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:35:44.941 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:44.941 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:44.942 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:44.942 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:44.942 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:35:44.942 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:44.942 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.942 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.942 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.942 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:44.942 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:44.942 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:44.942 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:44.942 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:44.942 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:44.942 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:44.942 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:44.942 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:44.942 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:44.942 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:44.942 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:44.942 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:35:44.942 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.508 nvme0n1 00:35:45.508 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.508 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:45.508 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:45.508 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.508 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.508 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.508 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:45.508 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:45.508 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.508 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.508 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.508 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:45.508 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:45.508 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:35:45.508 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:45.508 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:45.508 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:45.508 02:55:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:45.508 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjEzOTgwMTM0YTJkOGE2ZDJkZjMzNjVjOWQzYzhiMGZoLVU0: 00:35:45.508 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTdmM2VkMDA2NGJiYjIyOGRlZTQzYWE5YWE1NmY4YmJkNDQ0ZDFiNDMyZjQ4YmFlMjI0Y2I3MjEwOGUzN2ZiZZEqtHQ=: 00:35:45.508 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:45.508 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:45.508 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjEzOTgwMTM0YTJkOGE2ZDJkZjMzNjVjOWQzYzhiMGZoLVU0: 00:35:45.508 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTdmM2VkMDA2NGJiYjIyOGRlZTQzYWE5YWE1NmY4YmJkNDQ0ZDFiNDMyZjQ4YmFlMjI0Y2I3MjEwOGUzN2ZiZZEqtHQ=: ]] 00:35:45.508 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTdmM2VkMDA2NGJiYjIyOGRlZTQzYWE5YWE1NmY4YmJkNDQ0ZDFiNDMyZjQ4YmFlMjI0Y2I3MjEwOGUzN2ZiZZEqtHQ=: 00:35:45.508 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:35:45.508 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:45.508 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:45.508 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:45.508 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:45.508 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:45.508 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:45.508 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.508 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.508 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.508 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:45.508 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:45.508 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:45.508 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:45.508 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:45.508 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:45.508 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:45.508 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:45.508 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:45.508 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:45.508 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:45.508 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:45.508 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.508 02:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.442 nvme0n1 00:35:46.442 02:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:35:46.442 02:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:46.442 02:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.442 02:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.442 02:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:46.442 02:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.442 02:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:46.442 02:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:46.442 02:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.442 02:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.701 02:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.701 02:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:46.701 02:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:35:46.701 02:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:46.701 02:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:46.701 02:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:46.701 02:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:46.701 02:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTA4MTlkYmJlYjFlYzI4NDc3NTQ3MTE5YjBmZDRjYTgyOGU1YmNmY2Y4OWQ5OGE2qB6Wqw==: 00:35:46.701 02:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:MDBlZjE0NWRiM2RlODIxYzZmMzQ4NzY2ZjdmMTE3Mzc4YzEwYzQzZDUwY2RhZWQ15VD1Dw==: 00:35:46.701 02:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:46.701 02:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:46.701 02:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTA4MTlkYmJlYjFlYzI4NDc3NTQ3MTE5YjBmZDRjYTgyOGU1YmNmY2Y4OWQ5OGE2qB6Wqw==: 00:35:46.701 02:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDBlZjE0NWRiM2RlODIxYzZmMzQ4NzY2ZjdmMTE3Mzc4YzEwYzQzZDUwY2RhZWQ15VD1Dw==: ]] 00:35:46.701 02:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDBlZjE0NWRiM2RlODIxYzZmMzQ4NzY2ZjdmMTE3Mzc4YzEwYzQzZDUwY2RhZWQ15VD1Dw==: 00:35:46.701 02:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:35:46.701 02:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:46.701 02:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:46.701 02:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:46.701 02:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:46.701 02:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:46.701 02:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:46.701 02:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.701 02:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.701 02:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.701 02:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:35:46.701 02:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:46.701 02:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:46.701 02:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:46.701 02:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:46.701 02:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:46.701 02:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:46.701 02:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:46.701 02:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:46.701 02:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:46.701 02:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:46.701 02:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:46.701 02:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.701 02:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.635 nvme0n1 00:35:47.635 02:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.635 02:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:47.635 02:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.635 02:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:35:47.635 02:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:47.635 02:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.635 02:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:47.635 02:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:47.635 02:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.635 02:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.635 02:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.635 02:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:47.635 02:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:35:47.635 02:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:47.635 02:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:47.635 02:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:47.635 02:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:47.635 02:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGRhMmUyZDJjMzE1YWJkMzVmYWRhMDMwNDM1NDM1NzLeN7uE: 00:35:47.635 02:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODIzZWQxNjM2M2E4ZGEwNTc3MGM4YjA4Yzc5Y2JiOGGpmDto: 00:35:47.635 02:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:47.635 02:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:47.635 02:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGRhMmUyZDJjMzE1YWJkMzVmYWRhMDMwNDM1NDM1NzLeN7uE: 
00:35:47.635 02:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODIzZWQxNjM2M2E4ZGEwNTc3MGM4YjA4Yzc5Y2JiOGGpmDto: ]] 00:35:47.635 02:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODIzZWQxNjM2M2E4ZGEwNTc3MGM4YjA4Yzc5Y2JiOGGpmDto: 00:35:47.635 02:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:35:47.635 02:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:47.635 02:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:47.635 02:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:47.635 02:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:47.635 02:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:47.635 02:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:47.635 02:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.635 02:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.635 02:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.635 02:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:47.635 02:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:47.635 02:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:47.635 02:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:47.635 02:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:47.635 02:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:47.636 02:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:47.636 02:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:47.636 02:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:47.636 02:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:47.636 02:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:47.636 02:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:47.636 02:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.636 02:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.569 nvme0n1 00:35:48.569 02:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.569 02:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:48.569 02:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.569 02:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.569 02:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:48.569 02:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.569 02:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:48.569 02:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:48.569 02:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable
00:35:48.569 02:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:48.569 02:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:48.569 02:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:48.569 02:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3
00:35:48.569 02:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:48.569 02:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:35:48.569 02:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:35:48.569 02:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:35:48.569 02:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDhkNGExYjg2ODAyMGZhOWUxODFlODIyNjg4NDYwOWZjZjUxNDgyNDQ4NTVlM2U4tlD2ng==:
00:35:48.569 02:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjhjZmJlOWE3MzU2OWEwMGNkNjk1OWUzNzYzMjg3NzR9R3Qo:
00:35:48.569 02:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:35:48.569 02:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:35:48.569 02:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDhkNGExYjg2ODAyMGZhOWUxODFlODIyNjg4NDYwOWZjZjUxNDgyNDQ4NTVlM2U4tlD2ng==:
00:35:48.569 02:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjhjZmJlOWE3MzU2OWEwMGNkNjk1OWUzNzYzMjg3NzR9R3Qo: ]]
00:35:48.569 02:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjhjZmJlOWE3MzU2OWEwMGNkNjk1OWUzNzYzMjg3NzR9R3Qo:
00:35:48.569 02:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3
00:35:48.569 02:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:48.569 02:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:35:48.569 02:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:35:48.569 02:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:35:48.569 02:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:48.569 02:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:35:48.569 02:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:48.569 02:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:48.569 02:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:48.569 02:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:48.569 02:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:48.569 02:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:48.569 02:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:48.569 02:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:48.569 02:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:48.569 02:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:48.569 02:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:48.569 02:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:48.569 02:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:48.569 02:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:48.569 02:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:35:48.569 02:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:48.569 02:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:49.954 nvme0n1
00:35:49.954 02:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:49.954 02:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:49.954 02:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:49.954 02:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:49.954 02:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:49.954 02:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:49.954 02:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:49.954 02:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:49.954 02:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:49.954 02:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:49.954 02:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:49.954 02:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:49.955 02:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4
00:35:49.955 02:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:49.955 02:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:35:49.955 02:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:35:49.955 02:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:35:49.955 02:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjIyZjcwNWJkYTcyYzQwOTk1ZGVjNzFlMTk4ZDI3NTJlNDVmOTcwYTQ4MWRhMmMzNjc5ZGYxZDFhMzY0MTI1NwmpRBo=:
00:35:49.955 02:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:35:49.955 02:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:35:49.955 02:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:35:49.955 02:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjIyZjcwNWJkYTcyYzQwOTk1ZGVjNzFlMTk4ZDI3NTJlNDVmOTcwYTQ4MWRhMmMzNjc5ZGYxZDFhMzY0MTI1NwmpRBo=:
00:35:49.955 02:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:35:49.955 02:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4
00:35:49.955 02:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:49.955 02:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:35:49.955 02:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:35:49.955 02:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:35:49.955 02:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:49.955 02:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:35:49.955 02:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:49.955 02:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:49.955 02:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:49.955 02:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:49.955 02:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:49.955 02:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:49.955 02:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:49.955 02:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:49.955 02:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:49.955 02:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:49.955 02:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:49.955 02:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:49.955 02:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:49.955 02:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:49.955 02:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:35:49.955 02:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:49.955 02:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:50.889 nvme0n1
00:35:50.889 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:50.889 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:50.889 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:50.889 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:50.889 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:50.889 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:50.889 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:50.889 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:50.889 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:50.889 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:50.889 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:50.889 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:35:50.889 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:35:50.889 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:50.889 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0
00:35:50.889 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:50.889 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:35:50.889 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:35:50.889 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:35:50.889 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjEzOTgwMTM0YTJkOGE2ZDJkZjMzNjVjOWQzYzhiMGZoLVU0:
00:35:50.889 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTdmM2VkMDA2NGJiYjIyOGRlZTQzYWE5YWE1NmY4YmJkNDQ0ZDFiNDMyZjQ4YmFlMjI0Y2I3MjEwOGUzN2ZiZZEqtHQ=:
00:35:50.889 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:35:50.889 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:35:50.889 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjEzOTgwMTM0YTJkOGE2ZDJkZjMzNjVjOWQzYzhiMGZoLVU0:
00:35:50.889 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTdmM2VkMDA2NGJiYjIyOGRlZTQzYWE5YWE1NmY4YmJkNDQ0ZDFiNDMyZjQ4YmFlMjI0Y2I3MjEwOGUzN2ZiZZEqtHQ=: ]]
00:35:50.889 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTdmM2VkMDA2NGJiYjIyOGRlZTQzYWE5YWE1NmY4YmJkNDQ0ZDFiNDMyZjQ4YmFlMjI0Y2I3MjEwOGUzN2ZiZZEqtHQ=:
00:35:50.889 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0
00:35:50.889 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:50.889 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:35:50.889 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:35:50.889 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:35:50.889 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:50.889 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:35:50.889 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:50.889 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:50.889 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:50.889 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:50.889 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:50.889 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:50.889 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:50.889 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:50.889 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:50.889 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:50.889 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:50.889 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:50.889 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:50.889 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:50.889 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:35:50.889 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:50.889 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:50.889 nvme0n1
00:35:50.889 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:50.889 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:50.889 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:50.889 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:50.889 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:50.889 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:50.890 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:50.890 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:50.890 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:50.890 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:50.890 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:50.890 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:50.890 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1
00:35:50.890 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:50.890 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:35:50.890 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:35:50.890 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:35:50.890 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTA4MTlkYmJlYjFlYzI4NDc3NTQ3MTE5YjBmZDRjYTgyOGU1YmNmY2Y4OWQ5OGE2qB6Wqw==:
00:35:50.890 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDBlZjE0NWRiM2RlODIxYzZmMzQ4NzY2ZjdmMTE3Mzc4YzEwYzQzZDUwY2RhZWQ15VD1Dw==:
00:35:50.890 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:35:50.890 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:35:50.890 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTA4MTlkYmJlYjFlYzI4NDc3NTQ3MTE5YjBmZDRjYTgyOGU1YmNmY2Y4OWQ5OGE2qB6Wqw==:
00:35:50.890 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDBlZjE0NWRiM2RlODIxYzZmMzQ4NzY2ZjdmMTE3Mzc4YzEwYzQzZDUwY2RhZWQ15VD1Dw==: ]]
00:35:50.890 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDBlZjE0NWRiM2RlODIxYzZmMzQ4NzY2ZjdmMTE3Mzc4YzEwYzQzZDUwY2RhZWQ15VD1Dw==:
00:35:50.890 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1
00:35:50.890 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:50.890 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:35:50.890 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:35:50.890 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:35:50.890 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:50.890 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:35:50.890 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:50.890 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:50.890 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:50.890 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:50.890 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:50.890 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:50.890 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:50.890 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:50.890 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:50.890 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:50.890 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:50.890 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:50.890 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:50.890 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:50.890 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:35:50.890 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:50.890 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:51.149 nvme0n1
00:35:51.149 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:51.149 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:51.149 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:51.149 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:51.149 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:51.149 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:51.149 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:51.149 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:51.149 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:51.149 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:51.149 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:51.149 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:51.149 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2
00:35:51.149 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:51.149 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:35:51.149 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:35:51.149 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:35:51.149 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGRhMmUyZDJjMzE1YWJkMzVmYWRhMDMwNDM1NDM1NzLeN7uE:
00:35:51.149 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODIzZWQxNjM2M2E4ZGEwNTc3MGM4YjA4Yzc5Y2JiOGGpmDto:
00:35:51.149 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:35:51.149 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:35:51.149 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGRhMmUyZDJjMzE1YWJkMzVmYWRhMDMwNDM1NDM1NzLeN7uE:
00:35:51.149 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODIzZWQxNjM2M2E4ZGEwNTc3MGM4YjA4Yzc5Y2JiOGGpmDto: ]]
00:35:51.149 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODIzZWQxNjM2M2E4ZGEwNTc3MGM4YjA4Yzc5Y2JiOGGpmDto:
00:35:51.149 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2
00:35:51.149 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:51.149 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:35:51.149 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:35:51.149 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:35:51.149 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:51.149 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:35:51.149 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:51.149 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:51.149 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:51.149 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:51.149 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:51.149 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:51.149 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:51.149 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:51.149 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:51.149 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:51.149 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:51.149 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:51.149 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:51.149 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:51.149 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:35:51.149 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:51.149 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:51.407 nvme0n1
00:35:51.407 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:51.407 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:51.407 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:51.407 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:51.407 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:51.407 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:51.407 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:51.407 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:51.407 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:51.407 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:51.407 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:51.407 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:51.407 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3
00:35:51.407 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:51.407 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:35:51.407 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:35:51.407 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:35:51.407 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDhkNGExYjg2ODAyMGZhOWUxODFlODIyNjg4NDYwOWZjZjUxNDgyNDQ4NTVlM2U4tlD2ng==:
00:35:51.407 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjhjZmJlOWE3MzU2OWEwMGNkNjk1OWUzNzYzMjg3NzR9R3Qo:
00:35:51.407 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:35:51.407 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:35:51.407 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDhkNGExYjg2ODAyMGZhOWUxODFlODIyNjg4NDYwOWZjZjUxNDgyNDQ4NTVlM2U4tlD2ng==:
00:35:51.407 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjhjZmJlOWE3MzU2OWEwMGNkNjk1OWUzNzYzMjg3NzR9R3Qo: ]]
00:35:51.407 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjhjZmJlOWE3MzU2OWEwMGNkNjk1OWUzNzYzMjg3NzR9R3Qo:
00:35:51.407 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3
00:35:51.407 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:51.407 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:35:51.407 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:35:51.408 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:35:51.408 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:51.408 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:35:51.408 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:51.408 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:51.408 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:51.408 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:51.408 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:51.408 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:51.408 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:51.408 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:51.408 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:51.408 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:51.408 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:51.408 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:51.408 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:51.408 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:51.408 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:35:51.408 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:51.408 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:51.667 nvme0n1
00:35:51.667 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:51.667 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:51.667 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:51.667 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:51.667 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:51.667 02:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:51.667 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:51.667 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:51.667 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:51.667 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:51.667 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:51.667 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:51.667 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4
00:35:51.667 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:51.667 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:35:51.667 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:35:51.667 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:35:51.667 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjIyZjcwNWJkYTcyYzQwOTk1ZGVjNzFlMTk4ZDI3NTJlNDVmOTcwYTQ4MWRhMmMzNjc5ZGYxZDFhMzY0MTI1NwmpRBo=:
00:35:51.667 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:35:51.667 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:35:51.667 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:35:51.667 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjIyZjcwNWJkYTcyYzQwOTk1ZGVjNzFlMTk4ZDI3NTJlNDVmOTcwYTQ4MWRhMmMzNjc5ZGYxZDFhMzY0MTI1NwmpRBo=:
00:35:51.667 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:35:51.667 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4
00:35:51.667 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:51.667 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:35:51.667 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:35:51.667 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:35:51.667 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:51.667 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:35:51.667 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:51.667 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:51.667 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:51.667 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:51.667 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:51.667 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:51.667 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:51.667 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:51.667 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:51.667 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:51.667 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:51.667 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:51.667 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:51.667 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:51.667 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:35:51.667 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:51.667 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:51.926 nvme0n1
00:35:51.926 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:51.926 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:51.926 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:51.926 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:51.926 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:51.926 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:51.926 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:51.926 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:51.926 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:51.926 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:51.926 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:51.926 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:35:51.926 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:51.926 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0
00:35:51.926 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:51.926 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:35:51.926 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:35:51.926 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:35:51.926 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjEzOTgwMTM0YTJkOGE2ZDJkZjMzNjVjOWQzYzhiMGZoLVU0:
00:35:51.926 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTdmM2VkMDA2NGJiYjIyOGRlZTQzYWE5YWE1NmY4YmJkNDQ0ZDFiNDMyZjQ4YmFlMjI0Y2I3MjEwOGUzN2ZiZZEqtHQ=:
00:35:51.926 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:35:51.926 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:35:51.926 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjEzOTgwMTM0YTJkOGE2ZDJkZjMzNjVjOWQzYzhiMGZoLVU0:
00:35:51.926 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTdmM2VkMDA2NGJiYjIyOGRlZTQzYWE5YWE1NmY4YmJkNDQ0ZDFiNDMyZjQ4YmFlMjI0Y2I3MjEwOGUzN2ZiZZEqtHQ=: ]]
00:35:51.926 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTdmM2VkMDA2NGJiYjIyOGRlZTQzYWE5YWE1NmY4YmJkNDQ0ZDFiNDMyZjQ4YmFlMjI0Y2I3MjEwOGUzN2ZiZZEqtHQ=:
00:35:51.926 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0
00:35:51.926 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:51.926 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:35:51.926 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:35:51.926 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:35:51.926 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:51.926 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:35:51.926 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:51.926 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:51.926 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:51.926
02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:51.926 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:51.926 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:51.926 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:51.926 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:51.926 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:51.926 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:51.926 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:51.926 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:51.926 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:51.926 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:51.926 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:51.926 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.926 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.185 nvme0n1 00:35:52.185 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.185 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:52.185 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.185 02:56:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:52.185 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.185 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.185 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:52.185 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:52.185 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.185 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.185 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.185 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:52.185 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:35:52.185 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:52.185 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:52.185 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:52.185 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:52.185 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTA4MTlkYmJlYjFlYzI4NDc3NTQ3MTE5YjBmZDRjYTgyOGU1YmNmY2Y4OWQ5OGE2qB6Wqw==: 00:35:52.185 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDBlZjE0NWRiM2RlODIxYzZmMzQ4NzY2ZjdmMTE3Mzc4YzEwYzQzZDUwY2RhZWQ15VD1Dw==: 00:35:52.185 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:52.185 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:52.185 
02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTA4MTlkYmJlYjFlYzI4NDc3NTQ3MTE5YjBmZDRjYTgyOGU1YmNmY2Y4OWQ5OGE2qB6Wqw==: 00:35:52.185 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDBlZjE0NWRiM2RlODIxYzZmMzQ4NzY2ZjdmMTE3Mzc4YzEwYzQzZDUwY2RhZWQ15VD1Dw==: ]] 00:35:52.185 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDBlZjE0NWRiM2RlODIxYzZmMzQ4NzY2ZjdmMTE3Mzc4YzEwYzQzZDUwY2RhZWQ15VD1Dw==: 00:35:52.185 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:35:52.185 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:52.185 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:52.185 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:52.185 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:52.185 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:52.185 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:52.185 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.185 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.185 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.185 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:52.185 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:52.185 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:52.186 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
local -A ip_candidates 00:35:52.186 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:52.186 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:52.186 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:52.186 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:52.186 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:52.186 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:52.186 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:52.186 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:52.186 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.186 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.444 nvme0n1 00:35:52.444 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.444 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:52.444 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:52.444 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.444 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.444 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.444 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:35:52.444 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:52.444 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.444 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.444 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.444 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:52.444 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:35:52.444 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:52.444 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:52.444 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:52.444 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:52.444 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGRhMmUyZDJjMzE1YWJkMzVmYWRhMDMwNDM1NDM1NzLeN7uE: 00:35:52.444 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODIzZWQxNjM2M2E4ZGEwNTc3MGM4YjA4Yzc5Y2JiOGGpmDto: 00:35:52.444 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:52.444 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:52.444 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGRhMmUyZDJjMzE1YWJkMzVmYWRhMDMwNDM1NDM1NzLeN7uE: 00:35:52.444 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODIzZWQxNjM2M2E4ZGEwNTc3MGM4YjA4Yzc5Y2JiOGGpmDto: ]] 00:35:52.445 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODIzZWQxNjM2M2E4ZGEwNTc3MGM4YjA4Yzc5Y2JiOGGpmDto: 
00:35:52.445 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:35:52.445 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:52.445 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:52.445 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:52.445 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:52.445 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:52.445 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:52.445 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.445 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.445 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.445 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:52.445 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:52.445 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:52.445 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:52.445 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:52.445 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:52.445 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:52.445 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:52.445 02:56:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:52.445 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:52.445 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:52.445 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:52.445 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.445 02:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.703 nvme0n1 00:35:52.703 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.703 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:52.704 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:52.704 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.704 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.704 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.704 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:52.704 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:52.704 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.704 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.704 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.704 02:56:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:52.704 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:35:52.704 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:52.704 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:52.704 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:52.704 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:52.704 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDhkNGExYjg2ODAyMGZhOWUxODFlODIyNjg4NDYwOWZjZjUxNDgyNDQ4NTVlM2U4tlD2ng==: 00:35:52.704 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjhjZmJlOWE3MzU2OWEwMGNkNjk1OWUzNzYzMjg3NzR9R3Qo: 00:35:52.704 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:52.704 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:52.704 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDhkNGExYjg2ODAyMGZhOWUxODFlODIyNjg4NDYwOWZjZjUxNDgyNDQ4NTVlM2U4tlD2ng==: 00:35:52.704 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjhjZmJlOWE3MzU2OWEwMGNkNjk1OWUzNzYzMjg3NzR9R3Qo: ]] 00:35:52.704 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjhjZmJlOWE3MzU2OWEwMGNkNjk1OWUzNzYzMjg3NzR9R3Qo: 00:35:52.704 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:35:52.704 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:52.704 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:52.704 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 
00:35:52.704 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:52.704 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:52.704 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:52.704 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.704 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.704 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.704 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:52.704 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:52.704 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:52.704 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:52.704 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:52.704 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:52.704 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:52.704 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:52.704 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:52.704 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:52.704 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:52.704 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:52.704 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.704 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.963 nvme0n1 00:35:52.963 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.963 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:52.963 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.963 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:52.963 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.963 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.963 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:52.963 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:52.963 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.963 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.963 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.963 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:52.963 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:35:52.963 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:52.963 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:52.963 02:56:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:52.963 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:52.963 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjIyZjcwNWJkYTcyYzQwOTk1ZGVjNzFlMTk4ZDI3NTJlNDVmOTcwYTQ4MWRhMmMzNjc5ZGYxZDFhMzY0MTI1NwmpRBo=: 00:35:52.963 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:52.963 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:52.963 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:52.963 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjIyZjcwNWJkYTcyYzQwOTk1ZGVjNzFlMTk4ZDI3NTJlNDVmOTcwYTQ4MWRhMmMzNjc5ZGYxZDFhMzY0MTI1NwmpRBo=: 00:35:52.963 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:52.963 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:35:52.963 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:52.963 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:52.963 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:52.963 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:52.963 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:52.963 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:52.963 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.963 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.963 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.963 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:52.963 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:52.963 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:52.963 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:52.963 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:52.963 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:52.963 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:52.963 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:52.963 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:52.963 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:52.963 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:52.963 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:52.963 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.963 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.222 nvme0n1 00:35:53.222 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.222 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:53.222 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:35:53.222 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.222 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:53.222 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.222 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:53.222 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:53.222 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.222 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.222 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.222 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:53.222 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:53.222 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:35:53.222 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:53.222 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:53.222 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:53.222 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:53.222 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjEzOTgwMTM0YTJkOGE2ZDJkZjMzNjVjOWQzYzhiMGZoLVU0: 00:35:53.222 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTdmM2VkMDA2NGJiYjIyOGRlZTQzYWE5YWE1NmY4YmJkNDQ0ZDFiNDMyZjQ4YmFlMjI0Y2I3MjEwOGUzN2ZiZZEqtHQ=: 00:35:53.222 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha512)'
00:35:53.222 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:35:53.222 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjEzOTgwMTM0YTJkOGE2ZDJkZjMzNjVjOWQzYzhiMGZoLVU0:
00:35:53.222 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTdmM2VkMDA2NGJiYjIyOGRlZTQzYWE5YWE1NmY4YmJkNDQ0ZDFiNDMyZjQ4YmFlMjI0Y2I3MjEwOGUzN2ZiZZEqtHQ=: ]]
00:35:53.222 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTdmM2VkMDA2NGJiYjIyOGRlZTQzYWE5YWE1NmY4YmJkNDQ0ZDFiNDMyZjQ4YmFlMjI0Y2I3MjEwOGUzN2ZiZZEqtHQ=:
00:35:53.222 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0
00:35:53.222 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:53.222 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:35:53.222 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:35:53.222 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:35:53.222 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:53.222 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:35:53.222 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:53.222 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:53.222 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:53.222 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:53.222 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:53.222 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:53.222 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:53.222 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:53.222 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:53.222 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:53.222 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:53.222 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:53.222 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:53.222 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:53.222 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:35:53.222 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:53.222 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:53.482 nvme0n1
00:35:53.482 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:53.482 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:53.482 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:53.482 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:53.482 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:53.482 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:53.740 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:53.740 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:53.740 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:53.740 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:53.740 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:53.740 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:53.740 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1
00:35:53.740 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:53.740 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:35:53.740 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:35:53.740 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:35:53.740 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTA4MTlkYmJlYjFlYzI4NDc3NTQ3MTE5YjBmZDRjYTgyOGU1YmNmY2Y4OWQ5OGE2qB6Wqw==:
00:35:53.740 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDBlZjE0NWRiM2RlODIxYzZmMzQ4NzY2ZjdmMTE3Mzc4YzEwYzQzZDUwY2RhZWQ15VD1Dw==:
00:35:53.740 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:35:53.740 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:35:53.740 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTA4MTlkYmJlYjFlYzI4NDc3NTQ3MTE5YjBmZDRjYTgyOGU1YmNmY2Y4OWQ5OGE2qB6Wqw==:
00:35:53.740 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDBlZjE0NWRiM2RlODIxYzZmMzQ4NzY2ZjdmMTE3Mzc4YzEwYzQzZDUwY2RhZWQ15VD1Dw==: ]]
00:35:53.740 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDBlZjE0NWRiM2RlODIxYzZmMzQ4NzY2ZjdmMTE3Mzc4YzEwYzQzZDUwY2RhZWQ15VD1Dw==:
00:35:53.740 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1
00:35:53.740 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:53.740 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:35:53.740 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:35:53.740 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:35:53.740 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:53.740 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:35:53.740 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:53.740 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:53.740 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:53.740 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:53.740 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:53.740 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:53.740 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:53.740 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:53.740 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:53.740 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:53.740 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:53.740 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:53.740 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:53.740 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:53.740 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:35:53.741 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:53.741 02:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:54.000 nvme0n1
00:35:54.000 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:54.000 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:54.000 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:54.000 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:54.000 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:54.000 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:54.000 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:54.000 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:54.000 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:54.000 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:54.000 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:54.000 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:54.000 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2
00:35:54.000 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:54.000 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:35:54.000 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:35:54.000 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:35:54.000 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGRhMmUyZDJjMzE1YWJkMzVmYWRhMDMwNDM1NDM1NzLeN7uE:
00:35:54.000 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODIzZWQxNjM2M2E4ZGEwNTc3MGM4YjA4Yzc5Y2JiOGGpmDto:
00:35:54.000 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:35:54.000 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:35:54.000 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGRhMmUyZDJjMzE1YWJkMzVmYWRhMDMwNDM1NDM1NzLeN7uE:
00:35:54.000 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODIzZWQxNjM2M2E4ZGEwNTc3MGM4YjA4Yzc5Y2JiOGGpmDto: ]]
00:35:54.000 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODIzZWQxNjM2M2E4ZGEwNTc3MGM4YjA4Yzc5Y2JiOGGpmDto:
00:35:54.000 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2
00:35:54.000 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:54.000 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:35:54.000 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:35:54.000 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:35:54.000 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:54.000 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:35:54.000 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:54.000 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:54.000 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:54.000 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:54.000 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:54.000 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:54.000 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:54.000 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:54.000 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:54.000 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:54.000 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:54.000 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:54.000 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:54.000 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:54.000 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:35:54.000 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:54.000 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:54.258 nvme0n1
00:35:54.258 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:54.258 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:54.258 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:54.258 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:54.258 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:54.258 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:54.258 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:54.258 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:54.258 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:54.258 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:54.258 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:54.258 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:54.258 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3
00:35:54.258 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:54.258 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:35:54.258 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:35:54.258 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:35:54.258 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDhkNGExYjg2ODAyMGZhOWUxODFlODIyNjg4NDYwOWZjZjUxNDgyNDQ4NTVlM2U4tlD2ng==:
00:35:54.258 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjhjZmJlOWE3MzU2OWEwMGNkNjk1OWUzNzYzMjg3NzR9R3Qo:
00:35:54.258 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:35:54.258 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:35:54.258 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDhkNGExYjg2ODAyMGZhOWUxODFlODIyNjg4NDYwOWZjZjUxNDgyNDQ4NTVlM2U4tlD2ng==:
00:35:54.258 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjhjZmJlOWE3MzU2OWEwMGNkNjk1OWUzNzYzMjg3NzR9R3Qo: ]]
00:35:54.258 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjhjZmJlOWE3MzU2OWEwMGNkNjk1OWUzNzYzMjg3NzR9R3Qo:
00:35:54.258 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3
00:35:54.258 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:54.258 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:35:54.258 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:35:54.258 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:35:54.258 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:54.258 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:35:54.258 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:54.258 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:54.258 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:54.258 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:54.258 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:54.258 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:54.258 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:54.258 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:54.258 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:54.258 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:54.258 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:54.258 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:54.258 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:54.258 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:54.258 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:35:54.258 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:54.258 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:54.517 nvme0n1
00:35:54.517 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:54.776 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:54.776 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:54.776 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:54.776 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:54.776 02:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:54.776 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:54.776 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:54.776 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:54.776 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:54.776 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:54.776 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:54.776 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4
00:35:54.776 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:54.776 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:35:54.776 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:35:54.776 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:35:54.776 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjIyZjcwNWJkYTcyYzQwOTk1ZGVjNzFlMTk4ZDI3NTJlNDVmOTcwYTQ4MWRhMmMzNjc5ZGYxZDFhMzY0MTI1NwmpRBo=:
00:35:54.776 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:35:54.776 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:35:54.776 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:35:54.776 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjIyZjcwNWJkYTcyYzQwOTk1ZGVjNzFlMTk4ZDI3NTJlNDVmOTcwYTQ4MWRhMmMzNjc5ZGYxZDFhMzY0MTI1NwmpRBo=:
00:35:54.776 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:35:54.776 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4
00:35:54.776 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:54.776 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:35:54.776 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:35:54.776 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:35:54.776 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:54.776 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:35:54.776 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:54.776 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:54.776 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:54.776 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:54.776 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:54.776 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:54.776 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:54.777 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:54.777 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:54.777 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:54.777 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:54.777 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:54.777 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:54.777 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:54.777 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:35:54.777 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:54.777 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:55.036 nvme0n1
00:35:55.036 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:55.036 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:55.036 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:55.036 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:55.036 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:55.036 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:55.036 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:55.036 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:55.036 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:55.036 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:55.036 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:55.036 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:35:55.036 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:55.036 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0
00:35:55.036 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:55.036 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:35:55.036 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:35:55.036 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:35:55.036 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjEzOTgwMTM0YTJkOGE2ZDJkZjMzNjVjOWQzYzhiMGZoLVU0:
00:35:55.036 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTdmM2VkMDA2NGJiYjIyOGRlZTQzYWE5YWE1NmY4YmJkNDQ0ZDFiNDMyZjQ4YmFlMjI0Y2I3MjEwOGUzN2ZiZZEqtHQ=:
00:35:55.036 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:35:55.036 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:35:55.036 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjEzOTgwMTM0YTJkOGE2ZDJkZjMzNjVjOWQzYzhiMGZoLVU0:
00:35:55.036 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTdmM2VkMDA2NGJiYjIyOGRlZTQzYWE5YWE1NmY4YmJkNDQ0ZDFiNDMyZjQ4YmFlMjI0Y2I3MjEwOGUzN2ZiZZEqtHQ=: ]]
00:35:55.036 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTdmM2VkMDA2NGJiYjIyOGRlZTQzYWE5YWE1NmY4YmJkNDQ0ZDFiNDMyZjQ4YmFlMjI0Y2I3MjEwOGUzN2ZiZZEqtHQ=:
00:35:55.036 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0
00:35:55.036 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:55.036 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:35:55.036 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:35:55.036 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:35:55.036 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:55.036 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:35:55.036 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:55.036 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:55.036 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:55.036 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:55.036 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:55.036 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:55.036 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:55.036 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:55.036 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:55.036 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:55.036 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:55.036 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:55.036 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:55.036 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:55.036 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:35:55.036 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:55.036 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:55.603 nvme0n1
00:35:55.603 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:55.603 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:55.603 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:55.603 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:55.603 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:55.603 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:55.603 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:55.604 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:55.604 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:55.604 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:55.604 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:55.604 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:55.604 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1
00:35:55.604 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:55.604 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:35:55.604 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:35:55.604 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:35:55.604 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTA4MTlkYmJlYjFlYzI4NDc3NTQ3MTE5YjBmZDRjYTgyOGU1YmNmY2Y4OWQ5OGE2qB6Wqw==:
00:35:55.604 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDBlZjE0NWRiM2RlODIxYzZmMzQ4NzY2ZjdmMTE3Mzc4YzEwYzQzZDUwY2RhZWQ15VD1Dw==:
00:35:55.604 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:35:55.604 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:35:55.604 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTA4MTlkYmJlYjFlYzI4NDc3NTQ3MTE5YjBmZDRjYTgyOGU1YmNmY2Y4OWQ5OGE2qB6Wqw==:
00:35:55.604 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDBlZjE0NWRiM2RlODIxYzZmMzQ4NzY2ZjdmMTE3Mzc4YzEwYzQzZDUwY2RhZWQ15VD1Dw==: ]]
00:35:55.604 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDBlZjE0NWRiM2RlODIxYzZmMzQ4NzY2ZjdmMTE3Mzc4YzEwYzQzZDUwY2RhZWQ15VD1Dw==:
00:35:55.604 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1
00:35:55.604 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:55.604 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:35:55.604 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:35:55.604 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:35:55.604 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:55.604 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:35:55.604 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:55.604 02:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:55.604 02:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:55.604 02:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:55.604 02:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:55.604 02:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:55.604 02:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:55.604 02:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:55.604 02:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:55.604 02:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:55.604 02:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:55.604 02:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:55.604 02:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:55.604 02:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:55.604 02:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:35:55.604 02:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:55.604 02:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:56.171 nvme0n1
00:35:56.171 02:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:56.171 02:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:56.171 02:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:56.171 02:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:56.171 02:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:56.171 02:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:56.171 02:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:56.171 02:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:56.171 02:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:56.171 02:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:56.171 02:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:56.171 02:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:56.171 02:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2
00:35:56.171 02:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:56.171 02:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:35:56.171 02:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:35:56.171 02:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:35:56.171 02:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGRhMmUyZDJjMzE1YWJkMzVmYWRhMDMwNDM1NDM1NzLeN7uE:
00:35:56.171 02:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODIzZWQxNjM2M2E4ZGEwNTc3MGM4YjA4Yzc5Y2JiOGGpmDto:
00:35:56.171 02:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:35:56.171 02:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:35:56.171 02:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGRhMmUyZDJjMzE1YWJkMzVmYWRhMDMwNDM1NDM1NzLeN7uE:
00:35:56.171 02:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODIzZWQxNjM2M2E4ZGEwNTc3MGM4YjA4Yzc5Y2JiOGGpmDto: ]]
00:35:56.171 02:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODIzZWQxNjM2M2E4ZGEwNTc3MGM4YjA4Yzc5Y2JiOGGpmDto:
00:35:56.171 02:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2
00:35:56.171 02:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:56.171 02:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:35:56.172 02:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:35:56.172 02:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:35:56.172 02:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:56.172 02:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:35:56.172 02:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:56.172 02:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:56.430 02:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:56.430 02:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:56.430 02:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:56.430 02:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:56.430 02:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:56.430 02:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:56.430 02:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:56.430 02:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:56.430 02:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:56.430 02:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:56.430 02:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:56.430 02:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:56.430 02:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host --
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:56.430 02:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.430 02:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.997 nvme0n1 00:35:56.997 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.997 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:56.997 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.997 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:56.997 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.997 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.997 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:56.997 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:56.997 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.997 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.997 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.997 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:56.997 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:35:56.997 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:56.997 02:56:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:56.997 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:56.997 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:56.997 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDhkNGExYjg2ODAyMGZhOWUxODFlODIyNjg4NDYwOWZjZjUxNDgyNDQ4NTVlM2U4tlD2ng==: 00:35:56.997 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjhjZmJlOWE3MzU2OWEwMGNkNjk1OWUzNzYzMjg3NzR9R3Qo: 00:35:56.997 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:56.997 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:56.997 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDhkNGExYjg2ODAyMGZhOWUxODFlODIyNjg4NDYwOWZjZjUxNDgyNDQ4NTVlM2U4tlD2ng==: 00:35:56.997 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjhjZmJlOWE3MzU2OWEwMGNkNjk1OWUzNzYzMjg3NzR9R3Qo: ]] 00:35:56.997 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjhjZmJlOWE3MzU2OWEwMGNkNjk1OWUzNzYzMjg3NzR9R3Qo: 00:35:56.997 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:35:56.997 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:56.997 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:56.997 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:56.997 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:56.997 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:56.997 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:56.997 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.997 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.997 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.997 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:56.997 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:56.997 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:56.997 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:56.997 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:56.997 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:56.997 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:56.997 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:56.997 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:56.997 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:56.997 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:56.997 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:56.997 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.997 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:35:57.636 nvme0n1 00:35:57.636 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.636 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:57.636 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:57.636 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.636 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.636 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.636 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:57.636 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:57.636 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.636 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.636 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.636 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:57.636 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:35:57.636 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:57.636 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:57.636 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:57.636 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:57.636 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NjIyZjcwNWJkYTcyYzQwOTk1ZGVjNzFlMTk4ZDI3NTJlNDVmOTcwYTQ4MWRhMmMzNjc5ZGYxZDFhMzY0MTI1NwmpRBo=: 00:35:57.637 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:57.637 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:57.637 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:57.637 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjIyZjcwNWJkYTcyYzQwOTk1ZGVjNzFlMTk4ZDI3NTJlNDVmOTcwYTQ4MWRhMmMzNjc5ZGYxZDFhMzY0MTI1NwmpRBo=: 00:35:57.637 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:57.637 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:35:57.637 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:57.637 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:57.637 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:57.637 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:57.637 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:57.637 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:57.637 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.637 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.637 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.637 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:57.637 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:57.637 
02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:57.637 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:57.637 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:57.637 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:57.637 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:57.637 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:57.637 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:57.637 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:57.637 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:57.637 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:57.637 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.637 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.203 nvme0n1 00:35:58.203 02:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.203 02:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:58.203 02:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.203 02:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.203 02:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:58.203 02:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.203 02:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:58.203 02:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:58.203 02:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.203 02:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.203 02:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.204 02:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:58.204 02:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:58.204 02:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:35:58.204 02:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:58.204 02:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:58.204 02:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:58.204 02:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:58.204 02:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjEzOTgwMTM0YTJkOGE2ZDJkZjMzNjVjOWQzYzhiMGZoLVU0: 00:35:58.204 02:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTdmM2VkMDA2NGJiYjIyOGRlZTQzYWE5YWE1NmY4YmJkNDQ0ZDFiNDMyZjQ4YmFlMjI0Y2I3MjEwOGUzN2ZiZZEqtHQ=: 00:35:58.204 02:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:58.204 02:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:58.204 02:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YjEzOTgwMTM0YTJkOGE2ZDJkZjMzNjVjOWQzYzhiMGZoLVU0: 00:35:58.204 02:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTdmM2VkMDA2NGJiYjIyOGRlZTQzYWE5YWE1NmY4YmJkNDQ0ZDFiNDMyZjQ4YmFlMjI0Y2I3MjEwOGUzN2ZiZZEqtHQ=: ]] 00:35:58.204 02:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTdmM2VkMDA2NGJiYjIyOGRlZTQzYWE5YWE1NmY4YmJkNDQ0ZDFiNDMyZjQ4YmFlMjI0Y2I3MjEwOGUzN2ZiZZEqtHQ=: 00:35:58.204 02:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:35:58.204 02:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:58.204 02:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:58.204 02:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:58.204 02:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:58.204 02:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:58.204 02:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:58.204 02:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.204 02:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.204 02:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.204 02:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:58.204 02:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:58.204 02:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:58.204 02:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:58.204 02:56:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:58.204 02:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:58.204 02:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:58.204 02:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:58.204 02:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:58.204 02:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:58.204 02:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:58.204 02:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:58.204 02:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.204 02:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.141 nvme0n1 00:35:59.141 02:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.141 02:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:59.141 02:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:59.141 02:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.141 02:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.141 02:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.141 02:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:59.141 02:56:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:59.141 02:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.141 02:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.141 02:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.141 02:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:59.141 02:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:35:59.141 02:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:59.141 02:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:59.141 02:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:59.141 02:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:59.141 02:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTA4MTlkYmJlYjFlYzI4NDc3NTQ3MTE5YjBmZDRjYTgyOGU1YmNmY2Y4OWQ5OGE2qB6Wqw==: 00:35:59.141 02:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDBlZjE0NWRiM2RlODIxYzZmMzQ4NzY2ZjdmMTE3Mzc4YzEwYzQzZDUwY2RhZWQ15VD1Dw==: 00:35:59.141 02:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:59.141 02:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:59.141 02:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTA4MTlkYmJlYjFlYzI4NDc3NTQ3MTE5YjBmZDRjYTgyOGU1YmNmY2Y4OWQ5OGE2qB6Wqw==: 00:35:59.141 02:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDBlZjE0NWRiM2RlODIxYzZmMzQ4NzY2ZjdmMTE3Mzc4YzEwYzQzZDUwY2RhZWQ15VD1Dw==: ]] 00:35:59.141 02:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:MDBlZjE0NWRiM2RlODIxYzZmMzQ4NzY2ZjdmMTE3Mzc4YzEwYzQzZDUwY2RhZWQ15VD1Dw==: 00:35:59.141 02:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:35:59.141 02:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:59.141 02:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:59.141 02:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:59.141 02:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:59.141 02:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:59.141 02:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:59.141 02:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.141 02:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.141 02:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.141 02:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:59.141 02:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:59.141 02:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:59.141 02:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:59.141 02:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:59.141 02:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:59.141 02:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:59.141 02:56:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:59.141 02:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:59.141 02:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:59.141 02:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:59.141 02:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:59.141 02:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.141 02:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.077 nvme0n1 00:36:00.077 02:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.077 02:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:00.077 02:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:00.077 02:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.077 02:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.077 02:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.077 02:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:00.077 02:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:00.077 02:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.077 02:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.077 02:56:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.077 02:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:00.077 02:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:36:00.077 02:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:00.077 02:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:00.077 02:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:00.077 02:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:00.077 02:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGRhMmUyZDJjMzE1YWJkMzVmYWRhMDMwNDM1NDM1NzLeN7uE: 00:36:00.077 02:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODIzZWQxNjM2M2E4ZGEwNTc3MGM4YjA4Yzc5Y2JiOGGpmDto: 00:36:00.077 02:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:00.077 02:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:00.077 02:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGRhMmUyZDJjMzE1YWJkMzVmYWRhMDMwNDM1NDM1NzLeN7uE: 00:36:00.077 02:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODIzZWQxNjM2M2E4ZGEwNTc3MGM4YjA4Yzc5Y2JiOGGpmDto: ]] 00:36:00.077 02:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODIzZWQxNjM2M2E4ZGEwNTc3MGM4YjA4Yzc5Y2JiOGGpmDto: 00:36:00.077 02:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:36:00.077 02:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:00.077 02:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:00.077 02:56:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:00.077 02:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:00.077 02:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:00.077 02:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:00.077 02:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.077 02:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.077 02:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.077 02:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:00.077 02:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:00.077 02:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:00.077 02:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:00.077 02:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:00.077 02:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:00.077 02:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:00.077 02:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:00.077 02:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:00.077 02:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:00.077 02:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:00.077 02:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:00.077 02:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.077 02:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.452 nvme0n1 00:36:01.452 02:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.452 02:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:01.452 02:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.452 02:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:01.452 02:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.452 02:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.452 02:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:01.452 02:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:01.452 02:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.452 02:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.452 02:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.452 02:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:01.452 02:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:36:01.452 02:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:01.452 02:56:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:01.452 02:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:01.452 02:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:01.452 02:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDhkNGExYjg2ODAyMGZhOWUxODFlODIyNjg4NDYwOWZjZjUxNDgyNDQ4NTVlM2U4tlD2ng==: 00:36:01.452 02:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjhjZmJlOWE3MzU2OWEwMGNkNjk1OWUzNzYzMjg3NzR9R3Qo: 00:36:01.452 02:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:01.452 02:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:01.452 02:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDhkNGExYjg2ODAyMGZhOWUxODFlODIyNjg4NDYwOWZjZjUxNDgyNDQ4NTVlM2U4tlD2ng==: 00:36:01.452 02:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjhjZmJlOWE3MzU2OWEwMGNkNjk1OWUzNzYzMjg3NzR9R3Qo: ]] 00:36:01.452 02:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjhjZmJlOWE3MzU2OWEwMGNkNjk1OWUzNzYzMjg3NzR9R3Qo: 00:36:01.452 02:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:36:01.452 02:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:01.452 02:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:01.452 02:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:01.452 02:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:01.452 02:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:01.452 02:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:01.452 02:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.452 02:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.452 02:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.452 02:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:01.452 02:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:01.452 02:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:01.452 02:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:01.452 02:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:01.452 02:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:01.452 02:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:01.452 02:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:01.452 02:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:01.452 02:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:01.452 02:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:01.452 02:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:01.452 02:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.452 02:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:36:02.387 nvme0n1 00:36:02.387 02:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.387 02:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:02.387 02:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.387 02:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:02.387 02:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.387 02:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.387 02:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:02.387 02:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:02.387 02:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.387 02:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.387 02:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.387 02:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:02.387 02:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:36:02.387 02:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:02.387 02:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:02.388 02:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:02.388 02:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:02.388 02:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NjIyZjcwNWJkYTcyYzQwOTk1ZGVjNzFlMTk4ZDI3NTJlNDVmOTcwYTQ4MWRhMmMzNjc5ZGYxZDFhMzY0MTI1NwmpRBo=: 00:36:02.388 02:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:02.388 02:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:02.388 02:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:02.388 02:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjIyZjcwNWJkYTcyYzQwOTk1ZGVjNzFlMTk4ZDI3NTJlNDVmOTcwYTQ4MWRhMmMzNjc5ZGYxZDFhMzY0MTI1NwmpRBo=: 00:36:02.388 02:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:02.388 02:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:36:02.388 02:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:02.388 02:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:02.388 02:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:02.388 02:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:02.388 02:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:02.388 02:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:02.388 02:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.388 02:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.388 02:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.388 02:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:02.388 02:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:02.388 
02:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:02.388 02:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:02.388 02:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:02.388 02:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:02.388 02:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:02.388 02:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:02.388 02:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:02.388 02:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:02.388 02:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:02.388 02:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:02.388 02:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.388 02:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.322 nvme0n1 00:36:03.322 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.322 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:03.322 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:03.322 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.322 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.322 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.322 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:03.322 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:03.322 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.322 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.322 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.322 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:36:03.322 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:03.322 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:03.322 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:03.322 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:03.322 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTA4MTlkYmJlYjFlYzI4NDc3NTQ3MTE5YjBmZDRjYTgyOGU1YmNmY2Y4OWQ5OGE2qB6Wqw==: 00:36:03.323 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDBlZjE0NWRiM2RlODIxYzZmMzQ4NzY2ZjdmMTE3Mzc4YzEwYzQzZDUwY2RhZWQ15VD1Dw==: 00:36:03.323 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:03.323 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:03.323 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTA4MTlkYmJlYjFlYzI4NDc3NTQ3MTE5YjBmZDRjYTgyOGU1YmNmY2Y4OWQ5OGE2qB6Wqw==: 00:36:03.323 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDBlZjE0NWRiM2RlODIxYzZmMzQ4NzY2ZjdmMTE3Mzc4YzEwYzQzZDUwY2RhZWQ15VD1Dw==: ]] 00:36:03.323 
02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDBlZjE0NWRiM2RlODIxYzZmMzQ4NzY2ZjdmMTE3Mzc4YzEwYzQzZDUwY2RhZWQ15VD1Dw==: 00:36:03.323 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:03.323 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.323 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.323 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.323 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:36:03.323 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:03.323 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:03.323 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:03.323 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:03.323 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:03.323 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:03.323 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:03.323 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:03.323 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:03.323 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:03.323 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 00:36:03.323 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:36:03.323 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:36:03.323 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:03.323 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:03.323 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:03.323 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:03.323 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:36:03.323 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.323 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.323 request: 00:36:03.323 { 00:36:03.323 "name": "nvme0", 00:36:03.323 "trtype": "tcp", 00:36:03.323 "traddr": "10.0.0.1", 00:36:03.323 "adrfam": "ipv4", 00:36:03.323 "trsvcid": "4420", 00:36:03.323 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:36:03.323 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:36:03.323 "prchk_reftag": false, 00:36:03.323 "prchk_guard": false, 00:36:03.323 "hdgst": false, 00:36:03.323 "ddgst": false, 00:36:03.323 "allow_unrecognized_csi": false, 00:36:03.323 "method": "bdev_nvme_attach_controller", 00:36:03.323 "req_id": 1 00:36:03.323 } 00:36:03.323 Got JSON-RPC error response 00:36:03.323 response: 00:36:03.323 { 00:36:03.323 "code": -5, 00:36:03.323 "message": "Input/output 
error" 00:36:03.323 } 00:36:03.323 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:03.323 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:36:03.323 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:03.323 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:03.323 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:03.323 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:36:03.323 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:36:03.323 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.323 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.582 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.582 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:36:03.582 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:36:03.582 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:03.582 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:03.582 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:03.582 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:03.582 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:03.582 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:03.582 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:36:03.582 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:03.582 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:03.582 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:03.582 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:36:03.583 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:36:03.583 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:36:03.583 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:03.583 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:03.583 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:03.583 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:03.583 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:36:03.583 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.583 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.583 request: 00:36:03.583 { 00:36:03.583 "name": "nvme0", 00:36:03.583 "trtype": "tcp", 00:36:03.583 "traddr": "10.0.0.1", 
00:36:03.583 "adrfam": "ipv4", 00:36:03.583 "trsvcid": "4420", 00:36:03.583 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:36:03.583 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:36:03.583 "prchk_reftag": false, 00:36:03.583 "prchk_guard": false, 00:36:03.583 "hdgst": false, 00:36:03.583 "ddgst": false, 00:36:03.583 "dhchap_key": "key2", 00:36:03.583 "allow_unrecognized_csi": false, 00:36:03.583 "method": "bdev_nvme_attach_controller", 00:36:03.583 "req_id": 1 00:36:03.583 } 00:36:03.583 Got JSON-RPC error response 00:36:03.583 response: 00:36:03.583 { 00:36:03.583 "code": -5, 00:36:03.583 "message": "Input/output error" 00:36:03.583 } 00:36:03.583 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:03.583 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:36:03.583 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:03.583 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:03.583 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:03.583 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:36:03.583 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.583 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:36:03.583 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.583 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.583 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:36:03.583 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:36:03.583 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:03.583 02:56:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:03.583 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:03.583 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:03.583 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:03.583 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:03.583 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:03.583 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:03.583 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:03.583 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:03.583 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:03.583 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:36:03.583 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:03.583 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:03.583 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:03.583 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:03.583 02:56:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:03.583 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:03.583 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.583 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.842 request: 00:36:03.842 { 00:36:03.842 "name": "nvme0", 00:36:03.842 "trtype": "tcp", 00:36:03.842 "traddr": "10.0.0.1", 00:36:03.842 "adrfam": "ipv4", 00:36:03.842 "trsvcid": "4420", 00:36:03.842 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:36:03.842 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:36:03.842 "prchk_reftag": false, 00:36:03.842 "prchk_guard": false, 00:36:03.842 "hdgst": false, 00:36:03.842 "ddgst": false, 00:36:03.842 "dhchap_key": "key1", 00:36:03.842 "dhchap_ctrlr_key": "ckey2", 00:36:03.842 "allow_unrecognized_csi": false, 00:36:03.842 "method": "bdev_nvme_attach_controller", 00:36:03.842 "req_id": 1 00:36:03.842 } 00:36:03.842 Got JSON-RPC error response 00:36:03.842 response: 00:36:03.842 { 00:36:03.842 "code": -5, 00:36:03.842 "message": "Input/output error" 00:36:03.842 } 00:36:03.842 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:03.842 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:36:03.842 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:03.842 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:03.842 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:03.842 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@128 -- # get_main_ns_ip 00:36:03.842 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:03.842 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:03.842 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:03.842 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:03.842 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:03.842 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:03.842 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:03.843 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:03.843 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:03.843 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:03.843 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:36:03.843 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.843 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.843 nvme0n1 00:36:03.843 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.843 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:36:03.843 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:03.843 02:56:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:03.843 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:03.843 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:03.843 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGRhMmUyZDJjMzE1YWJkMzVmYWRhMDMwNDM1NDM1NzLeN7uE: 00:36:03.843 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODIzZWQxNjM2M2E4ZGEwNTc3MGM4YjA4Yzc5Y2JiOGGpmDto: 00:36:03.843 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:03.843 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:03.843 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGRhMmUyZDJjMzE1YWJkMzVmYWRhMDMwNDM1NDM1NzLeN7uE: 00:36:03.843 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODIzZWQxNjM2M2E4ZGEwNTc3MGM4YjA4Yzc5Y2JiOGGpmDto: ]] 00:36:03.843 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODIzZWQxNjM2M2E4ZGEwNTc3MGM4YjA4Yzc5Y2JiOGGpmDto: 00:36:03.843 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:03.843 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.843 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.843 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.843 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:36:03.843 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:36:03.843 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.843 02:56:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.843 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.102 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:04.102 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:04.102 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:36:04.102 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:04.102 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:04.102 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:04.102 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:04.102 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:04.102 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:04.102 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.102 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.102 request: 00:36:04.102 { 00:36:04.102 "name": "nvme0", 00:36:04.102 "dhchap_key": "key1", 00:36:04.102 "dhchap_ctrlr_key": "ckey2", 00:36:04.102 "method": "bdev_nvme_set_keys", 00:36:04.102 "req_id": 1 00:36:04.102 } 00:36:04.102 Got JSON-RPC error response 00:36:04.102 response: 00:36:04.102 { 00:36:04.102 "code": -13, 00:36:04.102 "message": "Permission denied" 00:36:04.102 } 00:36:04.102 
02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:04.102 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:36:04.102 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:04.102 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:04.103 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:04.103 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:36:04.103 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.103 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:36:04.103 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.103 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.103 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:36:04.103 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:36:05.036 02:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:36:05.036 02:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:36:05.036 02:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.036 02:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.036 02:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.036 02:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:36:05.036 02:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:36:06.410 02:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:36:06.410 02:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.410 02:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:36:06.410 02:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.410 02:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.410 02:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:36:06.410 02:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:36:06.410 02:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:06.410 02:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:06.410 02:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:06.410 02:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:06.410 02:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTA4MTlkYmJlYjFlYzI4NDc3NTQ3MTE5YjBmZDRjYTgyOGU1YmNmY2Y4OWQ5OGE2qB6Wqw==: 00:36:06.410 02:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDBlZjE0NWRiM2RlODIxYzZmMzQ4NzY2ZjdmMTE3Mzc4YzEwYzQzZDUwY2RhZWQ15VD1Dw==: 00:36:06.410 02:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:06.410 02:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:06.410 02:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTA4MTlkYmJlYjFlYzI4NDc3NTQ3MTE5YjBmZDRjYTgyOGU1YmNmY2Y4OWQ5OGE2qB6Wqw==: 00:36:06.410 02:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDBlZjE0NWRiM2RlODIxYzZmMzQ4NzY2ZjdmMTE3Mzc4YzEwYzQzZDUwY2RhZWQ15VD1Dw==: ]] 00:36:06.410 02:56:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDBlZjE0NWRiM2RlODIxYzZmMzQ4NzY2ZjdmMTE3Mzc4YzEwYzQzZDUwY2RhZWQ15VD1Dw==: 00:36:06.410 02:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:36:06.410 02:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:06.410 02:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:06.410 02:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:06.410 02:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:06.410 02:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:06.410 02:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:06.410 02:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:06.410 02:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:06.410 02:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:06.410 02:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:06.410 02:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:36:06.410 02:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.410 02:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.410 nvme0n1 00:36:06.410 02:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.410 02:56:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:36:06.410 02:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:06.410 02:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:06.410 02:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:06.410 02:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:06.410 02:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGRhMmUyZDJjMzE1YWJkMzVmYWRhMDMwNDM1NDM1NzLeN7uE: 00:36:06.410 02:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODIzZWQxNjM2M2E4ZGEwNTc3MGM4YjA4Yzc5Y2JiOGGpmDto: 00:36:06.410 02:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:06.410 02:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:06.410 02:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGRhMmUyZDJjMzE1YWJkMzVmYWRhMDMwNDM1NDM1NzLeN7uE: 00:36:06.410 02:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODIzZWQxNjM2M2E4ZGEwNTc3MGM4YjA4Yzc5Y2JiOGGpmDto: ]] 00:36:06.410 02:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODIzZWQxNjM2M2E4ZGEwNTc3MGM4YjA4Yzc5Y2JiOGGpmDto: 00:36:06.410 02:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:36:06.410 02:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:36:06.410 02:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:36:06.410 02:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:06.410 
02:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:06.410 02:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:06.410 02:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:06.410 02:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:36:06.410 02:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.410 02:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.410 request: 00:36:06.411 { 00:36:06.411 "name": "nvme0", 00:36:06.411 "dhchap_key": "key2", 00:36:06.411 "dhchap_ctrlr_key": "ckey1", 00:36:06.411 "method": "bdev_nvme_set_keys", 00:36:06.411 "req_id": 1 00:36:06.411 } 00:36:06.411 Got JSON-RPC error response 00:36:06.411 response: 00:36:06.411 { 00:36:06.411 "code": -13, 00:36:06.411 "message": "Permission denied" 00:36:06.411 } 00:36:06.411 02:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:06.411 02:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:36:06.411 02:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:06.411 02:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:06.411 02:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:06.411 02:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:36:06.411 02:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:36:06.411 02:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.411 02:56:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.411 02:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.411 02:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:36:06.411 02:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:36:07.785 02:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:36:07.785 02:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:36:07.785 02:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.785 02:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.785 02:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.785 02:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:36:07.785 02:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:36:07.785 02:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:36:07.785 02:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:36:07.785 02:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:07.785 02:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:36:07.785 02:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:07.785 02:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:36:07.785 02:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:07.785 02:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:07.785 rmmod nvme_tcp 00:36:07.785 rmmod nvme_fabrics 00:36:07.785 02:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # 
modprobe -v -r nvme-fabrics 00:36:07.785 02:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:36:07.785 02:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:36:07.785 02:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 3119150 ']' 00:36:07.785 02:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 3119150 00:36:07.785 02:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 3119150 ']' 00:36:07.785 02:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 3119150 00:36:07.785 02:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:36:07.785 02:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:07.785 02:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3119150 00:36:07.785 02:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:07.785 02:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:07.785 02:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3119150' 00:36:07.785 killing process with pid 3119150 00:36:07.785 02:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 3119150 00:36:07.785 02:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 3119150 00:36:08.720 02:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:08.720 02:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:08.720 02:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:08.720 02:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 
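The `killprocess 3119150` sequence traced above first probes the pid with `kill -0`, checks the process name is not `sudo`, then kills and reaps the target. A minimal self-contained sketch of that pattern, assuming only bash builtins (`killprocess` here is a simplified stand-in, not the `autotest_common.sh` function verbatim):

```shell
# Hedged sketch (not autotest_common.sh verbatim): verify the pid is alive
# with `kill -0`, send SIGTERM, then reap it so the test does not leave an
# orphaned target process behind.
killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 1   # is the process still running?
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null                  # reap; ignore the TERM exit status
    return 0
}

sleep 30 &        # stand-in for the nvmf target process
killprocess $!
```

The `kill -0` probe sends no signal at all; it only asks the kernel whether the pid exists and is signalable, which is why the real helper uses it before deciding how to tear the target down.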
00:36:08.720 02:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:36:08.720 02:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:08.720 02:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:36:08.720 02:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:08.720 02:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:08.720 02:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:08.720 02:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:08.720 02:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:10.625 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:10.625 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:36:10.625 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:36:10.625 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:36:10.625 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:36:10.625 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:36:10.625 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:10.625 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:36:10.625 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:10.625 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:10.625 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:36:10.625 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:36:10.625 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:12.002 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:12.002 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:12.002 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:12.002 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:12.002 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:12.002 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:12.002 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:12.002 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:12.002 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:12.002 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:12.002 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:12.002 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:12.002 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:12.002 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:12.002 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:12.002 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:12.938 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:36:12.938 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.N97 /tmp/spdk.key-null.R7A /tmp/spdk.key-sha256.Zxs /tmp/spdk.key-sha384.daZ /tmp/spdk.key-sha512.Txy 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:36:12.938 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:14.312 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:36:14.312 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:36:14.312 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:36:14.312 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:36:14.312 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:36:14.312 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:36:14.312 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:36:14.312 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:36:14.312 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:36:14.312 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:36:14.312 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:36:14.312 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:36:14.312 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:36:14.312 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:36:14.312 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:36:14.312 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:36:14.312 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:36:14.312 00:36:14.312 real 0m55.821s 00:36:14.312 user 0m53.249s 00:36:14.312 sys 0m6.298s 00:36:14.312 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:14.312 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.312 ************************************ 00:36:14.312 END TEST nvmf_auth_host 00:36:14.312 ************************************ 00:36:14.312 02:56:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 
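The auth test above (host/auth.sh@137 and @148) waits for the controller to detach by polling `rpc_cmd bdev_nvme_get_controllers | jq length` once per second until it reports 0. A minimal sketch of that retry loop, with `get_controller_count` as a hypothetical stand-in for the real RPC call:

```shell
# Hedged sketch (not host/auth.sh verbatim) of the detach-wait loop traced
# above: poll a controller count once per tick until it reaches zero,
# giving up after a bounded number of retries.

remaining=2   # pretend two polls still see a live controller

# Hypothetical stand-in for: rpc_cmd bdev_nvme_get_controllers | jq length
get_controller_count() {
    echo "$remaining"
}

wait_for_detach() {
    local retries=$1
    while (( retries-- > 0 )); do
        [[ $(get_controller_count) -eq 0 ]] && return 0
        remaining=$(( remaining - 1 ))   # the real loop does `sleep 1s` here
    done
    return 1
}

wait_for_detach 5 && echo "controller detached"
```

Bounding the retries matters: with `--ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1` the controller is expected to drop within a couple of polls, and an unbounded loop would hang the whole nightly run if it never does.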
00:36:14.312 02:56:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:36:14.312 02:56:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:14.312 02:56:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:14.313 02:56:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.313 ************************************ 00:36:14.313 START TEST nvmf_digest 00:36:14.313 ************************************ 00:36:14.313 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:36:14.571 * Looking for test storage... 00:36:14.571 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:14.571 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:14.571 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:36:14.571 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:14.571 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:14.571 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:14.571 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:14.571 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:14.571 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:36:14.571 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:36:14.571 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:36:14.571 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- 
scripts/common.sh@337 -- # read -ra ver2 00:36:14.571 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:36:14.571 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:36:14.571 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:36:14.571 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:14.571 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:36:14.571 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:36:14.571 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:14.571 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:14.571 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:36:14.571 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:36:14.571 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:14.571 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:36:14.571 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:36:14.571 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:36:14.571 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:36:14.571 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:14.571 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:36:14.571 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:36:14.571 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:14.571 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 
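The `lt 1.15 2` trace above (scripts/common.sh `cmp_versions`) splits both dotted versions into arrays and compares them field by field, treating a missing field as 0. A self-contained sketch of that comparison, assuming plain numeric components (`version_lt` is an illustrative name, not the script's own):

```shell
# Hedged sketch of the version comparison traced above: split dotted
# versions into arrays on '.', then compare component by component,
# padding the shorter version with zeros.
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}   # missing components count as 0
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1   # equal versions are not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"
version_lt 2.39.2 2.39 || echo "2.39.2 >= 2.39"
```

This is why the lcov gate in the log takes the `return 0` branch: component-wise, 1 < 2 decides the comparison before the second field is ever consulted.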
00:36:14.571 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:36:14.571 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:14.571 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:14.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:14.571 --rc genhtml_branch_coverage=1 00:36:14.571 --rc genhtml_function_coverage=1 00:36:14.571 --rc genhtml_legend=1 00:36:14.571 --rc geninfo_all_blocks=1 00:36:14.571 --rc geninfo_unexecuted_blocks=1 00:36:14.571 00:36:14.571 ' 00:36:14.571 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:14.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:14.571 --rc genhtml_branch_coverage=1 00:36:14.571 --rc genhtml_function_coverage=1 00:36:14.571 --rc genhtml_legend=1 00:36:14.571 --rc geninfo_all_blocks=1 00:36:14.571 --rc geninfo_unexecuted_blocks=1 00:36:14.571 00:36:14.571 ' 00:36:14.571 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:14.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:14.571 --rc genhtml_branch_coverage=1 00:36:14.571 --rc genhtml_function_coverage=1 00:36:14.571 --rc genhtml_legend=1 00:36:14.571 --rc geninfo_all_blocks=1 00:36:14.571 --rc geninfo_unexecuted_blocks=1 00:36:14.571 00:36:14.571 ' 00:36:14.571 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:14.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:14.571 --rc genhtml_branch_coverage=1 00:36:14.571 --rc genhtml_function_coverage=1 00:36:14.571 --rc genhtml_legend=1 00:36:14.571 --rc geninfo_all_blocks=1 00:36:14.571 --rc geninfo_unexecuted_blocks=1 00:36:14.571 00:36:14.571 ' 00:36:14.571 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:14.571 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:36:14.571 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:14.571 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:14.571 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:14.571 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:14.571 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:14.571 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:14.571 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:14.571 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:14.571 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:14.571 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:14.571 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:14.571 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:14.571 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:14.571 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:14.571 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:14.571 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:14.571 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:14.571 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:36:14.571 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:14.571 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:14.571 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:14.571 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:14.571 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:14.571 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:14.571 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:36:14.571 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:14.571 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:36:14.571 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:14.571 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:14.571 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:14.571 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:14.571 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:36:14.571 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:14.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:14.571 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:14.571 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:14.571 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:14.571 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:36:14.571 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:36:14.571 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:36:14.571 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:36:14.572 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:36:14.572 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:14.572 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:14.572 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:14.572 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:14.572 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:14.572 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:14.572 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:14.572 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:14.572 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:14.572 02:56:22 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:14.572 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:36:14.572 02:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:36:17.100 02:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:17.100 02:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:36:17.100 02:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:17.100 02:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:17.100 02:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:17.100 02:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:17.100 02:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:17.100 02:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:36:17.100 02:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:17.100 02:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:36:17.100 02:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:36:17.100 02:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:36:17.100 02:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:36:17.100 02:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:36:17.100 02:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:36:17.100 02:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:17.101 02:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:17.101 02:56:24 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:17.101 02:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:17.101 02:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:17.101 02:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:17.101 02:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:17.101 02:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:17.101 02:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:17.101 02:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:17.101 02:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:17.101 02:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:17.101 02:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:17.101 02:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:17.101 02:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:17.101 02:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:17.101 02:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:17.101 02:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:17.101 02:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:17.101 02:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- 
# echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:17.101 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:17.101 02:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:17.101 02:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:17.101 02:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:17.101 02:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:17.101 02:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:17.101 02:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:17.101 02:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:17.101 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:17.101 02:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:17.101 02:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:17.101 02:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:17.101 02:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:17.101 02:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:17.101 02:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:17.101 02:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:17.101 02:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:17.101 02:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:17.101 02:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:17.101 02:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:17.101 02:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:17.101 02:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:17.101 02:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:17.101 02:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:17.101 02:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:17.101 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:17.101 02:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:17.101 02:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:17.101 02:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:17.101 02:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:17.101 02:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:17.101 02:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:17.101 02:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:17.101 02:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:17.101 02:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:17.101 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:17.101 02:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:17.101 02:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:17.101 02:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@442 -- # is_hw=yes 00:36:17.101 02:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:17.101 02:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:17.101 02:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:17.101 02:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:17.101 02:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:17.101 02:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:17.101 02:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:17.101 02:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:17.101 02:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:17.101 02:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:17.101 02:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:17.101 02:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:17.101 02:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:17.101 02:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:17.101 02:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:17.101 02:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:17.101 02:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:17.101 02:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
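The records above set up the test bed: one port (cvl_0_0) moves into the cvl_0_0_ns_spdk namespace as the target side, the other (cvl_0_1) stays in the root namespace as the initiator, they get 10.0.0.2/24 and 10.0.0.1/24 respectively, and TCP port 4420 is opened. A stand-in sketch of the same plumbing using a veth pair instead of two physical NICs (all names here — demo_ns, veth_ini, veth_tgt — are illustrative, and the commands need root):

```shell
# Veth-based stand-in for the two-port namespace plumbing above.
# Needs root and CAP_NET_ADMIN; bail out quietly if unavailable.
ip netns add demo_ns 2>/dev/null || { echo 'netns unavailable, skipping' >&2; exit 0; }
set -e
ip link add veth_ini type veth peer name veth_tgt   # initiator/target pair
ip link set veth_tgt netns demo_ns                  # target side into the namespace
ip addr add 10.0.0.1/24 dev veth_ini                # initiator IP, as in the log
ip netns exec demo_ns ip addr add 10.0.0.2/24 dev veth_tgt   # target IP
ip link set veth_ini up
ip netns exec demo_ns ip link set veth_tgt up
ip netns exec demo_ns ip link set lo up
# Open the NVMe/TCP listener port, tagged the way the log's ipts helper does.
iptables -I INPUT 1 -i veth_ini -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'demo: allow NVMe/TCP 4420' 2>/dev/null || true
```

With both ends up, `ping -c 1 10.0.0.2` from the root namespace and `ip netns exec demo_ns ping -c 1 10.0.0.1` reproduce the reachability checks that follow in the log; `ip netns del demo_ns` tears the whole thing down.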
00:36:17.101 02:56:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:17.101 02:56:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:17.101 02:56:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:17.101 02:56:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:17.101 02:56:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:17.101 02:56:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:17.101 02:56:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:17.101 02:56:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:17.101 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:17.101 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.328 ms 00:36:17.101 00:36:17.101 --- 10.0.0.2 ping statistics --- 00:36:17.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:17.101 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:36:17.101 02:56:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:17.101 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:17.101 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:36:17.101 00:36:17.101 --- 10.0.0.1 ping statistics --- 00:36:17.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:17.101 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:36:17.101 02:56:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:17.101 02:56:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:36:17.101 02:56:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:17.101 02:56:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:17.101 02:56:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:17.101 02:56:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:17.101 02:56:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:17.101 02:56:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:17.102 02:56:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:17.102 02:56:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:36:17.102 02:56:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:36:17.102 02:56:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:36:17.102 02:56:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:17.102 02:56:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:17.102 02:56:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:36:17.102 ************************************ 00:36:17.102 START TEST nvmf_digest_clean 00:36:17.102 ************************************ 00:36:17.102 
02:56:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:36:17.102 02:56:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:36:17.102 02:56:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:36:17.102 02:56:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:36:17.102 02:56:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:36:17.102 02:56:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:36:17.102 02:56:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:17.102 02:56:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:17.102 02:56:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:17.102 02:56:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=3129278 00:36:17.102 02:56:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:36:17.102 02:56:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 3129278 00:36:17.102 02:56:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3129278 ']' 00:36:17.102 02:56:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:17.102 02:56:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:17.102 02:56:25 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:17.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:17.102 02:56:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:17.102 02:56:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:17.102 [2024-11-17 02:56:25.367823] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:36:17.102 [2024-11-17 02:56:25.367978] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:17.102 [2024-11-17 02:56:25.538790] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:17.360 [2024-11-17 02:56:25.677508] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:17.360 [2024-11-17 02:56:25.677600] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:17.360 [2024-11-17 02:56:25.677625] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:17.360 [2024-11-17 02:56:25.677650] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:17.360 [2024-11-17 02:56:25.677670] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:17.360 [2024-11-17 02:56:25.679299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:17.926 02:56:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:17.926 02:56:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:36:17.926 02:56:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:17.926 02:56:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:17.926 02:56:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:17.926 02:56:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:17.926 02:56:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:36:17.926 02:56:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:36:17.926 02:56:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:36:17.926 02:56:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.926 02:56:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:18.493 null0 00:36:18.493 [2024-11-17 02:56:26.740706] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:18.493 [2024-11-17 02:56:26.765054] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:18.493 02:56:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.493 02:56:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
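Both the nvmf_tgt and bdevperf launches in this section print "Waiting for process to start up and listen on UNIX domain socket ..." via the waitforlisten helper. The underlying idiom is a bounded poll on the socket path; a sketch of that pattern (`wait_for_sock` and its parameters are illustrative names, not the actual SPDK helper):

```shell
# Bounded poll for a UNIX socket path, echoing the waitforlisten
# idiom in the log: retry up to max_retries times, 0.1 s apart.
wait_for_sock() {
  _path=$1
  _retries=${2:-100}
  _i=0
  while [ "$_i" -lt "$_retries" ]; do
    # The real helper also probes the RPC server once the socket
    # exists; existence alone is enough for this sketch.
    [ -e "$_path" ] && return 0
    _i=$((_i + 1))
    sleep 0.1
  done
  echo "timed out waiting for $_path" >&2
  return 1
}
```

Polling with a retry cap (the log's `max_retries=100`) rather than blocking forever is what lets the harness fail fast and kill the process if the target never comes up.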
00:36:18.493 02:56:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:36:18.493 02:56:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:36:18.493 02:56:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:36:18.493 02:56:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:36:18.493 02:56:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:36:18.493 02:56:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:36:18.493 02:56:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3129431 00:36:18.493 02:56:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:36:18.493 02:56:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3129431 /var/tmp/bperf.sock 00:36:18.493 02:56:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3129431 ']' 00:36:18.493 02:56:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:18.493 02:56:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:18.493 02:56:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:18.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:36:18.493 02:56:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:18.493 02:56:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:18.493 [2024-11-17 02:56:26.861369] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:36:18.493 [2024-11-17 02:56:26.861522] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3129431 ] 00:36:18.751 [2024-11-17 02:56:27.018207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:18.751 [2024-11-17 02:56:27.156212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:19.684 02:56:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:19.684 02:56:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:36:19.684 02:56:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:36:19.684 02:56:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:36:19.684 02:56:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:20.250 02:56:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:20.250 02:56:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:20.508 nvme0n1 00:36:20.508 02:56:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:36:20.508 02:56:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:20.508 Running I/O for 2 seconds... 00:36:22.818 13354.00 IOPS, 52.16 MiB/s [2024-11-17T01:56:31.278Z] 13501.50 IOPS, 52.74 MiB/s 00:36:22.818 Latency(us) 00:36:22.818 [2024-11-17T01:56:31.278Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:22.818 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:36:22.818 nvme0n1 : 2.01 13524.54 52.83 0.00 0.00 9449.67 4781.70 19709.35 00:36:22.818 [2024-11-17T01:56:31.278Z] =================================================================================================================== 00:36:22.818 [2024-11-17T01:56:31.278Z] Total : 13524.54 52.83 0.00 0.00 9449.67 4781.70 19709.35 00:36:22.818 { 00:36:22.818 "results": [ 00:36:22.818 { 00:36:22.818 "job": "nvme0n1", 00:36:22.818 "core_mask": "0x2", 00:36:22.818 "workload": "randread", 00:36:22.818 "status": "finished", 00:36:22.818 "queue_depth": 128, 00:36:22.818 "io_size": 4096, 00:36:22.818 "runtime": 2.011899, 00:36:22.818 "iops": 13524.535774410147, 00:36:22.818 "mibps": 52.83021786878964, 00:36:22.818 "io_failed": 0, 00:36:22.818 "io_timeout": 0, 00:36:22.818 "avg_latency_us": 9449.667748512937, 00:36:22.818 "min_latency_us": 4781.700740740741, 00:36:22.818 "max_latency_us": 19709.345185185186 00:36:22.818 } 00:36:22.818 ], 00:36:22.818 "core_count": 1 00:36:22.818 } 00:36:22.818 02:56:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:36:22.818 02:56:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # 
get_accel_stats 00:36:22.818 02:56:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:36:22.818 02:56:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:36:22.818 02:56:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:36:22.818 | select(.opcode=="crc32c") 00:36:22.818 | "\(.module_name) \(.executed)"' 00:36:22.818 02:56:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:36:22.818 02:56:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:36:22.818 02:56:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:36:22.818 02:56:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:36:22.818 02:56:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3129431 00:36:22.818 02:56:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3129431 ']' 00:36:22.818 02:56:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3129431 00:36:22.818 02:56:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:36:22.818 02:56:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:22.818 02:56:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3129431 00:36:22.818 02:56:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:22.818 02:56:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:22.818 02:56:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3129431' 00:36:22.818 killing process with pid 3129431 00:36:22.818 02:56:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3129431 00:36:22.818 Received shutdown signal, test time was about 2.000000 seconds 00:36:22.818 00:36:22.818 Latency(us) 00:36:22.818 [2024-11-17T01:56:31.278Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:22.818 [2024-11-17T01:56:31.278Z] =================================================================================================================== 00:36:22.818 [2024-11-17T01:56:31.278Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:22.818 02:56:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3129431 00:36:23.755 02:56:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:36:23.755 02:56:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:36:23.756 02:56:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:36:23.756 02:56:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:36:23.756 02:56:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:36:23.756 02:56:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:36:23.756 02:56:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:36:23.756 02:56:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3130102 00:36:23.756 02:56:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:36:23.756 02:56:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3130102 /var/tmp/bperf.sock 00:36:23.756 02:56:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3130102 ']' 00:36:23.756 02:56:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:23.756 02:56:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:23.756 02:56:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:23.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:23.756 02:56:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:23.756 02:56:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:24.014 [2024-11-17 02:56:32.251287] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:36:24.014 [2024-11-17 02:56:32.251416] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3130102 ] 00:36:24.014 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:24.014 Zero copy mechanism will not be used. 
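The MiB/s column in bdevperf's summaries is derivable from the JSON fields next to it: MiB/s = IOPS × io_size / 2²⁰. Checking the 4 KiB randread run reported earlier in this section (13524.54 IOPS at 4096-byte IOs):

```shell
# Cross-check bdevperf's reported throughput for the 4 KiB run above:
# 13524.54 IOPS * 4096 B per IO, converted to MiB/s.
mibps=$(awk 'BEGIN { printf "%.2f", 13524.54 * 4096 / (1024 * 1024) }')
echo "$mibps MiB/s"   # matches the reported 52.83
```

The same arithmetic applies to the 128 KiB run that follows, where far fewer IOPS at a 32×-larger IO size yield a much higher MiB/s figure.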
00:36:24.014 [2024-11-17 02:56:32.392177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:24.273 [2024-11-17 02:56:32.532621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:24.839 02:56:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:24.839 02:56:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:36:24.839 02:56:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:36:24.839 02:56:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:36:24.839 02:56:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:25.774 02:56:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:25.774 02:56:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:26.032 nvme0n1 00:36:26.032 02:56:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:36:26.032 02:56:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:26.032 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:26.032 Zero copy mechanism will not be used. 00:36:26.032 Running I/O for 2 seconds... 
00:36:28.343 3530.00 IOPS, 441.25 MiB/s [2024-11-17T01:56:36.803Z] 3556.00 IOPS, 444.50 MiB/s 00:36:28.343 Latency(us) 00:36:28.343 [2024-11-17T01:56:36.803Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:28.343 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:36:28.343 nvme0n1 : 2.00 3557.15 444.64 0.00 0.00 4491.28 1784.04 6505.05 00:36:28.343 [2024-11-17T01:56:36.803Z] =================================================================================================================== 00:36:28.343 [2024-11-17T01:56:36.803Z] Total : 3557.15 444.64 0.00 0.00 4491.28 1784.04 6505.05 00:36:28.343 { 00:36:28.343 "results": [ 00:36:28.343 { 00:36:28.343 "job": "nvme0n1", 00:36:28.343 "core_mask": "0x2", 00:36:28.343 "workload": "randread", 00:36:28.343 "status": "finished", 00:36:28.343 "queue_depth": 16, 00:36:28.343 "io_size": 131072, 00:36:28.343 "runtime": 2.003849, 00:36:28.343 "iops": 3557.1542566331095, 00:36:28.343 "mibps": 444.6442820791387, 00:36:28.343 "io_failed": 0, 00:36:28.343 "io_timeout": 0, 00:36:28.343 "avg_latency_us": 4491.276684540881, 00:36:28.343 "min_latency_us": 1784.0355555555554, 00:36:28.343 "max_latency_us": 6505.054814814815 00:36:28.343 } 00:36:28.343 ], 00:36:28.343 "core_count": 1 00:36:28.343 } 00:36:28.343 02:56:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:36:28.343 02:56:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:36:28.343 02:56:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:36:28.343 02:56:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:36:28.343 | select(.opcode=="crc32c") 00:36:28.343 | "\(.module_name) \(.executed)"' 00:36:28.343 02:56:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:36:28.343 02:56:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:36:28.343 02:56:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:36:28.343 02:56:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:36:28.343 02:56:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:36:28.343 02:56:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3130102 00:36:28.343 02:56:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3130102 ']' 00:36:28.343 02:56:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3130102 00:36:28.343 02:56:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:36:28.343 02:56:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:28.343 02:56:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3130102 00:36:28.343 02:56:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:28.343 02:56:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:28.343 02:56:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3130102' 00:36:28.343 killing process with pid 3130102 00:36:28.343 02:56:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3130102 00:36:28.343 Received shutdown signal, test time was about 2.000000 seconds 
00:36:28.343 00:36:28.343 Latency(us) 00:36:28.343 [2024-11-17T01:56:36.803Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:28.343 [2024-11-17T01:56:36.803Z] =================================================================================================================== 00:36:28.343 [2024-11-17T01:56:36.803Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:28.343 02:56:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3130102 00:36:29.278 02:56:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:36:29.278 02:56:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:36:29.278 02:56:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:36:29.278 02:56:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:36:29.278 02:56:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:36:29.278 02:56:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:36:29.278 02:56:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:36:29.278 02:56:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3130763 00:36:29.278 02:56:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3130763 /var/tmp/bperf.sock 00:36:29.278 02:56:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3130763 ']' 00:36:29.278 02:56:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:36:29.278 02:56:37 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:29.278 02:56:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:29.278 02:56:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:29.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:29.278 02:56:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:29.278 02:56:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:29.536 [2024-11-17 02:56:37.780925] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:36:29.536 [2024-11-17 02:56:37.781077] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3130763 ] 00:36:29.536 [2024-11-17 02:56:37.939843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:29.795 [2024-11-17 02:56:38.079778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:30.360 02:56:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:30.360 02:56:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:36:30.360 02:56:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:36:30.360 02:56:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:36:30.360 02:56:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:30.926 02:56:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:30.926 02:56:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:31.492 nvme0n1 00:36:31.492 02:56:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:36:31.492 02:56:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:31.492 Running I/O for 2 seconds... 
00:36:33.885 15574.00 IOPS, 60.84 MiB/s [2024-11-17T01:56:42.345Z] 15210.50 IOPS, 59.42 MiB/s 00:36:33.885 Latency(us) 00:36:33.885 [2024-11-17T01:56:42.345Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:33.885 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:33.885 nvme0n1 : 2.01 15205.61 59.40 0.00 0.00 8394.46 3543.80 17087.91 00:36:33.885 [2024-11-17T01:56:42.345Z] =================================================================================================================== 00:36:33.885 [2024-11-17T01:56:42.345Z] Total : 15205.61 59.40 0.00 0.00 8394.46 3543.80 17087.91 00:36:33.885 { 00:36:33.885 "results": [ 00:36:33.885 { 00:36:33.885 "job": "nvme0n1", 00:36:33.885 "core_mask": "0x2", 00:36:33.885 "workload": "randwrite", 00:36:33.885 "status": "finished", 00:36:33.885 "queue_depth": 128, 00:36:33.885 "io_size": 4096, 00:36:33.885 "runtime": 2.008535, 00:36:33.885 "iops": 15205.610059072906, 00:36:33.885 "mibps": 59.39691429325354, 00:36:33.885 "io_failed": 0, 00:36:33.885 "io_timeout": 0, 00:36:33.885 "avg_latency_us": 8394.463273195595, 00:36:33.885 "min_latency_us": 3543.7985185185184, 00:36:33.885 "max_latency_us": 17087.905185185184 00:36:33.885 } 00:36:33.885 ], 00:36:33.885 "core_count": 1 00:36:33.885 } 00:36:33.885 02:56:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:36:33.885 02:56:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:36:33.885 02:56:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:36:33.885 02:56:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:36:33.885 | select(.opcode=="crc32c") 00:36:33.885 | "\(.module_name) \(.executed)"' 00:36:33.885 02:56:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:36:33.885 02:56:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:36:33.886 02:56:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:36:33.886 02:56:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:36:33.886 02:56:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:36:33.886 02:56:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3130763 00:36:33.886 02:56:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3130763 ']' 00:36:33.886 02:56:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3130763 00:36:33.886 02:56:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:36:33.886 02:56:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:33.886 02:56:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3130763 00:36:33.886 02:56:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:33.886 02:56:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:33.886 02:56:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3130763' 00:36:33.886 killing process with pid 3130763 00:36:33.886 02:56:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3130763 00:36:33.886 Received shutdown signal, test time was about 2.000000 seconds 
00:36:33.886 00:36:33.886 Latency(us) 00:36:33.886 [2024-11-17T01:56:42.346Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:33.886 [2024-11-17T01:56:42.346Z] =================================================================================================================== 00:36:33.886 [2024-11-17T01:56:42.346Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:33.886 02:56:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3130763 00:36:34.822 02:56:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:36:34.822 02:56:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:36:34.822 02:56:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:36:34.822 02:56:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:36:34.822 02:56:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:36:34.822 02:56:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:36:34.822 02:56:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:36:34.822 02:56:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3131370 00:36:34.822 02:56:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3131370 /var/tmp/bperf.sock 00:36:34.822 02:56:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:36:34.822 02:56:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3131370 ']' 00:36:34.822 02:56:43 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:34.822 02:56:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:34.822 02:56:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:34.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:34.822 02:56:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:34.822 02:56:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:34.822 [2024-11-17 02:56:43.141287] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:36:34.822 [2024-11-17 02:56:43.141442] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3131370 ] 00:36:34.822 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:34.822 Zero copy mechanism will not be used. 
00:36:35.080 [2024-11-17 02:56:43.284791] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:35.080 [2024-11-17 02:56:43.420681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:36.014 02:56:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:36.014 02:56:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:36:36.014 02:56:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:36:36.014 02:56:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:36:36.014 02:56:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:36.580 02:56:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:36.580 02:56:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:36.838 nvme0n1 00:36:36.838 02:56:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:36:36.838 02:56:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:36.838 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:36.838 Zero copy mechanism will not be used. 00:36:36.838 Running I/O for 2 seconds... 
00:36:39.147 4367.00 IOPS, 545.88 MiB/s [2024-11-17T01:56:47.607Z] 4384.00 IOPS, 548.00 MiB/s 00:36:39.147 Latency(us) 00:36:39.147 [2024-11-17T01:56:47.607Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:39.147 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:36:39.147 nvme0n1 : 2.01 4382.79 547.85 0.00 0.00 3639.26 2536.49 13883.92 00:36:39.147 [2024-11-17T01:56:47.607Z] =================================================================================================================== 00:36:39.147 [2024-11-17T01:56:47.607Z] Total : 4382.79 547.85 0.00 0.00 3639.26 2536.49 13883.92 00:36:39.147 { 00:36:39.147 "results": [ 00:36:39.147 { 00:36:39.147 "job": "nvme0n1", 00:36:39.147 "core_mask": "0x2", 00:36:39.147 "workload": "randwrite", 00:36:39.147 "status": "finished", 00:36:39.147 "queue_depth": 16, 00:36:39.147 "io_size": 131072, 00:36:39.147 "runtime": 2.005116, 00:36:39.147 "iops": 4382.788826182625, 00:36:39.147 "mibps": 547.8486032728281, 00:36:39.147 "io_failed": 0, 00:36:39.147 "io_timeout": 0, 00:36:39.147 "avg_latency_us": 3639.2648237495573, 00:36:39.147 "min_latency_us": 2536.485925925926, 00:36:39.147 "max_latency_us": 13883.922962962963 00:36:39.147 } 00:36:39.147 ], 00:36:39.147 "core_count": 1 00:36:39.147 } 00:36:39.147 02:56:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:36:39.147 02:56:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:36:39.147 02:56:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:36:39.147 02:56:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:36:39.147 | select(.opcode=="crc32c") 00:36:39.147 | "\(.module_name) \(.executed)"' 00:36:39.147 02:56:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:36:39.147 02:56:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:36:39.147 02:56:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:36:39.147 02:56:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:36:39.147 02:56:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:36:39.147 02:56:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3131370 00:36:39.147 02:56:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3131370 ']' 00:36:39.147 02:56:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3131370 00:36:39.147 02:56:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:36:39.147 02:56:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:39.147 02:56:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3131370 00:36:39.147 02:56:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:39.147 02:56:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:39.147 02:56:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3131370' 00:36:39.147 killing process with pid 3131370 00:36:39.147 02:56:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3131370 00:36:39.147 Received shutdown signal, test time was about 2.000000 seconds 
00:36:39.147 00:36:39.147 Latency(us) 00:36:39.147 [2024-11-17T01:56:47.607Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:39.147 [2024-11-17T01:56:47.607Z] =================================================================================================================== 00:36:39.147 [2024-11-17T01:56:47.607Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:39.147 02:56:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3131370 00:36:40.083 02:56:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3129278 00:36:40.083 02:56:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3129278 ']' 00:36:40.083 02:56:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3129278 00:36:40.083 02:56:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:36:40.083 02:56:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:40.083 02:56:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3129278 00:36:40.083 02:56:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:40.083 02:56:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:40.083 02:56:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3129278' 00:36:40.083 killing process with pid 3129278 00:36:40.083 02:56:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3129278 00:36:40.083 02:56:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3129278 00:36:41.457 00:36:41.458 
real 0m24.381s 00:36:41.458 user 0m47.263s 00:36:41.458 sys 0m4.745s 00:36:41.458 02:56:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:41.458 02:56:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:41.458 ************************************ 00:36:41.458 END TEST nvmf_digest_clean 00:36:41.458 ************************************ 00:36:41.458 02:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:36:41.458 02:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:41.458 02:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:41.458 02:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:36:41.458 ************************************ 00:36:41.458 START TEST nvmf_digest_error 00:36:41.458 ************************************ 00:36:41.458 02:56:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:36:41.458 02:56:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:36:41.458 02:56:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:41.458 02:56:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:41.458 02:56:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:41.458 02:56:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=3132133 00:36:41.458 02:56:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:36:41.458 
02:56:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 3132133 00:36:41.458 02:56:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3132133 ']' 00:36:41.458 02:56:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:41.458 02:56:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:41.458 02:56:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:41.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:41.458 02:56:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:41.458 02:56:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:41.458 [2024-11-17 02:56:49.798473] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:36:41.458 [2024-11-17 02:56:49.798609] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:41.716 [2024-11-17 02:56:49.953197] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:41.716 [2024-11-17 02:56:50.096424] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:41.716 [2024-11-17 02:56:50.096510] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:36:41.716 [2024-11-17 02:56:50.096535] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:41.716 [2024-11-17 02:56:50.096561] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:41.716 [2024-11-17 02:56:50.096580] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:41.716 [2024-11-17 02:56:50.098217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:42.650 02:56:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:42.650 02:56:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:36:42.650 02:56:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:42.650 02:56:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:42.650 02:56:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:42.650 02:56:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:42.650 02:56:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:36:42.650 02:56:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:42.650 02:56:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:42.650 [2024-11-17 02:56:50.772771] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:36:42.650 02:56:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:42.650 02:56:50 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:36:42.650 02:56:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:36:42.650 02:56:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:42.650 02:56:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:42.908 null0 00:36:42.908 [2024-11-17 02:56:51.165025] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:42.908 [2024-11-17 02:56:51.189364] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:42.908 02:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:42.908 02:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:36:42.908 02:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:36:42.908 02:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:36:42.908 02:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:36:42.908 02:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:36:42.908 02:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3132405 00:36:42.909 02:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:36:42.909 02:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3132405 /var/tmp/bperf.sock 00:36:42.909 02:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3132405 ']' 
00:36:42.909 02:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:42.909 02:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:42.909 02:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:42.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:42.909 02:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:42.909 02:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:42.909 [2024-11-17 02:56:51.281151] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:36:42.909 [2024-11-17 02:56:51.281290] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3132405 ] 00:36:43.167 [2024-11-17 02:56:51.425561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:43.167 [2024-11-17 02:56:51.561520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:44.102 02:56:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:44.102 02:56:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:36:44.102 02:56:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:44.102 02:56:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:44.102 02:56:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:36:44.102 02:56:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.102 02:56:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:44.102 02:56:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.102 02:56:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:44.102 02:56:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:44.667 nvme0n1 00:36:44.667 02:56:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:36:44.667 02:56:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.667 02:56:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:44.667 02:56:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.667 02:56:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:36:44.667 02:56:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:44.926 Running I/O for 2 seconds... 00:36:44.926 [2024-11-17 02:56:53.159220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:44.926 [2024-11-17 02:56:53.159296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:20815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.926 [2024-11-17 02:56:53.159327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:44.926 [2024-11-17 02:56:53.179917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:44.926 [2024-11-17 02:56:53.179968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:10740 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.926 [2024-11-17 02:56:53.179999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:44.926 [2024-11-17 02:56:53.197036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:44.926 [2024-11-17 02:56:53.197086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:2065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.926 [2024-11-17 02:56:53.197126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:44.926 [2024-11-17 02:56:53.215140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:44.926 [2024-11-17 02:56:53.215182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 
nsid:1 lba:11876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.926 [2024-11-17 02:56:53.215222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:44.926 [2024-11-17 02:56:53.233617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:44.926 [2024-11-17 02:56:53.233673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:1726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.926 [2024-11-17 02:56:53.233699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:44.926 [2024-11-17 02:56:53.251060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:44.926 [2024-11-17 02:56:53.251124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:25512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.926 [2024-11-17 02:56:53.251166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:44.926 [2024-11-17 02:56:53.268649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:44.926 [2024-11-17 02:56:53.268699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.926 [2024-11-17 02:56:53.268729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:44.926 [2024-11-17 02:56:53.290863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:44.926 [2024-11-17 
02:56:53.290912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:24548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.926 [2024-11-17 02:56:53.290942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:44.926 [2024-11-17 02:56:53.309858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:44.926 [2024-11-17 02:56:53.309907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:13726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.926 [2024-11-17 02:56:53.309937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:44.926 [2024-11-17 02:56:53.332980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:44.926 [2024-11-17 02:56:53.333029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:8012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.926 [2024-11-17 02:56:53.333077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:44.926 [2024-11-17 02:56:53.354290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:44.926 [2024-11-17 02:56:53.354338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:19541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.926 [2024-11-17 02:56:53.354368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:44.926 [2024-11-17 02:56:53.370187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:44.926 [2024-11-17 02:56:53.370234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:3992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.926 [2024-11-17 02:56:53.370272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.185 [2024-11-17 02:56:53.389938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:45.185 [2024-11-17 02:56:53.389986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:17122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.185 [2024-11-17 02:56:53.390017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.185 [2024-11-17 02:56:53.409384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:45.185 [2024-11-17 02:56:53.409432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:15987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.185 [2024-11-17 02:56:53.409462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.185 [2024-11-17 02:56:53.426163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:45.185 [2024-11-17 02:56:53.426222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:14309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.185 [2024-11-17 02:56:53.426252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.185 
[2024-11-17 02:56:53.446799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:45.185 [2024-11-17 02:56:53.446847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:13035 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.185 [2024-11-17 02:56:53.446878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.185 [2024-11-17 02:56:53.465461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:45.185 [2024-11-17 02:56:53.465509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17718 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.185 [2024-11-17 02:56:53.465538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.185 [2024-11-17 02:56:53.481340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:45.185 [2024-11-17 02:56:53.481399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.185 [2024-11-17 02:56:53.481428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.185 [2024-11-17 02:56:53.500117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:45.185 [2024-11-17 02:56:53.500164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.185 [2024-11-17 02:56:53.500193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.185 [2024-11-17 02:56:53.521904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:45.185 [2024-11-17 02:56:53.521952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:19833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.185 [2024-11-17 02:56:53.521982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.185 [2024-11-17 02:56:53.542667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:45.185 [2024-11-17 02:56:53.542715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:23278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.185 [2024-11-17 02:56:53.542744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.185 [2024-11-17 02:56:53.563642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:45.185 [2024-11-17 02:56:53.563690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:19339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.185 [2024-11-17 02:56:53.563719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.185 [2024-11-17 02:56:53.582641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:45.185 [2024-11-17 02:56:53.582688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:6560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.185 [2024-11-17 
02:56:53.582717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.185 [2024-11-17 02:56:53.597898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:45.185 [2024-11-17 02:56:53.597945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:8974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.185 [2024-11-17 02:56:53.597976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.185 [2024-11-17 02:56:53.618290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:45.185 [2024-11-17 02:56:53.618339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:22547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.185 [2024-11-17 02:56:53.618368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.186 [2024-11-17 02:56:53.637125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:45.186 [2024-11-17 02:56:53.637173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:23698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.186 [2024-11-17 02:56:53.637202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.444 [2024-11-17 02:56:53.659944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:45.444 [2024-11-17 02:56:53.659994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 
lba:4953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.444 [2024-11-17 02:56:53.660025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.444 [2024-11-17 02:56:53.680802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:45.444 [2024-11-17 02:56:53.680851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.444 [2024-11-17 02:56:53.680882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.444 [2024-11-17 02:56:53.696511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:45.444 [2024-11-17 02:56:53.696559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:13405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.444 [2024-11-17 02:56:53.696599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.444 [2024-11-17 02:56:53.719694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:45.444 [2024-11-17 02:56:53.719743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.444 [2024-11-17 02:56:53.719772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.444 [2024-11-17 02:56:53.741088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:45.444 [2024-11-17 02:56:53.741147] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:6612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.444 [2024-11-17 02:56:53.741177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.444 [2024-11-17 02:56:53.762680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:45.444 [2024-11-17 02:56:53.762728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:4969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.444 [2024-11-17 02:56:53.762758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.444 [2024-11-17 02:56:53.779086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:45.444 [2024-11-17 02:56:53.779143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:1330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.444 [2024-11-17 02:56:53.779173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.444 [2024-11-17 02:56:53.796640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:45.444 [2024-11-17 02:56:53.796687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:24889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.444 [2024-11-17 02:56:53.796716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.444 [2024-11-17 02:56:53.814165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x6150001f2a00) 00:36:45.444 [2024-11-17 02:56:53.814212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:16017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.444 [2024-11-17 02:56:53.814242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.444 [2024-11-17 02:56:53.831175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:45.444 [2024-11-17 02:56:53.831222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:13494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.444 [2024-11-17 02:56:53.831251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.444 [2024-11-17 02:56:53.849339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:45.444 [2024-11-17 02:56:53.849386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:10154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.444 [2024-11-17 02:56:53.849415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.444 [2024-11-17 02:56:53.867327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:45.444 [2024-11-17 02:56:53.867375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:16656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.444 [2024-11-17 02:56:53.867405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.444 [2024-11-17 
02:56:53.887482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:45.444 [2024-11-17 02:56:53.887530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:14783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.444 [2024-11-17 02:56:53.887559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.444 [2024-11-17 02:56:53.901686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:45.444 [2024-11-17 02:56:53.901734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:8727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.444 [2024-11-17 02:56:53.901763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.703 [2024-11-17 02:56:53.921910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:45.703 [2024-11-17 02:56:53.921958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:16013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.703 [2024-11-17 02:56:53.921988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.703 [2024-11-17 02:56:53.942039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:45.703 [2024-11-17 02:56:53.942088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:1203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.703 [2024-11-17 02:56:53.942132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.703 [2024-11-17 02:56:53.958853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:45.703 [2024-11-17 02:56:53.958901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:19517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.703 [2024-11-17 02:56:53.958931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.703 [2024-11-17 02:56:53.976607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:45.703 [2024-11-17 02:56:53.976655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:19017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.703 [2024-11-17 02:56:53.976686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.703 [2024-11-17 02:56:53.996656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:45.703 [2024-11-17 02:56:53.996704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:8275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.703 [2024-11-17 02:56:53.996733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.703 [2024-11-17 02:56:54.014723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:45.703 [2024-11-17 02:56:54.014770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:17205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.703 [2024-11-17 02:56:54.014810] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.703 [2024-11-17 02:56:54.039214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:45.703 [2024-11-17 02:56:54.039262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.703 [2024-11-17 02:56:54.039292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.703 [2024-11-17 02:56:54.057687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:45.703 [2024-11-17 02:56:54.057734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:19510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.703 [2024-11-17 02:56:54.057763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.703 [2024-11-17 02:56:54.080477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:45.703 [2024-11-17 02:56:54.080524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:16674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.703 [2024-11-17 02:56:54.080554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.703 [2024-11-17 02:56:54.096543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:45.703 [2024-11-17 02:56:54.096590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6875 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.703 [2024-11-17 02:56:54.096620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.703 [2024-11-17 02:56:54.113688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:45.703 [2024-11-17 02:56:54.113736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:21018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.703 [2024-11-17 02:56:54.113766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.703 [2024-11-17 02:56:54.131537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:45.703 [2024-11-17 02:56:54.131585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:4346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.703 [2024-11-17 02:56:54.131616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.703 13321.00 IOPS, 52.04 MiB/s [2024-11-17T01:56:54.163Z] [2024-11-17 02:56:54.152664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:45.703 [2024-11-17 02:56:54.152729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:19666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.703 [2024-11-17 02:56:54.152759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.962 [2024-11-17 02:56:54.176018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 
00:36:45.962 [2024-11-17 02:56:54.176069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.962 [2024-11-17 02:56:54.176109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.962 [2024-11-17 02:56:54.191838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:45.962 [2024-11-17 02:56:54.191888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:16550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.962 [2024-11-17 02:56:54.191917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.962 [2024-11-17 02:56:54.214111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:45.962 [2024-11-17 02:56:54.214159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:21984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.962 [2024-11-17 02:56:54.214189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.962 [2024-11-17 02:56:54.233958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:45.962 [2024-11-17 02:56:54.234007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:12524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.962 [2024-11-17 02:56:54.234037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.962 [2024-11-17 02:56:54.253172] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:45.962 [2024-11-17 02:56:54.253219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:21134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.962 [2024-11-17 02:56:54.253249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.962 [2024-11-17 02:56:54.272738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:45.962 [2024-11-17 02:56:54.272787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:11490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.962 [2024-11-17 02:56:54.272817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.962 [2024-11-17 02:56:54.291768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:45.962 [2024-11-17 02:56:54.291817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:6756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.962 [2024-11-17 02:56:54.291846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.962 [2024-11-17 02:56:54.310846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:45.962 [2024-11-17 02:56:54.310893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:15013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.962 [2024-11-17 02:56:54.310923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.962 [2024-11-17 02:56:54.327520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:45.962 [2024-11-17 02:56:54.327568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:24851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.962 [2024-11-17 02:56:54.327597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.962 [2024-11-17 02:56:54.345759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:45.962 [2024-11-17 02:56:54.345808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:3356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.962 [2024-11-17 02:56:54.345846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.962 [2024-11-17 02:56:54.368732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:45.962 [2024-11-17 02:56:54.368781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.962 [2024-11-17 02:56:54.368811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.962 [2024-11-17 02:56:54.389777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:45.962 [2024-11-17 02:56:54.389825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.962 [2024-11-17 02:56:54.389854] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.962 [2024-11-17 02:56:54.407739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:45.963 [2024-11-17 02:56:54.407786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:21928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.963 [2024-11-17 02:56:54.407816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.221 [2024-11-17 02:56:54.424587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.221 [2024-11-17 02:56:54.424638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.221 [2024-11-17 02:56:54.424677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.221 [2024-11-17 02:56:54.442609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.221 [2024-11-17 02:56:54.442658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:12068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.221 [2024-11-17 02:56:54.442688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.221 [2024-11-17 02:56:54.460650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.221 [2024-11-17 02:56:54.460699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:1608 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:36:46.221 [2024-11-17 02:56:54.460729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.221 [2024-11-17 02:56:54.479910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.221 [2024-11-17 02:56:54.479959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:24707 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.221 [2024-11-17 02:56:54.479989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.221 [2024-11-17 02:56:54.500238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.221 [2024-11-17 02:56:54.500296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:8247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.221 [2024-11-17 02:56:54.500326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.221 [2024-11-17 02:56:54.515128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.221 [2024-11-17 02:56:54.515183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:25414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.221 [2024-11-17 02:56:54.515213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.221 [2024-11-17 02:56:54.537208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.221 [2024-11-17 02:56:54.537256] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.221 [2024-11-17 02:56:54.537285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.221 [2024-11-17 02:56:54.557686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.221 [2024-11-17 02:56:54.557734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:6405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.221 [2024-11-17 02:56:54.557765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.221 [2024-11-17 02:56:54.575590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.221 [2024-11-17 02:56:54.575638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.221 [2024-11-17 02:56:54.575667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.221 [2024-11-17 02:56:54.591624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.221 [2024-11-17 02:56:54.591672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:18133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.222 [2024-11-17 02:56:54.591702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.222 [2024-11-17 02:56:54.614409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150001f2a00) 00:36:46.222 [2024-11-17 02:56:54.614457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:4918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.222 [2024-11-17 02:56:54.614486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.222 [2024-11-17 02:56:54.630144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.222 [2024-11-17 02:56:54.630191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:17346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.222 [2024-11-17 02:56:54.630222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.222 [2024-11-17 02:56:54.651805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.222 [2024-11-17 02:56:54.651853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:18997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.222 [2024-11-17 02:56:54.651882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.222 [2024-11-17 02:56:54.675835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.222 [2024-11-17 02:56:54.675885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:16955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.222 [2024-11-17 02:56:54.675924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.480 [2024-11-17 02:56:54.696201] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.480 [2024-11-17 02:56:54.696254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:4193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.480 [2024-11-17 02:56:54.696284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.480 [2024-11-17 02:56:54.715896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.480 [2024-11-17 02:56:54.715947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:15488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.480 [2024-11-17 02:56:54.715978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.480 [2024-11-17 02:56:54.735389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.480 [2024-11-17 02:56:54.735439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:25373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.480 [2024-11-17 02:56:54.735470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.480 [2024-11-17 02:56:54.752299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.480 [2024-11-17 02:56:54.752357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:21524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.480 [2024-11-17 02:56:54.752385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.480 [2024-11-17 02:56:54.768946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.480 [2024-11-17 02:56:54.768990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:5098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.480 [2024-11-17 02:56:54.769017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.480 [2024-11-17 02:56:54.785456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.480 [2024-11-17 02:56:54.785500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:3120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.480 [2024-11-17 02:56:54.785527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.480 [2024-11-17 02:56:54.804981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.480 [2024-11-17 02:56:54.805026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:24144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.480 [2024-11-17 02:56:54.805053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.480 [2024-11-17 02:56:54.823037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.480 [2024-11-17 02:56:54.823082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:18219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.480 [2024-11-17 02:56:54.823117] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.481 [2024-11-17 02:56:54.844294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.481 [2024-11-17 02:56:54.844339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:4311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.481 [2024-11-17 02:56:54.844366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.481 [2024-11-17 02:56:54.860647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.481 [2024-11-17 02:56:54.860702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:24310 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.481 [2024-11-17 02:56:54.860728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.481 [2024-11-17 02:56:54.875648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.481 [2024-11-17 02:56:54.875705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.481 [2024-11-17 02:56:54.875731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.481 [2024-11-17 02:56:54.893370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.481 [2024-11-17 02:56:54.893426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:18557 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:36:46.481 [2024-11-17 02:56:54.893450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.481 [2024-11-17 02:56:54.911865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.481 [2024-11-17 02:56:54.911920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:8674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.481 [2024-11-17 02:56:54.911946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.481 [2024-11-17 02:56:54.928581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.481 [2024-11-17 02:56:54.928635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.481 [2024-11-17 02:56:54.928662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.739 [2024-11-17 02:56:54.943853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.739 [2024-11-17 02:56:54.943909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:15047 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.739 [2024-11-17 02:56:54.943936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.739 [2024-11-17 02:56:54.962431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.739 [2024-11-17 02:56:54.962475] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.739 [2024-11-17 02:56:54.962502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.739 [2024-11-17 02:56:54.979192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.739 [2024-11-17 02:56:54.979250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:18287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.739 [2024-11-17 02:56:54.979289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.739 [2024-11-17 02:56:54.998487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.739 [2024-11-17 02:56:54.998544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19010 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.739 [2024-11-17 02:56:54.998570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.739 [2024-11-17 02:56:55.016671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.739 [2024-11-17 02:56:55.016727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:11352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.739 [2024-11-17 02:56:55.016754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.739 [2024-11-17 02:56:55.032945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150001f2a00) 00:36:46.739 [2024-11-17 02:56:55.032989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:22699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.739 [2024-11-17 02:56:55.033016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.739 [2024-11-17 02:56:55.049656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.740 [2024-11-17 02:56:55.049700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:6553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.740 [2024-11-17 02:56:55.049726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.740 [2024-11-17 02:56:55.064632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.740 [2024-11-17 02:56:55.064687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:3327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.740 [2024-11-17 02:56:55.064713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.740 [2024-11-17 02:56:55.084632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.740 [2024-11-17 02:56:55.084677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:20691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.740 [2024-11-17 02:56:55.084704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.740 [2024-11-17 02:56:55.101755] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.740 [2024-11-17 02:56:55.101811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.740 [2024-11-17 02:56:55.101837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.740 [2024-11-17 02:56:55.118471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.740 [2024-11-17 02:56:55.118527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:7717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.740 [2024-11-17 02:56:55.118553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.740 [2024-11-17 02:56:55.135154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.740 [2024-11-17 02:56:55.135209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:25409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.740 [2024-11-17 02:56:55.135237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.740 13543.00 IOPS, 52.90 MiB/s 00:36:46.740 Latency(us) 00:36:46.740 [2024-11-17T01:56:55.200Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:46.740 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:36:46.740 nvme0n1 : 2.01 13567.71 53.00 0.00 0.00 9420.85 4854.52 31457.28 00:36:46.740 [2024-11-17T01:56:55.200Z] 
=================================================================================================================== 00:36:46.740 [2024-11-17T01:56:55.200Z] Total : 13567.71 53.00 0.00 0.00 9420.85 4854.52 31457.28 00:36:46.740 { 00:36:46.740 "results": [ 00:36:46.740 { 00:36:46.740 "job": "nvme0n1", 00:36:46.740 "core_mask": "0x2", 00:36:46.740 "workload": "randread", 00:36:46.740 "status": "finished", 00:36:46.740 "queue_depth": 128, 00:36:46.740 "io_size": 4096, 00:36:46.740 "runtime": 2.005791, 00:36:46.740 "iops": 13567.71468213787, 00:36:46.740 "mibps": 52.998885477101055, 00:36:46.740 "io_failed": 0, 00:36:46.740 "io_timeout": 0, 00:36:46.740 "avg_latency_us": 9420.847940466372, 00:36:46.740 "min_latency_us": 4854.518518518518, 00:36:46.740 "max_latency_us": 31457.28 00:36:46.740 } 00:36:46.740 ], 00:36:46.740 "core_count": 1 00:36:46.740 } 00:36:46.740 02:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:36:46.740 02:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:36:46.740 02:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:36:46.740 | .driver_specific 00:36:46.740 | .nvme_error 00:36:46.740 | .status_code 00:36:46.740 | .command_transient_transport_error' 00:36:46.740 02:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:36:46.999 02:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 106 > 0 )) 00:36:46.999 02:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3132405 00:36:46.999 02:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3132405 ']' 00:36:46.999 02:56:55 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3132405 00:36:46.999 02:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:36:46.999 02:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:46.999 02:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3132405 00:36:47.257 02:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:47.257 02:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:47.257 02:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3132405' 00:36:47.257 killing process with pid 3132405 00:36:47.257 02:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3132405 00:36:47.257 Received shutdown signal, test time was about 2.000000 seconds 00:36:47.257 00:36:47.257 Latency(us) 00:36:47.257 [2024-11-17T01:56:55.717Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:47.257 [2024-11-17T01:56:55.717Z] =================================================================================================================== 00:36:47.257 [2024-11-17T01:56:55.717Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:47.257 02:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3132405 00:36:48.192 02:56:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:36:48.192 02:56:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:36:48.192 02:56:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # 
rw=randread 00:36:48.192 02:56:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:36:48.192 02:56:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:36:48.192 02:56:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3132949 00:36:48.192 02:56:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:36:48.192 02:56:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3132949 /var/tmp/bperf.sock 00:36:48.192 02:56:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3132949 ']' 00:36:48.192 02:56:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:48.192 02:56:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:48.192 02:56:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:48.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:48.192 02:56:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:48.192 02:56:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:48.192 [2024-11-17 02:56:56.396691] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:36:48.192 [2024-11-17 02:56:56.396852] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3132949 ] 00:36:48.192 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:48.192 Zero copy mechanism will not be used. 00:36:48.192 [2024-11-17 02:56:56.549339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:48.451 [2024-11-17 02:56:56.686300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:49.017 02:56:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:49.017 02:56:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:36:49.017 02:56:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:49.017 02:56:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:49.275 02:56:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:36:49.275 02:56:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:49.275 02:56:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:49.275 02:56:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:49.275 02:56:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:49.275 02:56:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:49.534 nvme0n1 00:36:49.534 02:56:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:36:49.534 02:56:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:49.534 02:56:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:49.534 02:56:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:49.534 02:56:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:36:49.534 02:56:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:49.792 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:49.792 Zero copy mechanism will not be used. 00:36:49.792 Running I/O for 2 seconds... 
00:36:49.792 [2024-11-17 02:56:58.089759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:49.792 [2024-11-17 02:56:58.089846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.792 [2024-11-17 02:56:58.089877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:49.792 [2024-11-17 02:56:58.096329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:49.792 [2024-11-17 02:56:58.096385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.792 [2024-11-17 02:56:58.096411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:49.792 [2024-11-17 02:56:58.102481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:49.792 [2024-11-17 02:56:58.102543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.792 [2024-11-17 02:56:58.102573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:49.792 [2024-11-17 02:56:58.108553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:49.792 [2024-11-17 02:56:58.108595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.792 [2024-11-17 02:56:58.108621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:49.792 [2024-11-17 02:56:58.114223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:49.792 [2024-11-17 02:56:58.114266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.792 [2024-11-17 02:56:58.114292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:49.792 [2024-11-17 02:56:58.118145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:49.792 [2024-11-17 02:56:58.118199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.792 [2024-11-17 02:56:58.118226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:49.792 [2024-11-17 02:56:58.123975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:49.792 [2024-11-17 02:56:58.124029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.792 [2024-11-17 02:56:58.124066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:49.792 [2024-11-17 02:56:58.130108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:49.792 [2024-11-17 02:56:58.130163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.792 [2024-11-17 02:56:58.130191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:49.792 [2024-11-17 02:56:58.136143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:49.792 [2024-11-17 02:56:58.136197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.792 [2024-11-17 02:56:58.136222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:49.792 [2024-11-17 02:56:58.142244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:49.792 [2024-11-17 02:56:58.142301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.792 [2024-11-17 02:56:58.142345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:49.792 [2024-11-17 02:56:58.149678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:49.792 [2024-11-17 02:56:58.149732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.792 [2024-11-17 02:56:58.149758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:49.792 [2024-11-17 02:56:58.158182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:49.792 [2024-11-17 02:56:58.158238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.792 [2024-11-17 02:56:58.158265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:49.792 [2024-11-17 02:56:58.165297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:49.792 [2024-11-17 02:56:58.165355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.792 [2024-11-17 02:56:58.165389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:49.792 [2024-11-17 02:56:58.171456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:49.792 [2024-11-17 02:56:58.171498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.792 [2024-11-17 02:56:58.171526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:49.792 [2024-11-17 02:56:58.178747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:49.792 [2024-11-17 02:56:58.178790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.792 [2024-11-17 02:56:58.178817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:49.792 [2024-11-17 02:56:58.184311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:49.792 [2024-11-17 02:56:58.184354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.792 [2024-11-17 02:56:58.184380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:49.792 [2024-11-17 02:56:58.189737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:49.792 [2024-11-17 02:56:58.189780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.792 [2024-11-17 02:56:58.189806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:49.792 [2024-11-17 02:56:58.196424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:49.792 [2024-11-17 02:56:58.196467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.793 [2024-11-17 02:56:58.196493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:49.793 [2024-11-17 02:56:58.202037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:49.793 [2024-11-17 02:56:58.202079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.793 [2024-11-17 02:56:58.202113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:49.793 [2024-11-17 02:56:58.206047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:49.793 [2024-11-17 02:56:58.206088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.793 [2024-11-17 02:56:58.206124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:49.793 [2024-11-17 02:56:58.212175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:49.793 [2024-11-17 02:56:58.212218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.793 [2024-11-17 02:56:58.212244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:49.793 [2024-11-17 02:56:58.218246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:49.793 [2024-11-17 02:56:58.218290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.793 [2024-11-17 02:56:58.218317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:49.793 [2024-11-17 02:56:58.222849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:49.793 [2024-11-17 02:56:58.222902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.793 [2024-11-17 02:56:58.222928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:49.793 [2024-11-17 02:56:58.229464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:49.793 [2024-11-17 02:56:58.229519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.793 [2024-11-17 02:56:58.229552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:49.793 [2024-11-17 02:56:58.236200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:49.793 [2024-11-17 02:56:58.236255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.793 [2024-11-17 02:56:58.236280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:49.793 [2024-11-17 02:56:58.242777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:49.793 [2024-11-17 02:56:58.242831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.793 [2024-11-17 02:56:58.242856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:49.793 [2024-11-17 02:56:58.248870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:49.793 [2024-11-17 02:56:58.248919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.793 [2024-11-17 02:56:58.248949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:50.052 [2024-11-17 02:56:58.255603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:50.052 [2024-11-17 02:56:58.255658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:50.052 [2024-11-17 02:56:58.255684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:50.052 [2024-11-17 02:56:58.262217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:50.052 [2024-11-17 02:56:58.262272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:50.052 [2024-11-17 02:56:58.262298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:50.052 [2024-11-17 02:56:58.268788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:50.052 [2024-11-17 02:56:58.268845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:50.052 [2024-11-17 02:56:58.268872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:50.052 [2024-11-17 02:56:58.274905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:50.052 [2024-11-17 02:56:58.274959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:50.052 [2024-11-17 02:56:58.274986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:50.052 [2024-11-17 02:56:58.280764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:50.052 [2024-11-17 02:56:58.280819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:50.053 [2024-11-17 02:56:58.280844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:50.053 [2024-11-17 02:56:58.286812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:50.053 [2024-11-17 02:56:58.286882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:50.053 [2024-11-17 02:56:58.286922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:50.053 [2024-11-17 02:56:58.293242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:50.053 [2024-11-17 02:56:58.293304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:50.053 [2024-11-17 02:56:58.293333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:50.053 [2024-11-17 02:56:58.300204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:50.053 [2024-11-17 02:56:58.300260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:50.053 [2024-11-17 02:56:58.300287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:50.053 [2024-11-17 02:56:58.307093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:50.053 [2024-11-17 02:56:58.307149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:50.053 [2024-11-17 02:56:58.307179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:50.053 [2024-11-17 02:56:58.314073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:50.053 [2024-11-17 02:56:58.314132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:50.053 [2024-11-17 02:56:58.314177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:50.053 [2024-11-17 02:56:58.320844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:50.053 [2024-11-17 02:56:58.320901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:50.053 [2024-11-17 02:56:58.320928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:50.053 [2024-11-17 02:56:58.328003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:50.053 [2024-11-17 02:56:58.328048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:50.053 [2024-11-17 02:56:58.328075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:50.053 [2024-11-17 02:56:58.334948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:50.053 [2024-11-17 02:56:58.335019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:50.053 [2024-11-17 02:56:58.335045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:50.053 [2024-11-17 02:56:58.342003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:50.053 [2024-11-17 02:56:58.342053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:50.053 [2024-11-17 02:56:58.342092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:50.053 [2024-11-17 02:56:58.349972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:50.053 [2024-11-17 02:56:58.350032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:50.053 [2024-11-17 02:56:58.350063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:50.053 [2024-11-17 02:56:58.357105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:50.053 [2024-11-17 02:56:58.357167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:50.053 [2024-11-17 02:56:58.357193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:50.053 [2024-11-17 02:56:58.364217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:50.053 [2024-11-17 02:56:58.364278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:50.053 [2024-11-17 02:56:58.364305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:50.053 [2024-11-17 02:56:58.370764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:50.053 [2024-11-17 02:56:58.370822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:50.053 [2024-11-17 02:56:58.370863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:50.053 [2024-11-17 02:56:58.377869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:50.053 [2024-11-17 02:56:58.377927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:50.053 [2024-11-17 02:56:58.377952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:50.053 [2024-11-17 02:56:58.386275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:50.053 [2024-11-17 02:56:58.386318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:50.053 [2024-11-17 02:56:58.386344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:50.053 [2024-11-17 02:56:58.394121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:50.053 [2024-11-17 02:56:58.394166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:50.053 [2024-11-17 02:56:58.394193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:50.053 [2024-11-17 02:56:58.400928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:50.053 [2024-11-17 02:56:58.400970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:50.053 [2024-11-17 02:56:58.400994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:50.053 [2024-11-17 02:56:58.407486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:50.053 [2024-11-17 02:56:58.407551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:50.053 [2024-11-17 02:56:58.407578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:50.053 [2024-11-17 02:56:58.414060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:50.053 [2024-11-17 02:56:58.414126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:50.053 [2024-11-17 02:56:58.414153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:50.053 [2024-11-17 02:56:58.422094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:50.053 [2024-11-17 02:56:58.422166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:50.053 [2024-11-17 02:56:58.422193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:50.053 [2024-11-17 02:56:58.430067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:50.053 [2024-11-17 02:56:58.430141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:50.053 [2024-11-17 02:56:58.430185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:50.053 [2024-11-17 02:56:58.436936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:50.053 [2024-11-17 02:56:58.436984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:50.053 [2024-11-17 02:56:58.437014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:50.053 [2024-11-17 02:56:58.443569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:50.053 [2024-11-17 02:56:58.443617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:50.053 [2024-11-17 02:56:58.443646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:50.053 [2024-11-17 02:56:58.450198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:50.053 [2024-11-17 02:56:58.450241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:50.054 [2024-11-17 02:56:58.450266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:50.054 [2024-11-17 02:56:58.458322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:50.054 [2024-11-17 02:56:58.458365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:50.054 [2024-11-17 02:56:58.458392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:50.054 [2024-11-17 02:56:58.466331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:50.054 [2024-11-17 02:56:58.466375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:50.054 [2024-11-17 02:56:58.466425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:50.054 [2024-11-17 02:56:58.473298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:50.054 [2024-11-17 02:56:58.473357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:50.054 [2024-11-17 02:56:58.473384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:50.054 [2024-11-17 02:56:58.479950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:50.054 [2024-11-17 02:56:58.480006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:50.054 [2024-11-17 02:56:58.480031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:50.054 [2024-11-17 02:56:58.486479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:50.054 [2024-11-17 02:56:58.486535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:50.054 [2024-11-17 02:56:58.486562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:50.054 [2024-11-17 02:56:58.494246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:50.054 [2024-11-17 02:56:58.494290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:50.054 [2024-11-17 02:56:58.494331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:50.054 [2024-11-17 02:56:58.502926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:50.054 [2024-11-17 02:56:58.502976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:50.054 [2024-11-17 02:56:58.503006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:50.054 [2024-11-17 02:56:58.511389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:50.054 [2024-11-17 02:56:58.511437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:50.054 [2024-11-17 02:56:58.511464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:50.314 [2024-11-17 02:56:58.520669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:50.314 [2024-11-17 02:56:58.520735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:50.314 [2024-11-17 02:56:58.520765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:50.314 [2024-11-17 02:56:58.528433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:50.314 [2024-11-17 02:56:58.528490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:50.314 [2024-11-17 02:56:58.528514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:50.314 [2024-11-17 02:56:58.532982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:50.314 [2024-11-17 02:56:58.533033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:50.314 [2024-11-17 02:56:58.533060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:50.314 [2024-11-17 02:56:58.536810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:50.314 [2024-11-17 02:56:58.536852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:50.314 [2024-11-17 02:56:58.536878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:50.314 [2024-11-17 02:56:58.541889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:50.314 [2024-11-17 02:56:58.541930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:50.314 [2024-11-17 02:56:58.541956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:50.314 [2024-11-17 02:56:58.545728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:50.314 [2024-11-17 02:56:58.545770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:50.314 [2024-11-17 02:56:58.545797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:50.314 [2024-11-17 02:56:58.550947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:50.314 [2024-11-17 02:56:58.550990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:50.314 [2024-11-17 02:56:58.551016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:50.314 [2024-11-17 02:56:58.556222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:50.314 [2024-11-17 02:56:58.556265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:50.314 [2024-11-17 02:56:58.556291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:50.314 [2024-11-17 02:56:58.561209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:50.314 [2024-11-17 02:56:58.561266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:50.314 [2024-11-17 02:56:58.561293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:50.314 [2024-11-17 02:56:58.568296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:50.314 [2024-11-17 02:56:58.568340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:50.314 [2024-11-17 02:56:58.568367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:50.314 [2024-11-17 02:56:58.575929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:50.314 [2024-11-17 02:56:58.575986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:50.314 [2024-11-17 02:56:58.576024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:50.314 [2024-11-17 02:56:58.584160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:50.314 [2024-11-17 02:56:58.584203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:50.314 [2024-11-17 02:56:58.584230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:50.314 [2024-11-17 02:56:58.591463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:50.314 [2024-11-17 02:56:58.591507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:50.314 [2024-11-17 02:56:58.591534] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:50.314 [2024-11-17 02:56:58.598431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.314 [2024-11-17 02:56:58.598488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.314 [2024-11-17 02:56:58.598515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:50.314 [2024-11-17 02:56:58.607050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.314 [2024-11-17 02:56:58.607103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.314 [2024-11-17 02:56:58.607132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:50.314 [2024-11-17 02:56:58.613674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.314 [2024-11-17 02:56:58.613720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.314 [2024-11-17 02:56:58.613747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:50.315 [2024-11-17 02:56:58.619977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.315 [2024-11-17 02:56:58.620021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23584 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:36:50.315 [2024-11-17 02:56:58.620048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:50.315 [2024-11-17 02:56:58.626463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.315 [2024-11-17 02:56:58.626527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.315 [2024-11-17 02:56:58.626556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:50.315 [2024-11-17 02:56:58.631162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.315 [2024-11-17 02:56:58.631205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.315 [2024-11-17 02:56:58.631231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:50.315 [2024-11-17 02:56:58.636312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.315 [2024-11-17 02:56:58.636380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.315 [2024-11-17 02:56:58.636408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:50.315 [2024-11-17 02:56:58.642471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.315 [2024-11-17 02:56:58.642515] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.315 [2024-11-17 02:56:58.642542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:50.315 [2024-11-17 02:56:58.647017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.315 [2024-11-17 02:56:58.647059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.315 [2024-11-17 02:56:58.647085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:50.315 [2024-11-17 02:56:58.652887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.315 [2024-11-17 02:56:58.652930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.315 [2024-11-17 02:56:58.652958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:50.315 [2024-11-17 02:56:58.660177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.315 [2024-11-17 02:56:58.660220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.315 [2024-11-17 02:56:58.660246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:50.315 [2024-11-17 02:56:58.668016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150001f2a00) 00:36:50.315 [2024-11-17 02:56:58.668071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.315 [2024-11-17 02:56:58.668104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:50.315 [2024-11-17 02:56:58.674227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.315 [2024-11-17 02:56:58.674270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.315 [2024-11-17 02:56:58.674297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:50.315 [2024-11-17 02:56:58.678732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.315 [2024-11-17 02:56:58.678785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.315 [2024-11-17 02:56:58.678810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:50.315 [2024-11-17 02:56:58.684977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.315 [2024-11-17 02:56:58.685034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.315 [2024-11-17 02:56:58.685061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:50.315 [2024-11-17 02:56:58.690614] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.315 [2024-11-17 02:56:58.690673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.315 [2024-11-17 02:56:58.690698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:50.315 [2024-11-17 02:56:58.696478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.315 [2024-11-17 02:56:58.696535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.315 [2024-11-17 02:56:58.696560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:50.315 [2024-11-17 02:56:58.703153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.315 [2024-11-17 02:56:58.703197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.315 [2024-11-17 02:56:58.703223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:50.315 [2024-11-17 02:56:58.710985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.315 [2024-11-17 02:56:58.711033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.315 [2024-11-17 02:56:58.711062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:50.315 [2024-11-17 02:56:58.719198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.315 [2024-11-17 02:56:58.719242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.315 [2024-11-17 02:56:58.719268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:50.315 [2024-11-17 02:56:58.726031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.315 [2024-11-17 02:56:58.726087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.315 [2024-11-17 02:56:58.726123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:50.315 [2024-11-17 02:56:58.732909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.315 [2024-11-17 02:56:58.732965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.315 [2024-11-17 02:56:58.732991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:50.315 [2024-11-17 02:56:58.739634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.315 [2024-11-17 02:56:58.739690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.315 [2024-11-17 02:56:58.739717] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:50.315 [2024-11-17 02:56:58.745500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.315 [2024-11-17 02:56:58.745568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.315 [2024-11-17 02:56:58.745595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:50.315 [2024-11-17 02:56:58.751344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.315 [2024-11-17 02:56:58.751404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.315 [2024-11-17 02:56:58.751429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:50.315 [2024-11-17 02:56:58.757296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.315 [2024-11-17 02:56:58.757354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.315 [2024-11-17 02:56:58.757380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:50.315 [2024-11-17 02:56:58.762389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.315 [2024-11-17 02:56:58.762431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1120 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:36:50.315 [2024-11-17 02:56:58.762457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:50.315 [2024-11-17 02:56:58.766050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.316 [2024-11-17 02:56:58.766091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.316 [2024-11-17 02:56:58.766127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:50.316 [2024-11-17 02:56:58.771532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.316 [2024-11-17 02:56:58.771577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.316 [2024-11-17 02:56:58.771604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:50.575 [2024-11-17 02:56:58.778366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.575 [2024-11-17 02:56:58.778431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.575 [2024-11-17 02:56:58.778460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:50.575 [2024-11-17 02:56:58.785457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.575 [2024-11-17 02:56:58.785515] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.575 [2024-11-17 02:56:58.785543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:50.575 [2024-11-17 02:56:58.790724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.575 [2024-11-17 02:56:58.790767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.575 [2024-11-17 02:56:58.790793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:50.575 [2024-11-17 02:56:58.796003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.575 [2024-11-17 02:56:58.796047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.575 [2024-11-17 02:56:58.796075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:50.575 [2024-11-17 02:56:58.803289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.575 [2024-11-17 02:56:58.803346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.575 [2024-11-17 02:56:58.803373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:50.575 [2024-11-17 02:56:58.811148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150001f2a00) 00:36:50.575 [2024-11-17 02:56:58.811209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.575 [2024-11-17 02:56:58.811253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:50.575 [2024-11-17 02:56:58.820112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.575 [2024-11-17 02:56:58.820176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.575 [2024-11-17 02:56:58.820221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:50.575 [2024-11-17 02:56:58.829053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.575 [2024-11-17 02:56:58.829128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.575 [2024-11-17 02:56:58.829174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:50.575 [2024-11-17 02:56:58.837846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.575 [2024-11-17 02:56:58.837896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.575 [2024-11-17 02:56:58.837926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:50.575 [2024-11-17 02:56:58.846484] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.575 [2024-11-17 02:56:58.846527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.575 [2024-11-17 02:56:58.846570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:50.575 [2024-11-17 02:56:58.855083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.575 [2024-11-17 02:56:58.855141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.575 [2024-11-17 02:56:58.855171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:50.575 [2024-11-17 02:56:58.864031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.575 [2024-11-17 02:56:58.864117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.575 [2024-11-17 02:56:58.864150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:50.575 [2024-11-17 02:56:58.872685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.575 [2024-11-17 02:56:58.872745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.575 [2024-11-17 02:56:58.872775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:50.575 [2024-11-17 02:56:58.881323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.575 [2024-11-17 02:56:58.881367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.575 [2024-11-17 02:56:58.881393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:50.575 [2024-11-17 02:56:58.889893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.575 [2024-11-17 02:56:58.889957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.575 [2024-11-17 02:56:58.889987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:50.575 [2024-11-17 02:56:58.898561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.575 [2024-11-17 02:56:58.898611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.575 [2024-11-17 02:56:58.898640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:50.575 [2024-11-17 02:56:58.907008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.575 [2024-11-17 02:56:58.907073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.575 [2024-11-17 02:56:58.907111] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:50.575 [2024-11-17 02:56:58.915630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.575 [2024-11-17 02:56:58.915695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.575 [2024-11-17 02:56:58.915725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:50.575 [2024-11-17 02:56:58.924014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.575 [2024-11-17 02:56:58.924077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.575 [2024-11-17 02:56:58.924116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:50.575 [2024-11-17 02:56:58.932642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.575 [2024-11-17 02:56:58.932690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.575 [2024-11-17 02:56:58.932718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:50.575 [2024-11-17 02:56:58.941406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.575 [2024-11-17 02:56:58.941465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19136 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:36:50.575 [2024-11-17 02:56:58.941494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:50.575 [2024-11-17 02:56:58.948788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.575 [2024-11-17 02:56:58.948847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.575 [2024-11-17 02:56:58.948874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:50.575 [2024-11-17 02:56:58.955503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.575 [2024-11-17 02:56:58.955548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.575 [2024-11-17 02:56:58.955574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:50.575 [2024-11-17 02:56:58.962899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.575 [2024-11-17 02:56:58.962944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.575 [2024-11-17 02:56:58.962971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:50.575 [2024-11-17 02:56:58.970034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.575 [2024-11-17 02:56:58.970079] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.575 [2024-11-17 02:56:58.970115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:50.575 [2024-11-17 02:56:58.976862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.575 [2024-11-17 02:56:58.976921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.576 [2024-11-17 02:56:58.976968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:50.576 [2024-11-17 02:56:58.984182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.576 [2024-11-17 02:56:58.984227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.576 [2024-11-17 02:56:58.984253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:50.576 [2024-11-17 02:56:58.991329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.576 [2024-11-17 02:56:58.991372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.576 [2024-11-17 02:56:58.991398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:50.576 [2024-11-17 02:56:58.998634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150001f2a00) 00:36:50.576 [2024-11-17 02:56:58.998678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.576 [2024-11-17 02:56:58.998731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:50.576 [2024-11-17 02:56:59.006208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.576 [2024-11-17 02:56:59.006266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.576 [2024-11-17 02:56:59.006294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:50.576 [2024-11-17 02:56:59.013572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.576 [2024-11-17 02:56:59.013616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.576 [2024-11-17 02:56:59.013643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:50.576 [2024-11-17 02:56:59.020251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.576 [2024-11-17 02:56:59.020295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.576 [2024-11-17 02:56:59.020340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:50.576 [2024-11-17 02:56:59.027456] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.576 [2024-11-17 02:56:59.027499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.576 [2024-11-17 02:56:59.027525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:50.836 [2024-11-17 02:56:59.035052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.836 [2024-11-17 02:56:59.035123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.836 [2024-11-17 02:56:59.035175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:50.836 [2024-11-17 02:56:59.042311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.836 [2024-11-17 02:56:59.042356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.836 [2024-11-17 02:56:59.042383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:50.836 [2024-11-17 02:56:59.049607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.836 [2024-11-17 02:56:59.049652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.836 [2024-11-17 02:56:59.049678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:50.836 [2024-11-17 02:56:59.056861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.836 [2024-11-17 02:56:59.056905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.836 [2024-11-17 02:56:59.056930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:50.836 [2024-11-17 02:56:59.063968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.836 [2024-11-17 02:56:59.064011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.836 [2024-11-17 02:56:59.064037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:50.836 [2024-11-17 02:56:59.071067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.836 [2024-11-17 02:56:59.071121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.836 [2024-11-17 02:56:59.071149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:50.836 [2024-11-17 02:56:59.078410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.836 [2024-11-17 02:56:59.078454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.836 [2024-11-17 02:56:59.078479] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:50.836 4526.00 IOPS, 565.75 MiB/s [2024-11-17T01:56:59.296Z] [2024-11-17 02:56:59.086361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.836 [2024-11-17 02:56:59.086405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.836 [2024-11-17 02:56:59.086431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:50.836 [2024-11-17 02:56:59.093681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.836 [2024-11-17 02:56:59.093734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.836 [2024-11-17 02:56:59.093761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:50.836 [2024-11-17 02:56:59.100293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.836 [2024-11-17 02:56:59.100337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.836 [2024-11-17 02:56:59.100363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:50.836 [2024-11-17 02:56:59.108718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.836 [2024-11-17 02:56:59.108762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:12 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.836 [2024-11-17 02:56:59.108788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:50.836 [2024-11-17 02:56:59.118311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.836 [2024-11-17 02:56:59.118355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.836 [2024-11-17 02:56:59.118382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:50.836 [2024-11-17 02:56:59.126628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.836 [2024-11-17 02:56:59.126724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.836 [2024-11-17 02:56:59.126756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:50.836 [2024-11-17 02:56:59.135564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.836 [2024-11-17 02:56:59.135609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.836 [2024-11-17 02:56:59.135635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:50.836 [2024-11-17 02:56:59.143649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.836 [2024-11-17 
02:56:59.143692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.836 [2024-11-17 02:56:59.143719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:50.836 [2024-11-17 02:56:59.147549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.836 [2024-11-17 02:56:59.147591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.836 [2024-11-17 02:56:59.147617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:50.836 [2024-11-17 02:56:59.153222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.836 [2024-11-17 02:56:59.153277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.836 [2024-11-17 02:56:59.153302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:50.836 [2024-11-17 02:56:59.159361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.836 [2024-11-17 02:56:59.159414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.836 [2024-11-17 02:56:59.159453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:50.836 [2024-11-17 02:56:59.165182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.836 [2024-11-17 02:56:59.165230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.836 [2024-11-17 02:56:59.165259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:50.836 [2024-11-17 02:56:59.171206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.836 [2024-11-17 02:56:59.171262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.837 [2024-11-17 02:56:59.171289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:50.837 [2024-11-17 02:56:59.177008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.837 [2024-11-17 02:56:59.177055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.837 [2024-11-17 02:56:59.177084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:50.837 [2024-11-17 02:56:59.182912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.837 [2024-11-17 02:56:59.182954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.837 [2024-11-17 02:56:59.182979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:50.837 
[2024-11-17 02:56:59.188837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.837 [2024-11-17 02:56:59.188893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.837 [2024-11-17 02:56:59.188919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:50.837 [2024-11-17 02:56:59.194793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.837 [2024-11-17 02:56:59.194847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.837 [2024-11-17 02:56:59.194873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:50.837 [2024-11-17 02:56:59.201342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.837 [2024-11-17 02:56:59.201406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.837 [2024-11-17 02:56:59.201431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:50.837 [2024-11-17 02:56:59.207694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.837 [2024-11-17 02:56:59.207735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.837 [2024-11-17 02:56:59.207759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:50.837 [2024-11-17 02:56:59.213663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.837 [2024-11-17 02:56:59.213720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.837 [2024-11-17 02:56:59.213762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:50.837 [2024-11-17 02:56:59.219733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.837 [2024-11-17 02:56:59.219786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.837 [2024-11-17 02:56:59.219810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:50.837 [2024-11-17 02:56:59.225570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.837 [2024-11-17 02:56:59.225624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.837 [2024-11-17 02:56:59.225647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:50.837 [2024-11-17 02:56:59.231422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.837 [2024-11-17 02:56:59.231476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.837 [2024-11-17 
02:56:59.231524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:50.837 [2024-11-17 02:56:59.237308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.837 [2024-11-17 02:56:59.237363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.837 [2024-11-17 02:56:59.237390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:50.837 [2024-11-17 02:56:59.243039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.837 [2024-11-17 02:56:59.243082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.837 [2024-11-17 02:56:59.243117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:50.837 [2024-11-17 02:56:59.248607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.837 [2024-11-17 02:56:59.248650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.837 [2024-11-17 02:56:59.248675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:50.837 [2024-11-17 02:56:59.252970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.837 [2024-11-17 02:56:59.253011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 
lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.837 [2024-11-17 02:56:59.253036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:50.837 [2024-11-17 02:56:59.257766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.837 [2024-11-17 02:56:59.257819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.837 [2024-11-17 02:56:59.257844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:50.837 [2024-11-17 02:56:59.263675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.837 [2024-11-17 02:56:59.263731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.837 [2024-11-17 02:56:59.263760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:50.837 [2024-11-17 02:56:59.269541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.837 [2024-11-17 02:56:59.269597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.837 [2024-11-17 02:56:59.269624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:50.837 [2024-11-17 02:56:59.276894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.837 [2024-11-17 02:56:59.276939] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.837 [2024-11-17 02:56:59.276967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:50.837 [2024-11-17 02:56:59.282280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.837 [2024-11-17 02:56:59.282339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.837 [2024-11-17 02:56:59.282365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:50.837 [2024-11-17 02:56:59.287889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.837 [2024-11-17 02:56:59.287930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.837 [2024-11-17 02:56:59.287956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:50.837 [2024-11-17 02:56:59.294433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.837 [2024-11-17 02:56:59.294477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.837 [2024-11-17 02:56:59.294503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:51.097 [2024-11-17 02:56:59.299598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x6150001f2a00) 00:36:51.097 [2024-11-17 02:56:59.299657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.097 [2024-11-17 02:56:59.299684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:51.097 [2024-11-17 02:56:59.304685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.097 [2024-11-17 02:56:59.304727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.097 [2024-11-17 02:56:59.304753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:51.097 [2024-11-17 02:56:59.311075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.097 [2024-11-17 02:56:59.311145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.097 [2024-11-17 02:56:59.311186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:51.097 [2024-11-17 02:56:59.318017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.097 [2024-11-17 02:56:59.318060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.097 [2024-11-17 02:56:59.318087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:51.097 [2024-11-17 
02:56:59.325092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.097 [2024-11-17 02:56:59.325158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.097 [2024-11-17 02:56:59.325185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:51.097 [2024-11-17 02:56:59.331830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.097 [2024-11-17 02:56:59.331872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.097 [2024-11-17 02:56:59.331908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:51.097 [2024-11-17 02:56:59.338666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.097 [2024-11-17 02:56:59.338724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.097 [2024-11-17 02:56:59.338750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:51.097 [2024-11-17 02:56:59.345606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.097 [2024-11-17 02:56:59.345664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.097 [2024-11-17 02:56:59.345705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:51.097 [2024-11-17 02:56:59.352651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.097 [2024-11-17 02:56:59.352707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.097 [2024-11-17 02:56:59.352733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:51.097 [2024-11-17 02:56:59.359155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.097 [2024-11-17 02:56:59.359199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.097 [2024-11-17 02:56:59.359225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:51.097 [2024-11-17 02:56:59.366178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.097 [2024-11-17 02:56:59.366222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.097 [2024-11-17 02:56:59.366250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:51.097 [2024-11-17 02:56:59.373309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.097 [2024-11-17 02:56:59.373352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.097 [2024-11-17 02:56:59.373378] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:51.097 [2024-11-17 02:56:59.381552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:51.097 [2024-11-17 02:56:59.381600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:51.097 [2024-11-17 02:56:59.381629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:51.097 [2024-11-17 02:56:59.389559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:51.097 [2024-11-17 02:56:59.389616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:51.097 [2024-11-17 02:56:59.389659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:51.097 [2024-11-17 02:56:59.396563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:51.097 [2024-11-17 02:56:59.396608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:51.097 [2024-11-17 02:56:59.396635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:51.097 [2024-11-17 02:56:59.403431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:51.097 [2024-11-17 02:56:59.403487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:51.097 [2024-11-17 02:56:59.403529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:51.097 [2024-11-17 02:56:59.409942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:51.097 [2024-11-17 02:56:59.409985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:51.097 [2024-11-17 02:56:59.410012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:51.097 [2024-11-17 02:56:59.416647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:51.097 [2024-11-17 02:56:59.416695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:51.097 [2024-11-17 02:56:59.416724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:51.097 [2024-11-17 02:56:59.424471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:51.097 [2024-11-17 02:56:59.424534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:51.097 [2024-11-17 02:56:59.424563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:51.097 [2024-11-17 02:56:59.432969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:51.097 [2024-11-17 02:56:59.433012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:51.097 [2024-11-17 02:56:59.433037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:51.097 [2024-11-17 02:56:59.439988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:51.097 [2024-11-17 02:56:59.440037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:51.097 [2024-11-17 02:56:59.440065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:51.097 [2024-11-17 02:56:59.446878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:51.097 [2024-11-17 02:56:59.446936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:51.097 [2024-11-17 02:56:59.446964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:51.097 [2024-11-17 02:56:59.453328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:51.097 [2024-11-17 02:56:59.453375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:51.098 [2024-11-17 02:56:59.453414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:51.098 [2024-11-17 02:56:59.461275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:51.098 [2024-11-17 02:56:59.461331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:51.098 [2024-11-17 02:56:59.461373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:51.098 [2024-11-17 02:56:59.469858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:51.098 [2024-11-17 02:56:59.469919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:51.098 [2024-11-17 02:56:59.469946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:51.098 [2024-11-17 02:56:59.477217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:51.098 [2024-11-17 02:56:59.477276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:51.098 [2024-11-17 02:56:59.477304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:51.098 [2024-11-17 02:56:59.484137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:51.098 [2024-11-17 02:56:59.484180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:51.098 [2024-11-17 02:56:59.484207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:51.098 [2024-11-17 02:56:59.490058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:51.098 [2024-11-17 02:56:59.490134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:51.098 [2024-11-17 02:56:59.490161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:51.098 [2024-11-17 02:56:59.496042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:51.098 [2024-11-17 02:56:59.496085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:51.098 [2024-11-17 02:56:59.496122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:51.098 [2024-11-17 02:56:59.501845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:51.098 [2024-11-17 02:56:59.501904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:51.098 [2024-11-17 02:56:59.501930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:51.098 [2024-11-17 02:56:59.507784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:51.098 [2024-11-17 02:56:59.507843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:51.098 [2024-11-17 02:56:59.507868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:51.098 [2024-11-17 02:56:59.513626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:51.098 [2024-11-17 02:56:59.513682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:51.098 [2024-11-17 02:56:59.513709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:51.098 [2024-11-17 02:56:59.520710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:51.098 [2024-11-17 02:56:59.520757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:51.098 [2024-11-17 02:56:59.520786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:51.098 [2024-11-17 02:56:59.529441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:51.098 [2024-11-17 02:56:59.529490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:51.098 [2024-11-17 02:56:59.529519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:51.098 [2024-11-17 02:56:59.536672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:51.098 [2024-11-17 02:56:59.536714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:51.098 [2024-11-17 02:56:59.536740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:51.098 [2024-11-17 02:56:59.543404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:51.098 [2024-11-17 02:56:59.543447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:51.098 [2024-11-17 02:56:59.543472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:51.098 [2024-11-17 02:56:59.550089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:51.098 [2024-11-17 02:56:59.550140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:51.098 [2024-11-17 02:56:59.550166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:51.358 [2024-11-17 02:56:59.556618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:51.358 [2024-11-17 02:56:59.556677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:51.358 [2024-11-17 02:56:59.556705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:51.358 [2024-11-17 02:56:59.563697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:51.358 [2024-11-17 02:56:59.563745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:51.358 [2024-11-17 02:56:59.563774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:51.358 [2024-11-17 02:56:59.572331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:51.358 [2024-11-17 02:56:59.572376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:51.358 [2024-11-17 02:56:59.572410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:51.358 [2024-11-17 02:56:59.580085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:51.358 [2024-11-17 02:56:59.580138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:51.358 [2024-11-17 02:56:59.580165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:51.358 [2024-11-17 02:56:59.588332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:51.358 [2024-11-17 02:56:59.588375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:51.358 [2024-11-17 02:56:59.588401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:51.358 [2024-11-17 02:56:59.596905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:51.358 [2024-11-17 02:56:59.596954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:51.358 [2024-11-17 02:56:59.596984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:51.358 [2024-11-17 02:56:59.605011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:51.358 [2024-11-17 02:56:59.605068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:51.358 [2024-11-17 02:56:59.605109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:51.358 [2024-11-17 02:56:59.614231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:51.358 [2024-11-17 02:56:59.614289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:51.358 [2024-11-17 02:56:59.614314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:51.358 [2024-11-17 02:56:59.622297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:51.358 [2024-11-17 02:56:59.622356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:51.358 [2024-11-17 02:56:59.622381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:51.358 [2024-11-17 02:56:59.628630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:51.358 [2024-11-17 02:56:59.628689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:51.358 [2024-11-17 02:56:59.628713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:51.358 [2024-11-17 02:56:59.634660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:51.358 [2024-11-17 02:56:59.634714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:51.358 [2024-11-17 02:56:59.634739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:51.358 [2024-11-17 02:56:59.640557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:51.358 [2024-11-17 02:56:59.640622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:51.358 [2024-11-17 02:56:59.640648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:51.358 [2024-11-17 02:56:59.646543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:51.358 [2024-11-17 02:56:59.646602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:51.358 [2024-11-17 02:56:59.646629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:51.358 [2024-11-17 02:56:59.652516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:51.358 [2024-11-17 02:56:59.652562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:51.358 [2024-11-17 02:56:59.652591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:51.358 [2024-11-17 02:56:59.658565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:51.358 [2024-11-17 02:56:59.658622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:51.358 [2024-11-17 02:56:59.658649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:51.358 [2024-11-17 02:56:59.664384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:51.358 [2024-11-17 02:56:59.664425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:51.358 [2024-11-17 02:56:59.664450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:51.358 [2024-11-17 02:56:59.670301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:51.358 [2024-11-17 02:56:59.670358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:51.358 [2024-11-17 02:56:59.670399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:51.358 [2024-11-17 02:56:59.674676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:51.358 [2024-11-17 02:56:59.674732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:51.358 [2024-11-17 02:56:59.674758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:51.358 [2024-11-17 02:56:59.679135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:51.358 [2024-11-17 02:56:59.679177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:51.358 [2024-11-17 02:56:59.679204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:51.358 [2024-11-17 02:56:59.683966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:51.358 [2024-11-17 02:56:59.684007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:51.358 [2024-11-17 02:56:59.684041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:51.358 [2024-11-17 02:56:59.688847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:51.358 [2024-11-17 02:56:59.688889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:51.358 [2024-11-17 02:56:59.688915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:51.358 [2024-11-17 02:56:59.692639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:51.358 [2024-11-17 02:56:59.692680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:51.358 [2024-11-17 02:56:59.692706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:51.358 [2024-11-17 02:56:59.698038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:51.358 [2024-11-17 02:56:59.698105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:51.358 [2024-11-17 02:56:59.698135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:51.358 [2024-11-17 02:56:59.704718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:51.358 [2024-11-17 02:56:59.704761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:51.358 [2024-11-17 02:56:59.704786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:51.358 [2024-11-17 02:56:59.711313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:51.359 [2024-11-17 02:56:59.711355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:51.359 [2024-11-17 02:56:59.711381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:51.359 [2024-11-17 02:56:59.717811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:51.359 [2024-11-17 02:56:59.717853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:51.359 [2024-11-17 02:56:59.717880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:51.359 [2024-11-17 02:56:59.723341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:51.359 [2024-11-17 02:56:59.723399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:51.359 [2024-11-17 02:56:59.723424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:51.359 [2024-11-17 02:56:59.730386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:51.359 [2024-11-17 02:56:59.730443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:51.359 [2024-11-17 02:56:59.730496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:51.359 [2024-11-17 02:56:59.738872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:51.359 [2024-11-17 02:56:59.738934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:51.359 [2024-11-17 02:56:59.738960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:51.359 [2024-11-17 02:56:59.746996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:51.359 [2024-11-17 02:56:59.747038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:51.359 [2024-11-17 02:56:59.747064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:51.359 [2024-11-17 02:56:59.756068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:51.359 [2024-11-17 02:56:59.756131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:51.359 [2024-11-17 02:56:59.756172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:51.359 [2024-11-17 02:56:59.763056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:51.359 [2024-11-17 02:56:59.763108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:51.359 [2024-11-17 02:56:59.763137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:51.359 [2024-11-17 02:56:59.767034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:51.359 [2024-11-17 02:56:59.767075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:51.359 [2024-11-17 02:56:59.767108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:51.359 [2024-11-17 02:56:59.771245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:51.359 [2024-11-17 02:56:59.771286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:51.359 [2024-11-17 02:56:59.771312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:51.359 [2024-11-17 02:56:59.775903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:51.359 [2024-11-17 02:56:59.775946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:51.359 [2024-11-17 02:56:59.775972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:51.359 [2024-11-17 02:56:59.779563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:51.359 [2024-11-17 02:56:59.779618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:51.359 [2024-11-17 02:56:59.779645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:51.359 [2024-11-17 02:56:59.784123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:51.359 [2024-11-17 02:56:59.784165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:51.359 [2024-11-17 02:56:59.784198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:51.359 [2024-11-17 02:56:59.788679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:51.359 [2024-11-17 02:56:59.788721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:51.359 [2024-11-17 02:56:59.788747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:51.359 [2024-11-17 02:56:59.793479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:51.359 [2024-11-17 02:56:59.793521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:51.359 [2024-11-17 02:56:59.793547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:51.359 [2024-11-17 02:56:59.800399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:51.359 [2024-11-17 02:56:59.800461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:51.359 [2024-11-17 02:56:59.800490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:51.359 [2024-11-17 02:56:59.808752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:51.359 [2024-11-17 02:56:59.808809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:51.359 [2024-11-17 02:56:59.808834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:51.359 [2024-11-17 02:56:59.815939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:51.359 [2024-11-17 02:56:59.815988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:51.359 [2024-11-17 02:56:59.816017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:51.618 [2024-11-17 02:56:59.821340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:51.618 [2024-11-17 02:56:59.821385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:51.618 [2024-11-17 02:56:59.821411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:51.618 [2024-11-17 02:56:59.825731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:51.618 [2024-11-17 02:56:59.825774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:51.618 [2024-11-17 02:56:59.825801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:51.618 [2024-11-17 02:56:59.831161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:51.618 [2024-11-17 02:56:59.831209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:51.618 [2024-11-17 02:56:59.831239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:51.619 [2024-11-17 02:56:59.837368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:51.619 [2024-11-17 02:56:59.837420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:51.619 [2024-11-17 02:56:59.837448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:51.619 [2024-11-17 02:56:59.842014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:51.619 [2024-11-17 02:56:59.842056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:51.619 [2024-11-17 02:56:59.842083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:51.619 [2024-11-17 02:56:59.849057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:51.619 [2024-11-17 02:56:59.849122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:51.619 [2024-11-17 02:56:59.849150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:51.619 [2024-11-17 02:56:59.856057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:51.619 [2024-11-17 02:56:59.856124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:51.619 [2024-11-17 02:56:59.856184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:51.619 [2024-11-17 02:56:59.863567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:51.619 [2024-11-17 02:56:59.863621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:51.619 [2024-11-17 02:56:59.863647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:51.619 [2024-11-17 02:56:59.870885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:51.619 [2024-11-17 02:56:59.870933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:51.619 [2024-11-17 02:56:59.870962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:51.619 [2024-11-17 02:56:59.878336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:51.619 [2024-11-17 02:56:59.878392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:51.619 [2024-11-17 02:56:59.878420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:51.619 [2024-11-17 02:56:59.885298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:51.619 [2024-11-17 02:56:59.885354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:51.619 [2024-11-17 02:56:59.885381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:51.619 [2024-11-17 02:56:59.891737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:51.619 [2024-11-17 02:56:59.891780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:51.619 [2024-11-17 02:56:59.891807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:51.619 [2024-11-17 02:56:59.896798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:51.619 [2024-11-17 02:56:59.896841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:51.619 [2024-11-17 02:56:59.896868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:51.619 [2024-11-17 02:56:59.900401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on
tqpair=(0x6150001f2a00) 00:36:51.619 [2024-11-17 02:56:59.900442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.619 [2024-11-17 02:56:59.900468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:51.619 [2024-11-17 02:56:59.906295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.619 [2024-11-17 02:56:59.906338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.619 [2024-11-17 02:56:59.906363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:51.619 [2024-11-17 02:56:59.912452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.619 [2024-11-17 02:56:59.912494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.619 [2024-11-17 02:56:59.912521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:51.619 [2024-11-17 02:56:59.917349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.619 [2024-11-17 02:56:59.917405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.619 [2024-11-17 02:56:59.917445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:51.619 [2024-11-17 02:56:59.925456] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.619 [2024-11-17 02:56:59.925505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.619 [2024-11-17 02:56:59.925534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:51.619 [2024-11-17 02:56:59.933600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.619 [2024-11-17 02:56:59.933649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.619 [2024-11-17 02:56:59.933679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:51.619 [2024-11-17 02:56:59.940697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.619 [2024-11-17 02:56:59.940753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.619 [2024-11-17 02:56:59.940777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:51.619 [2024-11-17 02:56:59.947990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.619 [2024-11-17 02:56:59.948055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.619 [2024-11-17 02:56:59.948104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:51.619 [2024-11-17 02:56:59.955286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.619 [2024-11-17 02:56:59.955340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.619 [2024-11-17 02:56:59.955365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:51.619 [2024-11-17 02:56:59.962199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.619 [2024-11-17 02:56:59.962258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.619 [2024-11-17 02:56:59.962284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:51.619 [2024-11-17 02:56:59.967535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.619 [2024-11-17 02:56:59.967576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.619 [2024-11-17 02:56:59.967603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:51.619 [2024-11-17 02:56:59.971316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.619 [2024-11-17 02:56:59.971357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.619 [2024-11-17 02:56:59.971383] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:51.619 [2024-11-17 02:56:59.976881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.619 [2024-11-17 02:56:59.976929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.619 [2024-11-17 02:56:59.976958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:51.620 [2024-11-17 02:56:59.983230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.620 [2024-11-17 02:56:59.983273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.620 [2024-11-17 02:56:59.983299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:51.620 [2024-11-17 02:56:59.987873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.620 [2024-11-17 02:56:59.987920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.620 [2024-11-17 02:56:59.987950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:51.620 [2024-11-17 02:56:59.993018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.620 [2024-11-17 02:56:59.993060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:224 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:36:51.620 [2024-11-17 02:56:59.993086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:51.620 [2024-11-17 02:56:59.997977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.620 [2024-11-17 02:56:59.998019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.620 [2024-11-17 02:56:59.998046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:51.620 [2024-11-17 02:57:00.004218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.620 [2024-11-17 02:57:00.004273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.620 [2024-11-17 02:57:00.004302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:51.620 [2024-11-17 02:57:00.011440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.620 [2024-11-17 02:57:00.011488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.620 [2024-11-17 02:57:00.011526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:51.620 [2024-11-17 02:57:00.019025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.620 [2024-11-17 02:57:00.019072] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.620 [2024-11-17 02:57:00.019115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:51.620 [2024-11-17 02:57:00.027557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.620 [2024-11-17 02:57:00.027603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.620 [2024-11-17 02:57:00.027629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:51.620 [2024-11-17 02:57:00.035103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.620 [2024-11-17 02:57:00.035167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.620 [2024-11-17 02:57:00.035194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:51.620 [2024-11-17 02:57:00.042223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.620 [2024-11-17 02:57:00.042271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.620 [2024-11-17 02:57:00.042298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:51.620 [2024-11-17 02:57:00.048948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150001f2a00) 00:36:51.620 [2024-11-17 02:57:00.048993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.620 [2024-11-17 02:57:00.049020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:51.620 [2024-11-17 02:57:00.055498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.620 [2024-11-17 02:57:00.055555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.620 [2024-11-17 02:57:00.055582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:51.620 [2024-11-17 02:57:00.062994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.620 [2024-11-17 02:57:00.063038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.620 [2024-11-17 02:57:00.063064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:51.620 [2024-11-17 02:57:00.071826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.620 [2024-11-17 02:57:00.071875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.620 [2024-11-17 02:57:00.071905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:51.879 [2024-11-17 02:57:00.079393] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.879 [2024-11-17 02:57:00.079437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.879 [2024-11-17 02:57:00.079463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:51.879 4649.00 IOPS, 581.12 MiB/s [2024-11-17T01:57:00.339Z] [2024-11-17 02:57:00.088047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.879 [2024-11-17 02:57:00.088109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.879 [2024-11-17 02:57:00.088139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:51.879 00:36:51.879 Latency(us) 00:36:51.879 [2024-11-17T01:57:00.339Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:51.879 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:36:51.879 nvme0n1 : 2.00 4648.52 581.06 0.00 0.00 3434.75 922.36 13301.38 00:36:51.879 [2024-11-17T01:57:00.339Z] =================================================================================================================== 00:36:51.879 [2024-11-17T01:57:00.339Z] Total : 4648.52 581.06 0.00 0.00 3434.75 922.36 13301.38 00:36:51.879 { 00:36:51.879 "results": [ 00:36:51.879 { 00:36:51.879 "job": "nvme0n1", 00:36:51.879 "core_mask": "0x2", 00:36:51.879 "workload": "randread", 00:36:51.879 "status": "finished", 00:36:51.879 "queue_depth": 16, 00:36:51.879 "io_size": 131072, 00:36:51.879 "runtime": 2.003649, 00:36:51.879 "iops": 4648.518777490469, 00:36:51.879 "mibps": 581.0648471863086, 
00:36:51.879 "io_failed": 0, 00:36:51.879 "io_timeout": 0, 00:36:51.879 "avg_latency_us": 3434.7542741711004, 00:36:51.879 "min_latency_us": 922.3585185185185, 00:36:51.879 "max_latency_us": 13301.38074074074 00:36:51.879 } 00:36:51.879 ], 00:36:51.879 "core_count": 1 00:36:51.879 } 00:36:51.879 02:57:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:36:51.879 02:57:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:36:51.879 02:57:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:36:51.879 | .driver_specific 00:36:51.879 | .nvme_error 00:36:51.879 | .status_code 00:36:51.879 | .command_transient_transport_error' 00:36:51.879 02:57:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:36:52.138 02:57:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 301 > 0 )) 00:36:52.138 02:57:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3132949 00:36:52.138 02:57:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3132949 ']' 00:36:52.138 02:57:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3132949 00:36:52.138 02:57:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:36:52.138 02:57:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:52.138 02:57:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3132949 00:36:52.138 02:57:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:36:52.138 02:57:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:52.138 02:57:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3132949' 00:36:52.138 killing process with pid 3132949 00:36:52.138 02:57:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3132949 00:36:52.138 Received shutdown signal, test time was about 2.000000 seconds 00:36:52.138 00:36:52.138 Latency(us) 00:36:52.138 [2024-11-17T01:57:00.598Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:52.138 [2024-11-17T01:57:00.598Z] =================================================================================================================== 00:36:52.138 [2024-11-17T01:57:00.598Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:52.138 02:57:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3132949 00:36:53.073 02:57:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:36:53.073 02:57:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:36:53.073 02:57:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:36:53.073 02:57:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:36:53.073 02:57:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:36:53.073 02:57:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3133670 00:36:53.073 02:57:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:36:53.073 02:57:01 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3133670 /var/tmp/bperf.sock 00:36:53.073 02:57:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3133670 ']' 00:36:53.073 02:57:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:53.073 02:57:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:53.073 02:57:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:53.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:53.073 02:57:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:53.073 02:57:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:53.073 [2024-11-17 02:57:01.428321] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:36:53.073 [2024-11-17 02:57:01.428480] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3133670 ] 00:36:53.332 [2024-11-17 02:57:01.570664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:53.332 [2024-11-17 02:57:01.700752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:54.266 02:57:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:54.266 02:57:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:36:54.266 02:57:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:54.266 02:57:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:54.266 02:57:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:36:54.266 02:57:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.266 02:57:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:54.266 02:57:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.266 02:57:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:54.266 02:57:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:54.832 nvme0n1 00:36:54.832 02:57:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:36:54.832 02:57:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.832 02:57:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:54.832 02:57:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.832 02:57:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:36:54.832 02:57:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:54.832 Running I/O for 2 seconds... 
00:36:54.832 [2024-11-17 02:57:03.284735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016beaab8 00:36:54.832 [2024-11-17 02:57:03.286835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:13415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:54.832 [2024-11-17 02:57:03.286896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.091 [2024-11-17 02:57:03.300977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be5a90 00:36:55.091 [2024-11-17 02:57:03.303025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:7682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:55.091 [2024-11-17 02:57:03.303072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.091 [2024-11-17 02:57:03.317878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf2948 00:36:55.091 [2024-11-17 02:57:03.319341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:7249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:55.091 [2024-11-17 02:57:03.319381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:36:55.091 [2024-11-17 02:57:03.333521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be99d8 00:36:55.091 [2024-11-17 02:57:03.334959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:13717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:55.091 [2024-11-17 02:57:03.335003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:36:55.091 [2024-11-17 02:57:03.354310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec840 00:36:55.091 [2024-11-17 02:57:03.356656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:18503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:55.091 [2024-11-17 02:57:03.356700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:36:55.091 [2024-11-17 02:57:03.366452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf1868 00:36:55.091 [2024-11-17 02:57:03.367587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:11988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:55.091 [2024-11-17 02:57:03.367632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:36:55.091 [2024-11-17 02:57:03.386932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf5be8 00:36:55.091 [2024-11-17 02:57:03.388959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:1526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:55.091 [2024-11-17 02:57:03.389004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:36:55.091 [2024-11-17 02:57:03.402441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be38d0 00:36:55.091 [2024-11-17 02:57:03.404333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:15599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:55.091 [2024-11-17 02:57:03.404388] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:36:55.091 [2024-11-17 02:57:03.419383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf4b08 00:36:55.091 [2024-11-17 02:57:03.421021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:22031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:55.091 [2024-11-17 02:57:03.421064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:36:55.091 [2024-11-17 02:57:03.436118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be95a0 00:36:55.091 [2024-11-17 02:57:03.437123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:21746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:55.091 [2024-11-17 02:57:03.437181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:36:55.091 [2024-11-17 02:57:03.452469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf5378 00:36:55.091 [2024-11-17 02:57:03.453857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:55.091 [2024-11-17 02:57:03.453901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.091 [2024-11-17 02:57:03.469369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be7c50 00:36:55.091 [2024-11-17 02:57:03.471266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:11187 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:36:55.091 [2024-11-17 02:57:03.471305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:36:55.091 [2024-11-17 02:57:03.484739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:36:55.091 [2024-11-17 02:57:03.486494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:3310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:55.091 [2024-11-17 02:57:03.486538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:36:55.091 [2024-11-17 02:57:03.501846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf1ca0 00:36:55.091 [2024-11-17 02:57:03.503434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:14019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:55.091 [2024-11-17 02:57:03.503487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:36:55.091 [2024-11-17 02:57:03.518487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfb048 00:36:55.091 [2024-11-17 02:57:03.519443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:3346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:55.091 [2024-11-17 02:57:03.519513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:36:55.091 [2024-11-17 02:57:03.537188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf7970 00:36:55.091 [2024-11-17 02:57:03.539386] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:55.091 [2024-11-17 02:57:03.539440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:55.350 [2024-11-17 02:57:03.553444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf81e0 00:36:55.350 [2024-11-17 02:57:03.555626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:16957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:55.350 [2024-11-17 02:57:03.555671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:36:55.350 [2024-11-17 02:57:03.569084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:55.350 [2024-11-17 02:57:03.569349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:55.350 [2024-11-17 02:57:03.569397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:55.350 [2024-11-17 02:57:03.587646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:55.350 [2024-11-17 02:57:03.587856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:14527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:55.350 [2024-11-17 02:57:03.587899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:55.350 [2024-11-17 02:57:03.606198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:55.350 [2024-11-17 
02:57:03.606412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:18469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:55.350 [2024-11-17 02:57:03.606456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:55.350 [2024-11-17 02:57:03.624942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:55.350 [2024-11-17 02:57:03.625238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:5001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:55.350 [2024-11-17 02:57:03.625277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:55.350 [2024-11-17 02:57:03.643518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:55.350 [2024-11-17 02:57:03.643767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:19343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:55.350 [2024-11-17 02:57:03.643810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:55.350 [2024-11-17 02:57:03.661938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:55.350 [2024-11-17 02:57:03.662186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:10400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:55.350 [2024-11-17 02:57:03.662225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:55.350 [2024-11-17 02:57:03.680319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:55.350 [2024-11-17 02:57:03.680584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:13540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:55.350 [2024-11-17 02:57:03.680627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:55.350 [2024-11-17 02:57:03.698421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:55.350 [2024-11-17 02:57:03.698666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:5261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:55.350 [2024-11-17 02:57:03.698708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:55.350 [2024-11-17 02:57:03.716501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:55.350 [2024-11-17 02:57:03.716753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:55.350 [2024-11-17 02:57:03.716795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:55.350 [2024-11-17 02:57:03.734576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:55.350 [2024-11-17 02:57:03.734814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:20681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:55.350 [2024-11-17 02:57:03.734856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:55.350 [2024-11-17 02:57:03.752619] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:55.350 [2024-11-17 02:57:03.752860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:1835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:55.350 [2024-11-17 02:57:03.752902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:55.351 [2024-11-17 02:57:03.770610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:55.351 [2024-11-17 02:57:03.770866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:9523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:55.351 [2024-11-17 02:57:03.770908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:55.351 [2024-11-17 02:57:03.788668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:55.351 [2024-11-17 02:57:03.788942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:10804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:55.351 [2024-11-17 02:57:03.788985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:55.351 [2024-11-17 02:57:03.807024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:55.351 [2024-11-17 02:57:03.807299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:8317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:55.351 [2024-11-17 02:57:03.807339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 
sqhd:006c p:0 m:0 dnr:0 00:36:55.609 [2024-11-17 02:57:03.825347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:55.609 [2024-11-17 02:57:03.825621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:57 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:55.609 [2024-11-17 02:57:03.825665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:55.609 [2024-11-17 02:57:03.843679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:55.609 [2024-11-17 02:57:03.843931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:2969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:55.609 [2024-11-17 02:57:03.843973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:55.609 [2024-11-17 02:57:03.861741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:55.609 [2024-11-17 02:57:03.861990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:22954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:55.609 [2024-11-17 02:57:03.862032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:55.609 [2024-11-17 02:57:03.879851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:55.609 [2024-11-17 02:57:03.880114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:9263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:55.609 [2024-11-17 02:57:03.880170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:55.609 [2024-11-17 02:57:03.898060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:55.609 [2024-11-17 02:57:03.898323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:17406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:55.609 [2024-11-17 02:57:03.898361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:55.609 [2024-11-17 02:57:03.916304] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:55.609 [2024-11-17 02:57:03.916576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:25098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:55.609 [2024-11-17 02:57:03.916617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:55.609 [2024-11-17 02:57:03.934382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:55.609 [2024-11-17 02:57:03.934648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:4783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:55.609 [2024-11-17 02:57:03.934698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:55.609 [2024-11-17 02:57:03.952439] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:55.609 [2024-11-17 02:57:03.952707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:11141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:55.609 [2024-11-17 
02:57:03.952750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:55.609 [2024-11-17 02:57:03.970413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:55.609 [2024-11-17 02:57:03.970676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:55.609 [2024-11-17 02:57:03.970718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:55.609 [2024-11-17 02:57:03.988467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:55.609 [2024-11-17 02:57:03.988719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:13568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:55.609 [2024-11-17 02:57:03.988762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:55.609 [2024-11-17 02:57:04.006530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:55.610 [2024-11-17 02:57:04.006771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:55.610 [2024-11-17 02:57:04.006812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:55.610 [2024-11-17 02:57:04.024670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:55.610 [2024-11-17 02:57:04.024914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:19279 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:55.610 [2024-11-17 02:57:04.024955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:55.610 [2024-11-17 02:57:04.042753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:55.610 [2024-11-17 02:57:04.043040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:2268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:55.610 [2024-11-17 02:57:04.043083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:55.610 [2024-11-17 02:57:04.061032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:55.610 [2024-11-17 02:57:04.061297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:2344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:55.610 [2024-11-17 02:57:04.061336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:55.869 [2024-11-17 02:57:04.079142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:55.869 [2024-11-17 02:57:04.079386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:15354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:55.869 [2024-11-17 02:57:04.079444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:55.869 [2024-11-17 02:57:04.097272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:55.869 [2024-11-17 02:57:04.097546] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:24467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:55.869 [2024-11-17 02:57:04.097588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:55.869 [2024-11-17 02:57:04.115372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:55.869 [2024-11-17 02:57:04.115628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:55.869 [2024-11-17 02:57:04.115670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:55.869 [2024-11-17 02:57:04.133354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:55.869 [2024-11-17 02:57:04.133621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:8541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:55.869 [2024-11-17 02:57:04.133662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:55.869 [2024-11-17 02:57:04.151321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:55.869 [2024-11-17 02:57:04.151576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:23488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:55.869 [2024-11-17 02:57:04.151619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:55.869 [2024-11-17 02:57:04.169372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with 
pdu=0x200016bebb98 00:36:55.869 [2024-11-17 02:57:04.169637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:4484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:55.869 [2024-11-17 02:57:04.169680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:55.869 [2024-11-17 02:57:04.187461] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:55.869 [2024-11-17 02:57:04.187704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:3841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:55.869 [2024-11-17 02:57:04.187747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:55.869 [2024-11-17 02:57:04.205487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:55.869 [2024-11-17 02:57:04.205743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:14506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:55.869 [2024-11-17 02:57:04.205785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:55.869 [2024-11-17 02:57:04.223580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:55.869 [2024-11-17 02:57:04.223820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:1044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:55.869 [2024-11-17 02:57:04.223861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:55.869 [2024-11-17 02:57:04.241609] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:55.869 [2024-11-17 02:57:04.241866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:6532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:55.869 [2024-11-17 02:57:04.241907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:55.869 [2024-11-17 02:57:04.259616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:55.869 [2024-11-17 02:57:04.259859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:5490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:55.869 [2024-11-17 02:57:04.259900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:55.869 14346.00 IOPS, 56.04 MiB/s [2024-11-17T01:57:04.329Z] [2024-11-17 02:57:04.277577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:55.869 [2024-11-17 02:57:04.277820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:1098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:55.869 [2024-11-17 02:57:04.277862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:55.869 [2024-11-17 02:57:04.295557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:55.869 [2024-11-17 02:57:04.295841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:2541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:55.869 [2024-11-17 02:57:04.295881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:55.869 [2024-11-17 02:57:04.313791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:55.869 [2024-11-17 02:57:04.314037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:2345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:55.869 [2024-11-17 02:57:04.314080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:56.128 [2024-11-17 02:57:04.331768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:56.128 [2024-11-17 02:57:04.332019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:5799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.128 [2024-11-17 02:57:04.332067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:56.128 [2024-11-17 02:57:04.349882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:56.128 [2024-11-17 02:57:04.350147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:3499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.128 [2024-11-17 02:57:04.350186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:56.128 [2024-11-17 02:57:04.367858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:56.128 [2024-11-17 02:57:04.368107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.128 [2024-11-17 02:57:04.368165] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:56.128 [2024-11-17 02:57:04.386003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:56.128 [2024-11-17 02:57:04.386261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:3380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.128 [2024-11-17 02:57:04.386299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:56.128 [2024-11-17 02:57:04.403982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:56.128 [2024-11-17 02:57:04.404239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:24927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.128 [2024-11-17 02:57:04.404277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:56.128 [2024-11-17 02:57:04.422284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:56.128 [2024-11-17 02:57:04.422542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:5222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.128 [2024-11-17 02:57:04.422584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:56.128 [2024-11-17 02:57:04.440643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:56.128 [2024-11-17 02:57:04.440889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:24707 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:36:56.128 [2024-11-17 02:57:04.440931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:56.128 [2024-11-17 02:57:04.458719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:56.128 [2024-11-17 02:57:04.458964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:9113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.128 [2024-11-17 02:57:04.459006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:56.128 [2024-11-17 02:57:04.476864] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:56.128 [2024-11-17 02:57:04.477115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:5899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.129 [2024-11-17 02:57:04.477172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:56.129 [2024-11-17 02:57:04.494941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:56.129 [2024-11-17 02:57:04.495183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:20552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.129 [2024-11-17 02:57:04.495222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:56.129 [2024-11-17 02:57:04.513117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:56.129 [2024-11-17 02:57:04.513359] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.129 [2024-11-17 02:57:04.513413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:56.129 [2024-11-17 02:57:04.531179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:56.129 [2024-11-17 02:57:04.531374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:9599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.129 [2024-11-17 02:57:04.531432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:56.129 [2024-11-17 02:57:04.549212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:56.129 [2024-11-17 02:57:04.549490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:21961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.129 [2024-11-17 02:57:04.549532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:56.129 [2024-11-17 02:57:04.567376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:56.129 [2024-11-17 02:57:04.567634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:14155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.129 [2024-11-17 02:57:04.567677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:56.129 [2024-11-17 02:57:04.585550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:56.129 [2024-11-17 
02:57:04.585801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:17659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.129 [2024-11-17 02:57:04.585858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:56.387 [2024-11-17 02:57:04.603546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:56.387 [2024-11-17 02:57:04.603788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:8500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.387 [2024-11-17 02:57:04.603832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:56.387 [2024-11-17 02:57:04.621584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:56.387 [2024-11-17 02:57:04.621825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:15978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.387 [2024-11-17 02:57:04.621867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:56.387 [2024-11-17 02:57:04.639795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:56.387 [2024-11-17 02:57:04.640037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:2973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.387 [2024-11-17 02:57:04.640085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:56.387 [2024-11-17 02:57:04.658170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:56.387 [2024-11-17 02:57:04.658411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:25093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.387 [2024-11-17 02:57:04.658469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:56.388 [2024-11-17 02:57:04.676656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:56.388 [2024-11-17 02:57:04.676899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:16448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.388 [2024-11-17 02:57:04.676956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:56.388 [2024-11-17 02:57:04.694918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:56.388 [2024-11-17 02:57:04.695161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:7788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.388 [2024-11-17 02:57:04.695200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:56.388 [2024-11-17 02:57:04.713013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:56.388 [2024-11-17 02:57:04.713283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:9623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.388 [2024-11-17 02:57:04.713329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:56.388 [2024-11-17 
02:57:04.731248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:56.388 [2024-11-17 02:57:04.731533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.388 [2024-11-17 02:57:04.731576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:56.388 [2024-11-17 02:57:04.749414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:56.388 [2024-11-17 02:57:04.749674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.388 [2024-11-17 02:57:04.749716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:56.388 [2024-11-17 02:57:04.767705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:56.388 [2024-11-17 02:57:04.767949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.388 [2024-11-17 02:57:04.767992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:56.388 [2024-11-17 02:57:04.785881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:56.388 [2024-11-17 02:57:04.786159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:5573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.388 [2024-11-17 02:57:04.786196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 
cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:56.388 [2024-11-17 02:57:04.803939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:56.388 [2024-11-17 02:57:04.804236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.388 [2024-11-17 02:57:04.804274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:56.388 [2024-11-17 02:57:04.822341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:56.388 [2024-11-17 02:57:04.822620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:10835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.388 [2024-11-17 02:57:04.822663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:56.388 [2024-11-17 02:57:04.840620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:56.388 [2024-11-17 02:57:04.840860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:11777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.388 [2024-11-17 02:57:04.840903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:56.646 [2024-11-17 02:57:04.858632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:56.646 [2024-11-17 02:57:04.858873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:4836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.647 [2024-11-17 02:57:04.858916] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:56.647 [2024-11-17 02:57:04.876725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:56.647 [2024-11-17 02:57:04.876982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:17542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.647 [2024-11-17 02:57:04.877024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:56.647 [2024-11-17 02:57:04.894874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:56.647 [2024-11-17 02:57:04.895141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:18002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.647 [2024-11-17 02:57:04.895180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:56.647 [2024-11-17 02:57:04.913008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:56.647 [2024-11-17 02:57:04.913276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.647 [2024-11-17 02:57:04.913314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:56.647 [2024-11-17 02:57:04.931065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:56.647 [2024-11-17 02:57:04.931326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.647 [2024-11-17 
02:57:04.931364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:56.647 [2024-11-17 02:57:04.949176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:56.647 [2024-11-17 02:57:04.949418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:10762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.647 [2024-11-17 02:57:04.949475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:56.647 [2024-11-17 02:57:04.967155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:56.647 [2024-11-17 02:57:04.967477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:10133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.647 [2024-11-17 02:57:04.967519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:56.647 [2024-11-17 02:57:04.985163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:56.647 [2024-11-17 02:57:04.985407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:25161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.647 [2024-11-17 02:57:04.985463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:56.647 [2024-11-17 02:57:05.003032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:56.647 [2024-11-17 02:57:05.003300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:6607 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.647 [2024-11-17 02:57:05.003337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:56.647 [2024-11-17 02:57:05.020975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:56.647 [2024-11-17 02:57:05.021261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:1424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.647 [2024-11-17 02:57:05.021305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:56.647 [2024-11-17 02:57:05.038952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:56.647 [2024-11-17 02:57:05.039212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:3471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.647 [2024-11-17 02:57:05.039250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:56.647 [2024-11-17 02:57:05.056953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:56.647 [2024-11-17 02:57:05.057255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:3514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.647 [2024-11-17 02:57:05.057293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:56.647 [2024-11-17 02:57:05.075242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:56.647 [2024-11-17 02:57:05.075514] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:10753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.647 [2024-11-17 02:57:05.075557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:56.647 [2024-11-17 02:57:05.093286] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:56.647 [2024-11-17 02:57:05.093559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:24734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.647 [2024-11-17 02:57:05.093601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:56.905 [2024-11-17 02:57:05.111191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:56.905 [2024-11-17 02:57:05.111472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:23267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.905 [2024-11-17 02:57:05.111516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:56.905 [2024-11-17 02:57:05.129392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:56.905 [2024-11-17 02:57:05.129648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:9211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.905 [2024-11-17 02:57:05.129689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:56.905 [2024-11-17 02:57:05.147543] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with 
pdu=0x200016bebb98 00:36:56.906 [2024-11-17 02:57:05.147797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:17394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.906 [2024-11-17 02:57:05.147840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:56.906 [2024-11-17 02:57:05.165610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:56.906 [2024-11-17 02:57:05.165820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.906 [2024-11-17 02:57:05.165862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:56.906 [2024-11-17 02:57:05.183797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:56.906 [2024-11-17 02:57:05.184049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:3940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.906 [2024-11-17 02:57:05.184091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:56.906 [2024-11-17 02:57:05.201899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:56.906 [2024-11-17 02:57:05.202154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:5978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.906 [2024-11-17 02:57:05.202192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:56.906 [2024-11-17 02:57:05.220006] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:56.906 [2024-11-17 02:57:05.220252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:11085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.906 [2024-11-17 02:57:05.220290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:56.906 [2024-11-17 02:57:05.238114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:56.906 [2024-11-17 02:57:05.238383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:15331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.906 [2024-11-17 02:57:05.238444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:56.906 [2024-11-17 02:57:05.256501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:56.906 [2024-11-17 02:57:05.256743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:21014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.906 [2024-11-17 02:57:05.256785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:56.906 14225.50 IOPS, 55.57 MiB/s [2024-11-17T01:57:05.366Z] [2024-11-17 02:57:05.274538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:36:56.906 [2024-11-17 02:57:05.274777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:10965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:56.906 [2024-11-17 02:57:05.274819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:91 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:56.906 00:36:56.906 Latency(us) 00:36:56.906 [2024-11-17T01:57:05.366Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:56.906 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:56.906 nvme0n1 : 2.01 14226.81 55.57 0.00 0.00 8970.53 4029.25 20583.16 00:36:56.906 [2024-11-17T01:57:05.366Z] =================================================================================================================== 00:36:56.906 [2024-11-17T01:57:05.366Z] Total : 14226.81 55.57 0.00 0.00 8970.53 4029.25 20583.16 00:36:56.906 { 00:36:56.906 "results": [ 00:36:56.906 { 00:36:56.906 "job": "nvme0n1", 00:36:56.906 "core_mask": "0x2", 00:36:56.906 "workload": "randwrite", 00:36:56.906 "status": "finished", 00:36:56.906 "queue_depth": 128, 00:36:56.906 "io_size": 4096, 00:36:56.906 "runtime": 2.011062, 00:36:56.906 "iops": 14226.811505562733, 00:36:56.906 "mibps": 55.573482443604426, 00:36:56.906 "io_failed": 0, 00:36:56.906 "io_timeout": 0, 00:36:56.906 "avg_latency_us": 8970.530830385102, 00:36:56.906 "min_latency_us": 4029.2503703703705, 00:36:56.906 "max_latency_us": 20583.158518518518 00:36:56.906 } 00:36:56.906 ], 00:36:56.906 "core_count": 1 00:36:56.906 } 00:36:56.906 02:57:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:36:56.906 02:57:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:36:56.906 02:57:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:36:56.906 | .driver_specific 00:36:56.906 | .nvme_error 00:36:56.906 | .status_code 00:36:56.906 | .command_transient_transport_error' 00:36:56.906 02:57:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b 
nvme0n1 00:36:57.164 02:57:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 112 > 0 )) 00:36:57.164 02:57:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3133670 00:36:57.164 02:57:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3133670 ']' 00:36:57.164 02:57:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3133670 00:36:57.164 02:57:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:36:57.164 02:57:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:57.164 02:57:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3133670 00:36:57.164 02:57:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:57.164 02:57:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:57.164 02:57:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3133670' 00:36:57.164 killing process with pid 3133670 00:36:57.164 02:57:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3133670 00:36:57.164 Received shutdown signal, test time was about 2.000000 seconds 00:36:57.164 00:36:57.164 Latency(us) 00:36:57.164 [2024-11-17T01:57:05.624Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:57.164 [2024-11-17T01:57:05.624Z] =================================================================================================================== 00:36:57.164 [2024-11-17T01:57:05.624Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:57.164 02:57:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@978 -- # wait 3133670 00:36:58.099 02:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:36:58.099 02:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:36:58.099 02:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:36:58.099 02:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:36:58.099 02:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:36:58.099 02:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3134273 00:36:58.099 02:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:36:58.099 02:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3134273 /var/tmp/bperf.sock 00:36:58.099 02:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3134273 ']' 00:36:58.099 02:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:58.099 02:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:58.099 02:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:58.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:36:58.099 02:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:58.099 02:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:58.358 [2024-11-17 02:57:06.574939] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:36:58.358 [2024-11-17 02:57:06.575069] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3134273 ] 00:36:58.358 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:58.358 Zero copy mechanism will not be used. 00:36:58.358 [2024-11-17 02:57:06.710930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:58.616 [2024-11-17 02:57:06.841674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:59.182 02:57:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:59.182 02:57:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:36:59.182 02:57:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:59.182 02:57:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:59.440 02:57:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:36:59.440 02:57:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:59.440 02:57:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@10 -- # set +x 00:36:59.440 02:57:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:59.440 02:57:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:59.440 02:57:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:00.006 nvme0n1 00:37:00.006 02:57:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:37:00.006 02:57:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:00.006 02:57:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:00.007 02:57:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:00.007 02:57:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:37:00.007 02:57:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:00.007 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:00.007 Zero copy mechanism will not be used. 00:37:00.007 Running I/O for 2 seconds... 
00:37:00.007 [2024-11-17 02:57:08.355464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:00.007 [2024-11-17 02:57:08.355757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:00.007 [2024-11-17 02:57:08.355813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:00.007 [2024-11-17 02:57:08.363826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:00.007 [2024-11-17 02:57:08.363970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:00.007 [2024-11-17 02:57:08.364025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:00.007 [2024-11-17 02:57:08.371368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:00.007 [2024-11-17 02:57:08.371524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:00.007 [2024-11-17 02:57:08.371569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:00.007 [2024-11-17 02:57:08.378852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:00.007 [2024-11-17 02:57:08.378986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:00.007 [2024-11-17 02:57:08.379030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:00.007 [2024-11-17 02:57:08.386167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:00.007 [2024-11-17 02:57:08.386283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:00.007 [2024-11-17 02:57:08.386321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:00.007 [2024-11-17 02:57:08.393430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:00.007 [2024-11-17 02:57:08.393574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:00.007 [2024-11-17 02:57:08.393618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:00.007 [2024-11-17 02:57:08.400734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:00.007 [2024-11-17 02:57:08.400873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:00.007 [2024-11-17 02:57:08.400917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:00.007 [2024-11-17 02:57:08.407973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:00.007 [2024-11-17 02:57:08.408111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:00.007 [2024-11-17 02:57:08.408168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:00.007 [2024-11-17 02:57:08.415328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:00.007 [2024-11-17 02:57:08.415451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:00.007 [2024-11-17 02:57:08.415494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:00.007 [2024-11-17 02:57:08.423257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:00.007 [2024-11-17 02:57:08.423490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:00.007 [2024-11-17 02:57:08.423533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:00.007 [2024-11-17 02:57:08.431242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:00.007 [2024-11-17 02:57:08.431469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:00.007 [2024-11-17 02:57:08.431512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:00.007 [2024-11-17 02:57:08.439824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:00.007 [2024-11-17 02:57:08.440012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:00.007 [2024-11-17 02:57:08.440055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:00.007 [2024-11-17 02:57:08.447737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:00.007 [2024-11-17 02:57:08.447861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:00.007 [2024-11-17 02:57:08.447904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:00.007 [2024-11-17 02:57:08.455083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:00.007 [2024-11-17 02:57:08.455225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:00.007 [2024-11-17 02:57:08.455263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:00.007 [2024-11-17 02:57:08.462327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:00.007 [2024-11-17 02:57:08.462447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:00.007 [2024-11-17 02:57:08.462500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:00.267 [2024-11-17 02:57:08.469527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:00.267 [2024-11-17 02:57:08.469695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:00.267 [2024-11-17 02:57:08.469750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:00.267 [2024-11-17 02:57:08.477105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:00.267 [2024-11-17 02:57:08.477215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:00.267 [2024-11-17 02:57:08.477260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:00.267 [2024-11-17 02:57:08.484688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:00.267 [2024-11-17 02:57:08.484852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:00.267 [2024-11-17 02:57:08.484896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:00.267 [2024-11-17 02:57:08.492036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:00.267 [2024-11-17 02:57:08.492212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:00.267 [2024-11-17 02:57:08.492251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:00.267 [2024-11-17 02:57:08.499509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:00.267 [2024-11-17 02:57:08.499631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:00.267 [2024-11-17 02:57:08.499676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:00.267 [2024-11-17 02:57:08.507327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:00.267 [2024-11-17 02:57:08.507484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:00.267 [2024-11-17 02:57:08.507529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:00.267 [2024-11-17 02:57:08.514944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:00.267 [2024-11-17 02:57:08.515064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:00.267 [2024-11-17 02:57:08.515125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:00.267 [2024-11-17 02:57:08.522610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:00.267 [2024-11-17 02:57:08.522725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:00.267 [2024-11-17 02:57:08.522768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:00.267 [2024-11-17 02:57:08.530463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:00.267 [2024-11-17 02:57:08.530599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:00.267 [2024-11-17 02:57:08.530642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:00.267 [2024-11-17 02:57:08.538335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:00.267 [2024-11-17 02:57:08.538556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:00.267 [2024-11-17 02:57:08.538599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:00.267 [2024-11-17 02:57:08.546541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:00.267 [2024-11-17 02:57:08.546720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:00.267 [2024-11-17 02:57:08.546763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:00.267 [2024-11-17 02:57:08.554974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:00.267 [2024-11-17 02:57:08.555195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:00.267 [2024-11-17 02:57:08.555237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:00.267 [2024-11-17 02:57:08.563471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:00.267 [2024-11-17 02:57:08.563729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:00.267 [2024-11-17 02:57:08.563782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:00.267 [2024-11-17 02:57:08.571796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:00.267 [2024-11-17 02:57:08.571999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:00.267 [2024-11-17 02:57:08.572043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:00.268 [2024-11-17 02:57:08.580292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:00.268 [2024-11-17 02:57:08.580463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:00.268 [2024-11-17 02:57:08.580506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:00.268 [2024-11-17 02:57:08.588540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:00.268 [2024-11-17 02:57:08.588757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:00.268 [2024-11-17 02:57:08.588800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:00.268 [2024-11-17 02:57:08.596298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:00.268 [2024-11-17 02:57:08.596469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:00.268 [2024-11-17 02:57:08.596512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:00.268 [2024-11-17 02:57:08.603749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:00.268 [2024-11-17 02:57:08.603909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:00.268 [2024-11-17 02:57:08.603952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:00.268 [2024-11-17 02:57:08.611200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:00.268 [2024-11-17 02:57:08.611323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:00.268 [2024-11-17 02:57:08.611360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:00.268 [2024-11-17 02:57:08.619919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:00.268 [2024-11-17 02:57:08.620112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:00.268 [2024-11-17 02:57:08.620171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:00.268 [2024-11-17 02:57:08.627560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:00.268 [2024-11-17 02:57:08.627746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:00.268 [2024-11-17 02:57:08.627791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:00.268 [2024-11-17 02:57:08.634969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:00.268 [2024-11-17 02:57:08.635152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:00.268 [2024-11-17 02:57:08.635197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:00.268 [2024-11-17 02:57:08.642596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:00.268 [2024-11-17 02:57:08.642737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:00.268 [2024-11-17 02:57:08.642781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:00.268 [2024-11-17 02:57:08.650190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:00.268 [2024-11-17 02:57:08.650361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:00.268 [2024-11-17 02:57:08.650410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:00.268 [2024-11-17 02:57:08.658056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:00.268 [2024-11-17 02:57:08.658271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:00.268 [2024-11-17 02:57:08.658315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:00.268 [2024-11-17 02:57:08.666164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:00.268 [2024-11-17 02:57:08.666312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:00.268 [2024-11-17 02:57:08.666357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:00.268 [2024-11-17 02:57:08.673502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:00.268 [2024-11-17 02:57:08.673634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:00.268 [2024-11-17 02:57:08.673677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:00.268 [2024-11-17 02:57:08.681086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:00.268 [2024-11-17 02:57:08.681259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:00.268 [2024-11-17 02:57:08.681303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:00.268 [2024-11-17 02:57:08.688603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:00.268 [2024-11-17 02:57:08.688776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:00.268 [2024-11-17 02:57:08.688829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:00.268 [2024-11-17 02:57:08.696200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:00.268 [2024-11-17 02:57:08.696421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:00.268 [2024-11-17 02:57:08.696476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:00.268 [2024-11-17 02:57:08.703587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:00.268 [2024-11-17 02:57:08.703771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:00.268 [2024-11-17 02:57:08.703821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:00.268 [2024-11-17 02:57:08.711123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:00.268 [2024-11-17 02:57:08.711310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:00.268 [2024-11-17 02:57:08.711349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:00.268 [2024-11-17 02:57:08.718407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:00.268 [2024-11-17 02:57:08.718623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:00.268 [2024-11-17 02:57:08.718667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:00.268 [2024-11-17 02:57:08.725814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:00.268 [2024-11-17 02:57:08.726016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:00.268 [2024-11-17 02:57:08.726060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:00.528 [2024-11-17 02:57:08.733196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:00.528 [2024-11-17 02:57:08.733418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:00.528 [2024-11-17 02:57:08.733472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:00.528 [2024-11-17 02:57:08.740779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:00.528 [2024-11-17 02:57:08.740980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:00.528 [2024-11-17 02:57:08.741024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:00.528 [2024-11-17 02:57:08.748363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:00.528 [2024-11-17 02:57:08.748556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:00.528 [2024-11-17 02:57:08.748600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:00.528 [2024-11-17 02:57:08.755726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:00.528 [2024-11-17 02:57:08.755942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:00.528 [2024-11-17 02:57:08.755985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:00.528 [2024-11-17 02:57:08.763133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:00.528 [2024-11-17 02:57:08.763345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:00.528 [2024-11-17 02:57:08.763388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:00.528 [2024-11-17 02:57:08.770368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:00.528 [2024-11-17 02:57:08.770600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:00.528 [2024-11-17 02:57:08.770643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:00.528 [2024-11-17 02:57:08.777742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:00.528 [2024-11-17 02:57:08.777960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:00.528 [2024-11-17 02:57:08.778004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:00.528 [2024-11-17 02:57:08.785094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:00.528 [2024-11-17 02:57:08.785299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:00.528 [2024-11-17 02:57:08.785338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:00.528 [2024-11-17 02:57:08.792313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:00.528 [2024-11-17 02:57:08.792530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:00.528 [2024-11-17 02:57:08.792573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:00.528 [2024-11-17 02:57:08.799748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:00.528 [2024-11-17 02:57:08.799969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:00.528 [2024-11-17 02:57:08.800022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:00.528 [2024-11-17 02:57:08.807459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:00.528 [2024-11-17 02:57:08.807662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:00.528 [2024-11-17 02:57:08.807706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:00.528 [2024-11-17 02:57:08.814887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:00.528 [2024-11-17 02:57:08.815150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:00.528 [2024-11-17 02:57:08.815190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:00.528 [2024-11-17 02:57:08.822235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:00.528 [2024-11-17 02:57:08.822474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:00.528 [2024-11-17 02:57:08.822542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:00.528 [2024-11-17 02:57:08.829514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:00.528 [2024-11-17 02:57:08.829709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:00.528 [2024-11-17 02:57:08.829751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:00.528 [2024-11-17 02:57:08.836915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:00.528 [2024-11-17 02:57:08.837155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:00.528 [2024-11-17 02:57:08.837194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:00.528 [2024-11-17 02:57:08.844229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:00.528 [2024-11-17 02:57:08.844461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:00.528 [2024-11-17 02:57:08.844504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:00.528 [2024-11-17 02:57:08.851328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:00.528 [2024-11-17 02:57:08.851528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:00.528 [2024-11-17 02:57:08.851571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:00.528 [2024-11-17 02:57:08.858528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:00.528 [2024-11-17 02:57:08.858728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:00.528 [2024-11-17 02:57:08.858770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:00.528 [2024-11-17 02:57:08.865649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:00.528 [2024-11-17 02:57:08.865884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:00.528 [2024-11-17 02:57:08.865927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:00.528 [2024-11-17 02:57:08.873411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:00.529 [2024-11-17 02:57:08.873584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:00.529 [2024-11-17 02:57:08.873628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:00.529 [2024-11-17 02:57:08.881280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:00.529 [2024-11-17 02:57:08.881467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:00.529 [2024-11-17 02:57:08.881510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:00.529 [2024-11-17 02:57:08.888726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:00.529 [2024-11-17 02:57:08.888940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:00.529 [2024-11-17 02:57:08.888983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:00.529 [2024-11-17 02:57:08.897223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:00.529 [2024-11-17 02:57:08.897452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:00.529 [2024-11-17 02:57:08.897496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:00.529 [2024-11-17 02:57:08.904815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:00.529 [2024-11-17 02:57:08.905041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:00.529 [2024-11-17 02:57:08.905088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:00.529 [2024-11-17 02:57:08.912483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:00.529 [2024-11-17 02:57:08.912651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:00.529 [2024-11-17 02:57:08.912693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:00.529 [2024-11-17 02:57:08.919787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:00.529 [2024-11-17 02:57:08.919908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:00.529 [2024-11-17 02:57:08.919950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:00.529 [2024-11-17 02:57:08.927067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:00.529 [2024-11-17 02:57:08.927299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:00.529 [2024-11-17 02:57:08.927338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:00.529 [2024-11-17 02:57:08.934736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:00.529 [2024-11-17 02:57:08.934867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:00.529 [2024-11-17 02:57:08.934911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:00.529 [2024-11-17 02:57:08.942984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:00.529 [2024-11-17 02:57:08.943159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:00.529 [2024-11-17 02:57:08.943198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:00.529 [2024-11-17 02:57:08.950250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:00.529 [2024-11-17 02:57:08.950374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:00.529 [2024-11-17 02:57:08.950443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:00.529 [2024-11-17 02:57:08.957660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:00.529 [2024-11-17 02:57:08.957792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:00.529 [2024-11-17 02:57:08.957836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:00.529 [2024-11-17 02:57:08.964975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:00.529 [2024-11-17 02:57:08.965091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:00.529 [2024-11-17 02:57:08.965156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:00.529 [2024-11-17 02:57:08.972661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:00.529 [2024-11-17 02:57:08.972875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.529 [2024-11-17 02:57:08.972918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:00.529 [2024-11-17 02:57:08.981338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:00.529 [2024-11-17 02:57:08.981513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.529 [2024-11-17 02:57:08.981556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:00.790 [2024-11-17 02:57:08.989685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:00.790 [2024-11-17 02:57:08.989899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.790 [2024-11-17 02:57:08.989944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:00.790 [2024-11-17 02:57:08.997009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:00.790 [2024-11-17 02:57:08.997271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.790 [2024-11-17 02:57:08.997311] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:00.790 [2024-11-17 02:57:09.004248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:00.790 [2024-11-17 02:57:09.004437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.790 [2024-11-17 02:57:09.004496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:00.790 [2024-11-17 02:57:09.011560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:00.790 [2024-11-17 02:57:09.011772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.790 [2024-11-17 02:57:09.011815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:00.790 [2024-11-17 02:57:09.018909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:00.790 [2024-11-17 02:57:09.019159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.790 [2024-11-17 02:57:09.019208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:00.791 [2024-11-17 02:57:09.026052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:00.791 [2024-11-17 02:57:09.026262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:37:00.791 [2024-11-17 02:57:09.026301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:00.791 [2024-11-17 02:57:09.033317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:00.791 [2024-11-17 02:57:09.033530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.791 [2024-11-17 02:57:09.033572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:00.791 [2024-11-17 02:57:09.040766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:00.791 [2024-11-17 02:57:09.040990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.791 [2024-11-17 02:57:09.041033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:00.791 [2024-11-17 02:57:09.048140] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:00.791 [2024-11-17 02:57:09.048349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.791 [2024-11-17 02:57:09.048413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:00.791 [2024-11-17 02:57:09.055357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:00.791 [2024-11-17 02:57:09.055541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.791 [2024-11-17 02:57:09.055584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:00.791 [2024-11-17 02:57:09.062863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:00.791 [2024-11-17 02:57:09.063089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.791 [2024-11-17 02:57:09.063166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:00.791 [2024-11-17 02:57:09.070030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:00.791 [2024-11-17 02:57:09.070280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.791 [2024-11-17 02:57:09.070319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:00.791 [2024-11-17 02:57:09.077223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:00.791 [2024-11-17 02:57:09.077338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.791 [2024-11-17 02:57:09.077378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:00.791 [2024-11-17 02:57:09.084647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:00.791 [2024-11-17 
02:57:09.084880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.791 [2024-11-17 02:57:09.084923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:00.791 [2024-11-17 02:57:09.091981] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:00.791 [2024-11-17 02:57:09.092196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.791 [2024-11-17 02:57:09.092235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:00.791 [2024-11-17 02:57:09.099337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:00.791 [2024-11-17 02:57:09.099549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.791 [2024-11-17 02:57:09.099593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:00.791 [2024-11-17 02:57:09.106633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:00.791 [2024-11-17 02:57:09.106809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.791 [2024-11-17 02:57:09.106852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:00.791 [2024-11-17 02:57:09.113774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:00.791 [2024-11-17 02:57:09.114005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.791 [2024-11-17 02:57:09.114048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:00.791 [2024-11-17 02:57:09.121021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:00.791 [2024-11-17 02:57:09.121261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.791 [2024-11-17 02:57:09.121301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:00.791 [2024-11-17 02:57:09.128606] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:00.791 [2024-11-17 02:57:09.128786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.791 [2024-11-17 02:57:09.128830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:00.791 [2024-11-17 02:57:09.135991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:00.791 [2024-11-17 02:57:09.136206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.791 [2024-11-17 02:57:09.136246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:00.791 [2024-11-17 
02:57:09.143179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:00.791 [2024-11-17 02:57:09.143307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.791 [2024-11-17 02:57:09.143353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:00.791 [2024-11-17 02:57:09.150408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:00.791 [2024-11-17 02:57:09.150634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.791 [2024-11-17 02:57:09.150677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:00.791 [2024-11-17 02:57:09.157737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:00.791 [2024-11-17 02:57:09.157949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.791 [2024-11-17 02:57:09.157991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:00.791 [2024-11-17 02:57:09.165902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:00.791 [2024-11-17 02:57:09.166009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.791 [2024-11-17 02:57:09.166049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:00.791 [2024-11-17 02:57:09.173737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:00.791 [2024-11-17 02:57:09.173954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.791 [2024-11-17 02:57:09.173998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:00.791 [2024-11-17 02:57:09.181388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:00.791 [2024-11-17 02:57:09.181577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.792 [2024-11-17 02:57:09.181620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:00.792 [2024-11-17 02:57:09.188593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:00.792 [2024-11-17 02:57:09.188723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.792 [2024-11-17 02:57:09.188765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:00.792 [2024-11-17 02:57:09.196087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:00.792 [2024-11-17 02:57:09.196289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.792 [2024-11-17 02:57:09.196328] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:00.792 [2024-11-17 02:57:09.204572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:00.792 [2024-11-17 02:57:09.204752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.792 [2024-11-17 02:57:09.204801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:00.792 [2024-11-17 02:57:09.212505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:00.792 [2024-11-17 02:57:09.212687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.792 [2024-11-17 02:57:09.212730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:00.792 [2024-11-17 02:57:09.220328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:00.792 [2024-11-17 02:57:09.220565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.792 [2024-11-17 02:57:09.220608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:00.792 [2024-11-17 02:57:09.227811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:00.792 [2024-11-17 02:57:09.228037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:37:00.792 [2024-11-17 02:57:09.228088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:00.792 [2024-11-17 02:57:09.235289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:00.792 [2024-11-17 02:57:09.235499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.792 [2024-11-17 02:57:09.235542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:00.792 [2024-11-17 02:57:09.242746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:00.792 [2024-11-17 02:57:09.242981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:00.792 [2024-11-17 02:57:09.243024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:01.075 [2024-11-17 02:57:09.250552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:01.075 [2024-11-17 02:57:09.250753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:01.075 [2024-11-17 02:57:09.250821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:01.075 [2024-11-17 02:57:09.257903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:01.075 [2024-11-17 02:57:09.258160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:01.075 [2024-11-17 02:57:09.258215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:01.075 [2024-11-17 02:57:09.265318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:01.075 [2024-11-17 02:57:09.265504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:01.075 [2024-11-17 02:57:09.265550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:01.075 [2024-11-17 02:57:09.272862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:01.075 [2024-11-17 02:57:09.273006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:01.075 [2024-11-17 02:57:09.273059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:01.075 [2024-11-17 02:57:09.280470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:01.075 [2024-11-17 02:57:09.280669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:01.075 [2024-11-17 02:57:09.280723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:01.075 [2024-11-17 02:57:09.288781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:01.075 [2024-11-17 
02:57:09.288995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:01.075 [2024-11-17 02:57:09.289038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:01.075 [2024-11-17 02:57:09.297238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:01.075 [2024-11-17 02:57:09.297360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:01.075 [2024-11-17 02:57:09.297426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:01.075 [2024-11-17 02:57:09.304883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:01.075 [2024-11-17 02:57:09.305076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:01.075 [2024-11-17 02:57:09.305156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:01.075 [2024-11-17 02:57:09.312553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:01.075 [2024-11-17 02:57:09.312708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:01.075 [2024-11-17 02:57:09.312751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:01.075 [2024-11-17 02:57:09.320015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:01.075 [2024-11-17 02:57:09.320228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:01.075 [2024-11-17 02:57:09.320268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:01.075 [2024-11-17 02:57:09.327228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:01.075 [2024-11-17 02:57:09.327444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:01.075 [2024-11-17 02:57:09.327496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:01.075 [2024-11-17 02:57:09.334572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:01.075 [2024-11-17 02:57:09.334769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:01.075 [2024-11-17 02:57:09.334812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:01.075 [2024-11-17 02:57:09.341875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:01.075 [2024-11-17 02:57:09.342021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:01.075 [2024-11-17 02:57:09.342064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:01.075 [2024-11-17 
02:57:09.349225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:01.075 [2024-11-17 02:57:09.349426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:01.075 [2024-11-17 02:57:09.349486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:01.075 4088.00 IOPS, 511.00 MiB/s [2024-11-17T01:57:09.535Z] [2024-11-17 02:57:09.358155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:01.076 [2024-11-17 02:57:09.358331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:01.076 [2024-11-17 02:57:09.358369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:01.076 [2024-11-17 02:57:09.366306] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:01.076 [2024-11-17 02:57:09.366527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:01.076 [2024-11-17 02:57:09.366571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:01.076 [2024-11-17 02:57:09.373551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:01.076 [2024-11-17 02:57:09.373678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:01.076 [2024-11-17 02:57:09.373719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:01.076 [2024-11-17 02:57:09.381200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:01.076 [2024-11-17 02:57:09.381337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:01.076 [2024-11-17 02:57:09.381378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:01.076 [2024-11-17 02:57:09.388620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:01.076 [2024-11-17 02:57:09.388750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:01.076 [2024-11-17 02:57:09.388794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:01.076 [2024-11-17 02:57:09.395934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:01.076 [2024-11-17 02:57:09.396087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:01.076 [2024-11-17 02:57:09.396151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:01.076 [2024-11-17 02:57:09.403352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:01.076 [2024-11-17 02:57:09.403573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:01.076 [2024-11-17 02:57:09.403616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:01.076 [2024-11-17 02:57:09.410706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:01.076 [2024-11-17 02:57:09.410909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:01.076 [2024-11-17 02:57:09.410952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:01.076 [2024-11-17 02:57:09.418208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:01.076 [2024-11-17 02:57:09.418315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:01.076 [2024-11-17 02:57:09.418354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:01.076 [2024-11-17 02:57:09.426247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:01.076 [2024-11-17 02:57:09.426452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:01.076 [2024-11-17 02:57:09.426495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:01.076 [2024-11-17 02:57:09.433782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:01.076 [2024-11-17 02:57:09.433981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:01.076 [2024-11-17 02:57:09.434040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:01.076 [2024-11-17 02:57:09.441186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:01.076 [2024-11-17 02:57:09.441384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:01.076 [2024-11-17 02:57:09.441442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:01.076 [2024-11-17 02:57:09.448529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:01.076 [2024-11-17 02:57:09.448740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:01.076 [2024-11-17 02:57:09.448783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:01.076 [2024-11-17 02:57:09.455642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:01.076 [2024-11-17 02:57:09.455850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:01.076 [2024-11-17 02:57:09.455894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:01.076 [2024-11-17 02:57:09.462940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:01.076 [2024-11-17 02:57:09.463175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:01.076 [2024-11-17 02:57:09.463215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:01.076 [2024-11-17 02:57:09.470265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:01.076 [2024-11-17 02:57:09.470474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:01.076 [2024-11-17 02:57:09.470517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:01.076 [2024-11-17 02:57:09.477637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:01.076 [2024-11-17 02:57:09.477821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:01.076 [2024-11-17 02:57:09.477863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:01.076 [2024-11-17 02:57:09.484952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:01.076 [2024-11-17 02:57:09.485206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:01.076 [2024-11-17 02:57:09.485246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:01.076 [2024-11-17 02:57:09.492251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:01.076 [2024-11-17 02:57:09.492471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:01.076 [2024-11-17 02:57:09.492516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:01.076 [2024-11-17 02:57:09.499521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:01.076 [2024-11-17 02:57:09.499698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:01.076 [2024-11-17 02:57:09.499740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:01.076 [2024-11-17 02:57:09.506795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:01.076 [2024-11-17 02:57:09.507041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:01.076 [2024-11-17 02:57:09.507084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:01.076 [2024-11-17 02:57:09.514158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:01.076 [2024-11-17 02:57:09.514335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:01.076 [2024-11-17 02:57:09.514373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:01.076 [2024-11-17 02:57:09.521371] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:01.076 [2024-11-17 02:57:09.521603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:01.076 [2024-11-17 02:57:09.521660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:01.360 [2024-11-17 02:57:09.528907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:01.360 [2024-11-17 02:57:09.529150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:01.360 [2024-11-17 02:57:09.529208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:01.360 [2024-11-17 02:57:09.537335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:01.360 [2024-11-17 02:57:09.537544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:01.360 [2024-11-17 02:57:09.537590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:01.360 [2024-11-17 02:57:09.544501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:01.360 [2024-11-17 02:57:09.544672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:01.360 [2024-11-17 02:57:09.544713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:01.360 [2024-11-17 02:57:09.551412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:01.360 [2024-11-17 02:57:09.551572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:01.360 [2024-11-17 02:57:09.551614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:01.360 [2024-11-17 02:57:09.558639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:01.360 [2024-11-17 02:57:09.558859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:01.360 [2024-11-17 02:57:09.558903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:01.360 [2024-11-17 02:57:09.565749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:01.360 [2024-11-17 02:57:09.565968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:01.360 [2024-11-17 02:57:09.566010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:01.360 [2024-11-17 02:57:09.572969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:01.360 [2024-11-17 02:57:09.573145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:01.360 [2024-11-17 02:57:09.573185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:01.360 [2024-11-17 02:57:09.580287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:01.360 [2024-11-17 02:57:09.580484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:01.360 [2024-11-17 02:57:09.580525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:01.360 [2024-11-17 02:57:09.588511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:01.360 [2024-11-17 02:57:09.588697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:01.360 [2024-11-17 02:57:09.588737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:01.361 [2024-11-17 02:57:09.595798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:01.361 [2024-11-17 02:57:09.595970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:01.361 [2024-11-17 02:57:09.596025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:01.361 [2024-11-17 02:57:09.602874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:01.361 [2024-11-17 02:57:09.603017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:01.361 [2024-11-17 02:57:09.603060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:01.361 [2024-11-17 02:57:09.610118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:01.361 [2024-11-17 02:57:09.610237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:01.361 [2024-11-17 02:57:09.610276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:01.361 [2024-11-17 02:57:09.618107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:01.361 [2024-11-17 02:57:09.618282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:01.361 [2024-11-17 02:57:09.618321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:01.361 [2024-11-17 02:57:09.625324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:01.361 [2024-11-17 02:57:09.625487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:01.361 [2024-11-17 02:57:09.625529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:01.361 [2024-11-17 02:57:09.632774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:01.361 [2024-11-17 02:57:09.632885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:01.361 [2024-11-17 02:57:09.632929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:01.361 [2024-11-17 02:57:09.640503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:01.361 [2024-11-17 02:57:09.640711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:01.361 [2024-11-17 02:57:09.640754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:01.361 [2024-11-17 02:57:09.648477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:01.361 [2024-11-17 02:57:09.648701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:01.361 [2024-11-17 02:57:09.648744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:01.361 [2024-11-17 02:57:09.655967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:01.361 [2024-11-17 02:57:09.656195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:01.361 [2024-11-17 02:57:09.656234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:01.361 [2024-11-17 02:57:09.663280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:01.361 [2024-11-17 02:57:09.663402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:01.361 [2024-11-17 02:57:09.663461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:01.361 [2024-11-17 02:57:09.670598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:01.361 [2024-11-17 02:57:09.670811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:01.361 [2024-11-17 02:57:09.670855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:01.361 [2024-11-17 02:57:09.677887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:01.361 [2024-11-17 02:57:09.678115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:01.361 [2024-11-17 02:57:09.678172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:01.361 [2024-11-17 02:57:09.685121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:01.361 [2024-11-17 02:57:09.685259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:01.361 [2024-11-17 02:57:09.685299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:01.361 [2024-11-17 02:57:09.692452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:01.361 [2024-11-17 02:57:09.692674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:01.361 [2024-11-17 02:57:09.692718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:01.361 [2024-11-17 02:57:09.699838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:01.361 [2024-11-17 02:57:09.700042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:01.361 [2024-11-17 02:57:09.700087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:01.361 [2024-11-17 02:57:09.707147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:01.361 [2024-11-17 02:57:09.707357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:01.361 [2024-11-17 02:57:09.707396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:01.361 [2024-11-17 02:57:09.714340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:01.361 [2024-11-17 02:57:09.714519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:01.361 [2024-11-17 02:57:09.714561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:01.361 [2024-11-17 02:57:09.721562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:01.361 [2024-11-17 02:57:09.721772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:01.361 [2024-11-17 02:57:09.721823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:01.361 [2024-11-17 02:57:09.728837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:01.361 [2024-11-17 02:57:09.729081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:01.361 [2024-11-17 02:57:09.729147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:01.361 [2024-11-17 02:57:09.736143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:01.361 [2024-11-17 02:57:09.736350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:01.361 [2024-11-17 02:57:09.736394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:01.361 [2024-11-17 02:57:09.743376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:01.361 [2024-11-17 02:57:09.743608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:01.361 [2024-11-17 02:57:09.743651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:01.361 [2024-11-17 02:57:09.750718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:01.361 [2024-11-17 02:57:09.750948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:01.361 [2024-11-17 02:57:09.750991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:01.361 [2024-11-17 02:57:09.757956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:01.361 [2024-11-17 02:57:09.758205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:01.361 [2024-11-17 02:57:09.758243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:01.361 [2024-11-17 02:57:09.765472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:01.361 [2024-11-17 02:57:09.765693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:01.361 [2024-11-17 02:57:09.765737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:01.361 [2024-11-17 02:57:09.773269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:01.361 [2024-11-17 02:57:09.773481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:01.361 [2024-11-17 02:57:09.773524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:01.361 [2024-11-17 02:57:09.782162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:01.361 [2024-11-17 02:57:09.782269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:01.361 [2024-11-17 02:57:09.782312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:01.361 [2024-11-17 02:57:09.789942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:01.361 [2024-11-17 02:57:09.790072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:01.361 [2024-11-17 02:57:09.790146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:01.361 [2024-11-17 02:57:09.797328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:01.361 [2024-11-17 02:57:09.797425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:01.361 [2024-11-17 02:57:09.797482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:01.361 [2024-11-17 02:57:09.804274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:01.361 [2024-11-17 02:57:09.804386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:01.361 [2024-11-17 02:57:09.804426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:01.621 [2024-11-17 02:57:09.811511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:01.621 [2024-11-17 02:57:09.811620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:01.621 [2024-11-17 02:57:09.811668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:01.621 [2024-11-17 02:57:09.819005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:01.621 [2024-11-17 02:57:09.819141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:01.621 [2024-11-17 02:57:09.819206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:01.621 [2024-11-17 02:57:09.826650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:01.621 [2024-11-17 02:57:09.826771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:01.621 [2024-11-17 02:57:09.826816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:01.621 [2024-11-17 02:57:09.834249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:01.621 [2024-11-17 02:57:09.834353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:01.621 [2024-11-17 02:57:09.834390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:01.621 [2024-11-17 02:57:09.841768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:01.621 [2024-11-17 02:57:09.841884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:01.621 [2024-11-17 02:57:09.841928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:01.621 [2024-11-17 02:57:09.850360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:01.621 [2024-11-17 02:57:09.850558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:01.621 [2024-11-17 02:57:09.850602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:01.621 [2024-11-17 02:57:09.858035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:01.621 [2024-11-17 02:57:09.858195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:01.621 [2024-11-17 02:57:09.858234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:01.621 [2024-11-17 02:57:09.865651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:01.621 [2024-11-17 02:57:09.865857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:01.621 [2024-11-17 02:57:09.865898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:01.621 [2024-11-17 02:57:09.873211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:01.621 [2024-11-17 02:57:09.873410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:01.621 [2024-11-17 02:57:09.873461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:01.621 [2024-11-17 02:57:09.880483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:01.621 [2024-11-17 02:57:09.880684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:01.621 [2024-11-17 02:57:09.880727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:01.621 [2024-11-17 02:57:09.888411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:01.621 [2024-11-17 02:57:09.888617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:01.621 [2024-11-17 02:57:09.888660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:01.621 [2024-11-17 02:57:09.895913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:01.621 [2024-11-17 02:57:09.896180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:01.621 [2024-11-17 02:57:09.896221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:01.621 [2024-11-17 02:57:09.903242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:01.621 [2024-11-17 02:57:09.903496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:01.621 [2024-11-17 02:57:09.903542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:01.621 [2024-11-17 02:57:09.910311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:01.621 [2024-11-17 02:57:09.910553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:01.621 [2024-11-17 02:57:09.910614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:01.621 [2024-11-17 02:57:09.917565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:01.621 [2024-11-17 02:57:09.917776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:01.621 [2024-11-17 02:57:09.917830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:01.621 [2024-11-17 02:57:09.924747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:01.621 [2024-11-17 02:57:09.924962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:01.621 [2024-11-17 02:57:09.925007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:01.621 [2024-11-17 02:57:09.932212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:01.621 [2024-11-17 02:57:09.932431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:01.621 [2024-11-17 02:57:09.932473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:01.621 [2024-11-17 02:57:09.939515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:01.621 [2024-11-17 02:57:09.939718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:01.621 [2024-11-17 02:57:09.939760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:01.621 [2024-11-17 02:57:09.946716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:01.621 [2024-11-17 02:57:09.946920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:01.621 [2024-11-17 02:57:09.946963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22)
qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:01.621 [2024-11-17 02:57:09.953987] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:01.621 [2024-11-17 02:57:09.954191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:01.621 [2024-11-17 02:57:09.954231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:01.621 [2024-11-17 02:57:09.961139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:01.621 [2024-11-17 02:57:09.961341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:01.621 [2024-11-17 02:57:09.961380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:01.621 [2024-11-17 02:57:09.968385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:01.621 [2024-11-17 02:57:09.968532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:01.621 [2024-11-17 02:57:09.968575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:01.621 [2024-11-17 02:57:09.975719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:01.621 [2024-11-17 02:57:09.975927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:01.621 [2024-11-17 02:57:09.975971] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:01.621 [2024-11-17 02:57:09.983115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:01.621 [2024-11-17 02:57:09.983298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:01.621 [2024-11-17 02:57:09.983337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:01.621 [2024-11-17 02:57:09.990303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:01.621 [2024-11-17 02:57:09.990456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:01.621 [2024-11-17 02:57:09.990500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:01.621 [2024-11-17 02:57:09.997646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:01.621 [2024-11-17 02:57:09.997883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:01.621 [2024-11-17 02:57:09.997929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:01.621 [2024-11-17 02:57:10.004963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:01.621 [2024-11-17 02:57:10.005182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:37:01.621 [2024-11-17 02:57:10.005228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:01.621 [2024-11-17 02:57:10.012256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:01.621 [2024-11-17 02:57:10.012446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:01.621 [2024-11-17 02:57:10.012492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:01.621 [2024-11-17 02:57:10.019108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:01.622 [2024-11-17 02:57:10.019352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:01.622 [2024-11-17 02:57:10.019392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:01.622 [2024-11-17 02:57:10.026163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:01.622 [2024-11-17 02:57:10.026430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:01.622 [2024-11-17 02:57:10.026477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:01.622 [2024-11-17 02:57:10.032881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:01.622 [2024-11-17 02:57:10.033171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:01.622 [2024-11-17 02:57:10.033216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:01.622 [2024-11-17 02:57:10.039224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:01.622 [2024-11-17 02:57:10.039563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:01.622 [2024-11-17 02:57:10.039623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:01.622 [2024-11-17 02:57:10.046351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:01.622 [2024-11-17 02:57:10.046775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:01.622 [2024-11-17 02:57:10.046817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:01.622 [2024-11-17 02:57:10.053684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:01.622 [2024-11-17 02:57:10.054078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:01.622 [2024-11-17 02:57:10.054133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:01.622 [2024-11-17 02:57:10.061210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:01.622 [2024-11-17 
02:57:10.061598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:01.622 [2024-11-17 02:57:10.061645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:01.622 [2024-11-17 02:57:10.068453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:01.622 [2024-11-17 02:57:10.068894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:01.622 [2024-11-17 02:57:10.068939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:01.622 [2024-11-17 02:57:10.075754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:01.622 [2024-11-17 02:57:10.076171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:01.622 [2024-11-17 02:57:10.076224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:01.881 [2024-11-17 02:57:10.082812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:01.881 [2024-11-17 02:57:10.083263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:01.881 [2024-11-17 02:57:10.083304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:01.881 [2024-11-17 02:57:10.090200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:01.881 [2024-11-17 02:57:10.090588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:01.881 [2024-11-17 02:57:10.090642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:01.881 [2024-11-17 02:57:10.097792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:01.881 [2024-11-17 02:57:10.098188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:01.881 [2024-11-17 02:57:10.098233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:01.881 [2024-11-17 02:57:10.105041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:01.881 [2024-11-17 02:57:10.105477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:01.881 [2024-11-17 02:57:10.105521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:01.881 [2024-11-17 02:57:10.112044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:01.881 [2024-11-17 02:57:10.112469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:01.881 [2024-11-17 02:57:10.112516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:01.881 [2024-11-17 
02:57:10.119183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:01.881 [2024-11-17 02:57:10.119637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:01.881 [2024-11-17 02:57:10.119681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:01.881 [2024-11-17 02:57:10.126133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:01.881 [2024-11-17 02:57:10.126523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:01.881 [2024-11-17 02:57:10.126561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:01.881 [2024-11-17 02:57:10.133092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:01.881 [2024-11-17 02:57:10.133513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:01.881 [2024-11-17 02:57:10.133557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:01.881 [2024-11-17 02:57:10.140420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:01.881 [2024-11-17 02:57:10.140839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:01.881 [2024-11-17 02:57:10.140893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:01.881 [2024-11-17 02:57:10.147373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:01.881 [2024-11-17 02:57:10.147733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:01.881 [2024-11-17 02:57:10.147778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:01.881 [2024-11-17 02:57:10.154233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:01.881 [2024-11-17 02:57:10.154627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:01.881 [2024-11-17 02:57:10.154680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:01.881 [2024-11-17 02:57:10.161050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:01.881 [2024-11-17 02:57:10.161488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:01.881 [2024-11-17 02:57:10.161545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:01.881 [2024-11-17 02:57:10.168132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:01.881 [2024-11-17 02:57:10.168508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:01.881 [2024-11-17 02:57:10.168563] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:01.881 [2024-11-17 02:57:10.175018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:01.881 [2024-11-17 02:57:10.175475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:01.881 [2024-11-17 02:57:10.175541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:01.881 [2024-11-17 02:57:10.182009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:01.881 [2024-11-17 02:57:10.182397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:01.881 [2024-11-17 02:57:10.182450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:01.881 [2024-11-17 02:57:10.188859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:01.881 [2024-11-17 02:57:10.189309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:01.881 [2024-11-17 02:57:10.189350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:01.881 [2024-11-17 02:57:10.195834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:01.881 [2024-11-17 02:57:10.196235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:37:01.881 [2024-11-17 02:57:10.196279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:01.881 [2024-11-17 02:57:10.202661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:01.881 [2024-11-17 02:57:10.203043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:01.881 [2024-11-17 02:57:10.203088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:01.881 [2024-11-17 02:57:10.209577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:01.881 [2024-11-17 02:57:10.209960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:01.881 [2024-11-17 02:57:10.210010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:01.881 [2024-11-17 02:57:10.216366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:01.881 [2024-11-17 02:57:10.216750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:01.881 [2024-11-17 02:57:10.216818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:01.881 [2024-11-17 02:57:10.223313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:01.881 [2024-11-17 02:57:10.223700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:01.881 [2024-11-17 02:57:10.223755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:01.881 [2024-11-17 02:57:10.230109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:01.881 [2024-11-17 02:57:10.230539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:01.881 [2024-11-17 02:57:10.230583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:01.881 [2024-11-17 02:57:10.236972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:01.881 [2024-11-17 02:57:10.237382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:01.881 [2024-11-17 02:57:10.237442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:01.881 [2024-11-17 02:57:10.243799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:01.881 [2024-11-17 02:57:10.244193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:01.881 [2024-11-17 02:57:10.244243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:01.881 [2024-11-17 02:57:10.250631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:01.881 [2024-11-17 
02:57:10.251034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:01.881 [2024-11-17 02:57:10.251084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:01.881 [2024-11-17 02:57:10.257487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:01.881 [2024-11-17 02:57:10.257845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:01.881 [2024-11-17 02:57:10.257890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:01.881 [2024-11-17 02:57:10.264341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:01.881 [2024-11-17 02:57:10.264768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:01.881 [2024-11-17 02:57:10.264820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:01.881 [2024-11-17 02:57:10.271189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:01.881 [2024-11-17 02:57:10.271601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:01.881 [2024-11-17 02:57:10.271651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:01.881 [2024-11-17 02:57:10.277908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:01.881 [2024-11-17 02:57:10.278336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:01.881 [2024-11-17 02:57:10.278377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:01.882 [2024-11-17 02:57:10.284778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:01.882 [2024-11-17 02:57:10.285169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:01.882 [2024-11-17 02:57:10.285210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:01.882 [2024-11-17 02:57:10.291710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:01.882 [2024-11-17 02:57:10.292089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:01.882 [2024-11-17 02:57:10.292147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:01.882 [2024-11-17 02:57:10.299474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:01.882 [2024-11-17 02:57:10.299975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:01.882 [2024-11-17 02:57:10.300015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:01.882 [2024-11-17 
02:57:10.306519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:01.882 [2024-11-17 02:57:10.306894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:01.882 [2024-11-17 02:57:10.306946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:01.882 [2024-11-17 02:57:10.313435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:01.882 [2024-11-17 02:57:10.313839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:01.882 [2024-11-17 02:57:10.313884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:01.882 [2024-11-17 02:57:10.320498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:01.882 [2024-11-17 02:57:10.320907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:01.882 [2024-11-17 02:57:10.320962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:01.882 [2024-11-17 02:57:10.327603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:01.882 [2024-11-17 02:57:10.327985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:01.882 [2024-11-17 02:57:10.328055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:01.882 [2024-11-17 02:57:10.334613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:01.882 [2024-11-17 02:57:10.335087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:01.882 [2024-11-17 02:57:10.335167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:02.140 [2024-11-17 02:57:10.341517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:02.140 [2024-11-17 02:57:10.341924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:02.140 [2024-11-17 02:57:10.341988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:02.140 [2024-11-17 02:57:10.348356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:02.140 [2024-11-17 02:57:10.348784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:02.140 [2024-11-17 02:57:10.348839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:02.140 [2024-11-17 02:57:10.355503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:02.140 [2024-11-17 02:57:10.355880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:02.140 [2024-11-17 02:57:10.355924] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:02.140 4174.50 IOPS, 521.81 MiB/s 00:37:02.140 Latency(us) 00:37:02.140 [2024-11-17T01:57:10.600Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:02.140 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:37:02.140 nvme0n1 : 2.01 4173.34 521.67 0.00 0.00 3823.08 3009.80 9077.95 00:37:02.140 [2024-11-17T01:57:10.600Z] =================================================================================================================== 00:37:02.140 [2024-11-17T01:57:10.600Z] Total : 4173.34 521.67 0.00 0.00 3823.08 3009.80 9077.95 00:37:02.140 { 00:37:02.140 "results": [ 00:37:02.140 { 00:37:02.140 "job": "nvme0n1", 00:37:02.140 "core_mask": "0x2", 00:37:02.140 "workload": "randwrite", 00:37:02.140 "status": "finished", 00:37:02.140 "queue_depth": 16, 00:37:02.140 "io_size": 131072, 00:37:02.140 "runtime": 2.005589, 00:37:02.140 "iops": 4173.337608054292, 00:37:02.140 "mibps": 521.6672010067865, 00:37:02.140 "io_failed": 0, 00:37:02.140 "io_timeout": 0, 00:37:02.140 "avg_latency_us": 3823.076155936103, 00:37:02.140 "min_latency_us": 3009.8014814814815, 00:37:02.140 "max_latency_us": 9077.94962962963 00:37:02.140 } 00:37:02.140 ], 00:37:02.140 "core_count": 1 00:37:02.140 } 00:37:02.140 02:57:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:37:02.140 02:57:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:37:02.140 02:57:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:37:02.140 | .driver_specific 00:37:02.140 | .nvme_error 00:37:02.140 | .status_code 00:37:02.140 | .command_transient_transport_error' 00:37:02.140 02:57:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:37:02.397 02:57:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 270 > 0 )) 00:37:02.397 02:57:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3134273 00:37:02.397 02:57:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3134273 ']' 00:37:02.397 02:57:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3134273 00:37:02.397 02:57:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:37:02.397 02:57:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:02.397 02:57:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3134273 00:37:02.397 02:57:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:02.397 02:57:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:02.397 02:57:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3134273' 00:37:02.397 killing process with pid 3134273 00:37:02.397 02:57:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3134273 00:37:02.397 Received shutdown signal, test time was about 2.000000 seconds 00:37:02.397 00:37:02.397 Latency(us) 00:37:02.397 [2024-11-17T01:57:10.857Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:02.397 [2024-11-17T01:57:10.857Z] =================================================================================================================== 00:37:02.397 [2024-11-17T01:57:10.857Z] Total : 
0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:02.397 02:57:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3134273 00:37:03.331 02:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3132133 00:37:03.331 02:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3132133 ']' 00:37:03.331 02:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3132133 00:37:03.331 02:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:37:03.331 02:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:03.331 02:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3132133 00:37:03.331 02:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:03.331 02:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:03.331 02:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3132133' 00:37:03.331 killing process with pid 3132133 00:37:03.331 02:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3132133 00:37:03.331 02:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3132133 00:37:04.706 00:37:04.706 real 0m23.121s 00:37:04.706 user 0m45.240s 00:37:04.706 sys 0m4.719s 00:37:04.706 02:57:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:04.706 02:57:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:04.706 
************************************ 00:37:04.706 END TEST nvmf_digest_error 00:37:04.706 ************************************ 00:37:04.706 02:57:12 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:37:04.706 02:57:12 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:37:04.706 02:57:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:04.706 02:57:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:37:04.706 02:57:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:04.706 02:57:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:37:04.706 02:57:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:04.706 02:57:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:04.706 rmmod nvme_tcp 00:37:04.706 rmmod nvme_fabrics 00:37:04.706 rmmod nvme_keyring 00:37:04.706 02:57:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:04.706 02:57:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:37:04.706 02:57:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:37:04.706 02:57:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 3132133 ']' 00:37:04.706 02:57:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 3132133 00:37:04.706 02:57:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 3132133 ']' 00:37:04.706 02:57:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 3132133 00:37:04.706 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3132133) - No such process 00:37:04.706 02:57:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 3132133 is not found' 00:37:04.706 Process with pid 3132133 is 
not found 00:37:04.706 02:57:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:04.706 02:57:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:04.706 02:57:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:04.706 02:57:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:37:04.706 02:57:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:37:04.706 02:57:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:04.706 02:57:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:37:04.706 02:57:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:04.706 02:57:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:04.706 02:57:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:04.706 02:57:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:04.706 02:57:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:06.610 02:57:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:06.610 00:37:06.610 real 0m52.212s 00:37:06.610 user 1m33.492s 00:37:06.610 sys 0m11.084s 00:37:06.610 02:57:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:06.610 02:57:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:37:06.610 ************************************ 00:37:06.610 END TEST nvmf_digest 00:37:06.610 ************************************ 00:37:06.610 02:57:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:37:06.610 02:57:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:37:06.610 02:57:14 
nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:37:06.610 02:57:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:37:06.610 02:57:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:37:06.610 02:57:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:06.610 02:57:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:37:06.610 ************************************ 00:37:06.610 START TEST nvmf_bdevperf 00:37:06.610 ************************************ 00:37:06.610 02:57:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:37:06.610 * Looking for test storage... 00:37:06.610 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:37:06.610 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:06.610 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:37:06.610 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:06.870 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:06.870 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:06.870 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:06.870 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:06.870 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:37:06.870 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:37:06.871 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
scripts/common.sh@337 -- # IFS=.-: 00:37:06.871 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:37:06.871 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:37:06.871 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:37:06.871 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:37:06.871 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:06.871 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:37:06.871 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:37:06.871 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:06.871 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:06.871 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:37:06.871 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:37:06.871 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:06.871 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:37:06.871 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:37:06.871 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:37:06.871 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:37:06.871 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:06.871 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:37:06.871 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:37:06.871 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( 
ver1[v] > ver2[v] )) 00:37:06.871 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:06.871 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:37:06.871 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:06.871 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:06.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:06.871 --rc genhtml_branch_coverage=1 00:37:06.871 --rc genhtml_function_coverage=1 00:37:06.871 --rc genhtml_legend=1 00:37:06.871 --rc geninfo_all_blocks=1 00:37:06.871 --rc geninfo_unexecuted_blocks=1 00:37:06.871 00:37:06.871 ' 00:37:06.871 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:06.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:06.871 --rc genhtml_branch_coverage=1 00:37:06.871 --rc genhtml_function_coverage=1 00:37:06.871 --rc genhtml_legend=1 00:37:06.871 --rc geninfo_all_blocks=1 00:37:06.871 --rc geninfo_unexecuted_blocks=1 00:37:06.871 00:37:06.871 ' 00:37:06.871 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:06.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:06.871 --rc genhtml_branch_coverage=1 00:37:06.871 --rc genhtml_function_coverage=1 00:37:06.871 --rc genhtml_legend=1 00:37:06.871 --rc geninfo_all_blocks=1 00:37:06.871 --rc geninfo_unexecuted_blocks=1 00:37:06.871 00:37:06.871 ' 00:37:06.871 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:06.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:06.871 --rc genhtml_branch_coverage=1 00:37:06.871 --rc genhtml_function_coverage=1 00:37:06.871 --rc genhtml_legend=1 00:37:06.871 --rc geninfo_all_blocks=1 
00:37:06.871 --rc geninfo_unexecuted_blocks=1 00:37:06.871 00:37:06.871 ' 00:37:06.871 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:06.871 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:37:06.871 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:06.871 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:06.871 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:06.871 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:06.871 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:06.871 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:06.871 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:06.871 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:06.871 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:06.871 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:06.871 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:06.871 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:06.871 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:06.871 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:06.871 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:37:06.871 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:06.871 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:06.871 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:37:06.871 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:06.871 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:06.871 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:06.871 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:06.871 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:06.871 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:06.871 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:37:06.871 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:06.871 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:37:06.871 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:06.871 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:06.871 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:06.871 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:06.871 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:06.871 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:06.871 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:06.871 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:06.871 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:06.871 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:06.871 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:06.871 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:06.871 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf 
-- host/bdevperf.sh@24 -- # nvmftestinit 00:37:06.871 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:06.871 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:06.871 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:06.871 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:06.871 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:06.871 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:06.871 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:06.871 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:06.871 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:06.871 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:06.871 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:37:06.871 02:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:08.775 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:08.775 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:37:08.775 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:08.775 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:08.775 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:08.775 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:08.775 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf 
-- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:08.775 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:37:08.775 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:08.775 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:37:08.775 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:37:08.775 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:37:08.775 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:37:08.775 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:37:08.775 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:37:08.775 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:08.775 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:08.775 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:08.775 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:08.775 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:08.775 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:08.775 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:08.775 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:08.775 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:08.775 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:08.775 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:08.775 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:08.775 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:08.775 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:08.775 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:08.775 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:08.775 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:08.775 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:08.775 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:08.775 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:08.775 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:08.775 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:08.775 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:08.775 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:08.775 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:08.775 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:08.775 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:08.775 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:08.775 Found 
0000:0a:00.1 (0x8086 - 0x159b) 00:37:08.775 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:08.775 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:08.775 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:08.775 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:08.775 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:08.775 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:08.775 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:08.775 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:08.775 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:08.775 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:08.775 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:08.775 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:08.775 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:08.775 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:08.775 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:08.775 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:08.775 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:08.775 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:08.775 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:08.775 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:08.775 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:08.775 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:08.775 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:08.775 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:08.775 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:08.775 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:08.775 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:08.775 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:08.775 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:08.775 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:37:08.775 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:08.775 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:08.775 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:08.775 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:08.775 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:08.775 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:08.775 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:08.775 02:57:17 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:08.775 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:08.775 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:08.775 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:08.775 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:08.775 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:08.775 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:08.775 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:09.034 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:09.034 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:09.034 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:09.034 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:09.034 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:09.034 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:09.034 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:09.034 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:09.034 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
00:37:09.034 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:09.034 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:09.034 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:09.034 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.317 ms 00:37:09.034 00:37:09.034 --- 10.0.0.2 ping statistics --- 00:37:09.034 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:09.034 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:37:09.034 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:09.035 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:09.035 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:37:09.035 00:37:09.035 --- 10.0.0.1 ping statistics --- 00:37:09.035 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:09.035 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:37:09.035 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:09.035 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:37:09.035 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:09.035 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:09.035 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:09.035 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:09.035 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:09.035 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:09.035 02:57:17 
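The nvmf_tcp_init sequence logged above (nvmf/common.sh@250-291) moves one port of the E810 NIC into a private network namespace so the target and initiator can talk over real hardware on a single host, opens TCP port 4420 in the firewall, and pings in both directions before starting the target. A dry-run sketch of that sequence, mirroring the logged commands; the `RUN` wrapper is an addition for illustration (default `RUN=echo` only prints; set `RUN=` to execute, which requires root):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace setup performed by nvmf_tcp_init.
# RUN=echo (the default here) prints each command; RUN= executes it (root).
RUN=${RUN-echo}

TARGET_IF=cvl_0_0        # moved into the namespace, gets the target IP
INITIATOR_IF=cvl_0_1     # stays in the root namespace
NS=cvl_0_0_ns_spdk

$RUN ip -4 addr flush "$TARGET_IF"
$RUN ip -4 addr flush "$INITIATOR_IF"
$RUN ip netns add "$NS"
$RUN ip link set "$TARGET_IF" netns "$NS"
$RUN ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
$RUN ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
$RUN ip link set "$INITIATOR_IF" up
$RUN ip netns exec "$NS" ip link set "$TARGET_IF" up
$RUN ip netns exec "$NS" ip link set lo up
# Open the NVMe/TCP port on the initiator-side interface; the comment tag
# lets the harness find and remove the rule at teardown.
$RUN iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
# Verify connectivity in both directions before starting nvmf_tgt.
$RUN ping -c 1 10.0.0.2
$RUN ip netns exec "$NS" ping -c 1 10.0.0.1
```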
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:09.035 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:37:09.035 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:37:09.035 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:09.035 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:09.035 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:09.035 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3137515 00:37:09.035 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:37:09.035 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3137515 00:37:09.035 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 3137515 ']' 00:37:09.035 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:09.035 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:09.035 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:09.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:09.035 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:09.035 02:57:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:09.035 [2024-11-17 02:57:17.481701] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:37:09.035 [2024-11-17 02:57:17.481845] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:09.292 [2024-11-17 02:57:17.629959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:09.550 [2024-11-17 02:57:17.752821] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:09.550 [2024-11-17 02:57:17.752891] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:09.550 [2024-11-17 02:57:17.752915] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:09.550 [2024-11-17 02:57:17.752936] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:09.550 [2024-11-17 02:57:17.752955] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:37:09.550 [2024-11-17 02:57:17.755636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:09.550 [2024-11-17 02:57:17.755682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:09.550 [2024-11-17 02:57:17.755687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:10.117 02:57:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:10.117 02:57:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:37:10.117 02:57:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:10.117 02:57:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:10.117 02:57:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:10.117 02:57:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:10.117 02:57:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:10.117 02:57:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:10.117 02:57:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:10.117 [2024-11-17 02:57:18.507262] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:10.117 02:57:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:10.117 02:57:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:10.117 02:57:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:10.117 02:57:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:10.375 Malloc0 00:37:10.375 02:57:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 
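The `waitforlisten 3137515` call above (common/autotest_common.sh) blocks until the freshly launched nvmf_tgt both stays alive and exposes its RPC socket at /var/tmp/spdk.sock, printing the "Waiting for process to start up..." message meanwhile. A minimal sketch of that polling loop, with a simplified signature (the real helper also retries rpc.py against a configurable RPC address):

```shell
# Poll until process $1 is alive and unix socket $2 exists, or give up.
# Returns 0 once the socket appears, 1 if the process died or we timed out.
waitforlisten() {
    local pid=$1 rpc_sock=${2:-/var/tmp/spdk.sock} max_retries=${3:-100}
    local i
    for ((i = 0; i < max_retries; i++)); do
        # Bail out early if the target crashed during startup.
        kill -0 "$pid" 2>/dev/null || return 1
        [[ -S $rpc_sock ]] && return 0
        sleep 0.1
    done
    return 1
}
```

The harness invokes it as `waitforlisten $nvmfpid` immediately after starting the target under `ip netns exec`.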
-- # [[ 0 == 0 ]] 00:37:10.375 02:57:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:10.375 02:57:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:10.375 02:57:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:10.375 02:57:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:10.375 02:57:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:10.375 02:57:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:10.375 02:57:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:10.375 02:57:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:10.375 02:57:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:10.375 02:57:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:10.375 02:57:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:10.375 [2024-11-17 02:57:18.620777] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:10.375 02:57:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:10.375 02:57:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:37:10.375 02:57:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:37:10.375 02:57:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:37:10.375 
02:57:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:37:10.375 02:57:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:10.375 02:57:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:10.375 { 00:37:10.375 "params": { 00:37:10.375 "name": "Nvme$subsystem", 00:37:10.375 "trtype": "$TEST_TRANSPORT", 00:37:10.375 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:10.375 "adrfam": "ipv4", 00:37:10.375 "trsvcid": "$NVMF_PORT", 00:37:10.375 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:10.375 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:10.375 "hdgst": ${hdgst:-false}, 00:37:10.375 "ddgst": ${ddgst:-false} 00:37:10.375 }, 00:37:10.375 "method": "bdev_nvme_attach_controller" 00:37:10.375 } 00:37:10.375 EOF 00:37:10.375 )") 00:37:10.375 02:57:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:37:10.375 02:57:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:37:10.375 02:57:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:37:10.375 02:57:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:10.375 "params": { 00:37:10.375 "name": "Nvme1", 00:37:10.375 "trtype": "tcp", 00:37:10.375 "traddr": "10.0.0.2", 00:37:10.375 "adrfam": "ipv4", 00:37:10.375 "trsvcid": "4420", 00:37:10.375 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:10.375 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:10.375 "hdgst": false, 00:37:10.375 "ddgst": false 00:37:10.375 }, 00:37:10.375 "method": "bdev_nvme_attach_controller" 00:37:10.375 }' 00:37:10.375 [2024-11-17 02:57:18.713037] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
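The gen_nvmf_target_json steps above (nvmf/common.sh@560-586) build bdevperf's `--json` config on the fly: one heredoc fragment per subsystem number, expanded with the run's transport settings and then filtered through jq. A simplified sketch of that pattern, hard-coding the values substituted in this run (trtype tcp, traddr 10.0.0.2, trsvcid 4420) and skipping the jq wrapping:

```shell
# Simplified gen_nvmf_target_json: emit one bdev_nvme_attach_controller
# entry per subsystem number given as an argument (default: subsystem 1).
gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # The real helper joins the fragments with IFS=, and pipes the result
    # through jq into a complete bdev subsystem config for bdevperf.
    local IFS=,
    printf '%s\n' "${config[*]}"
}
```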
00:37:10.375 [2024-11-17 02:57:18.713200] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3137674 ] 00:37:10.634 [2024-11-17 02:57:18.847746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:10.634 [2024-11-17 02:57:18.972869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:11.198 Running I/O for 1 seconds... 00:37:12.131 6077.00 IOPS, 23.74 MiB/s 00:37:12.131 Latency(us) 00:37:12.131 [2024-11-17T01:57:20.591Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:12.131 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:37:12.131 Verification LBA range: start 0x0 length 0x4000 00:37:12.131 Nvme1n1 : 1.02 6140.83 23.99 0.00 0.00 20749.85 4174.89 17961.72 00:37:12.131 [2024-11-17T01:57:20.591Z] =================================================================================================================== 00:37:12.131 [2024-11-17T01:57:20.591Z] Total : 6140.83 23.99 0.00 0.00 20749.85 4174.89 17961.72 00:37:13.066 02:57:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3137945 00:37:13.066 02:57:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:37:13.066 02:57:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:37:13.066 02:57:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:37:13.066 02:57:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:37:13.066 02:57:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:37:13.066 02:57:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for 
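In the bdevperf result table above, the MiB/s column is simply IOPS times the 4096-byte I/O size (`-o 4096`). A quick awk sanity check of the two figures reported so far (6077.00 IOPS during the first 1-second run and 6140.83 IOPS for the verification pass):

```shell
# MiB/s = IOPS * io_size_bytes / 2^20, checked against the logged rows.
awk 'BEGIN { printf "%.2f\n", 6077.00 * 4096 / (1024 * 1024) }'   # prints 23.74
awk 'BEGIN { printf "%.2f\n", 6140.83 * 4096 / (1024 * 1024) }'   # prints 23.99
```

Both match the MiB/s values in the table.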
subsystem in "${@:-1}" 00:37:13.066 02:57:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:13.066 { 00:37:13.066 "params": { 00:37:13.066 "name": "Nvme$subsystem", 00:37:13.066 "trtype": "$TEST_TRANSPORT", 00:37:13.066 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:13.066 "adrfam": "ipv4", 00:37:13.066 "trsvcid": "$NVMF_PORT", 00:37:13.066 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:13.066 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:13.066 "hdgst": ${hdgst:-false}, 00:37:13.066 "ddgst": ${ddgst:-false} 00:37:13.066 }, 00:37:13.066 "method": "bdev_nvme_attach_controller" 00:37:13.066 } 00:37:13.066 EOF 00:37:13.066 )") 00:37:13.066 02:57:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:37:13.066 02:57:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:37:13.066 02:57:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:37:13.066 02:57:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:13.066 "params": { 00:37:13.066 "name": "Nvme1", 00:37:13.066 "trtype": "tcp", 00:37:13.066 "traddr": "10.0.0.2", 00:37:13.066 "adrfam": "ipv4", 00:37:13.066 "trsvcid": "4420", 00:37:13.066 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:13.066 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:13.066 "hdgst": false, 00:37:13.066 "ddgst": false 00:37:13.066 }, 00:37:13.066 "method": "bdev_nvme_attach_controller" 00:37:13.066 }' 00:37:13.066 [2024-11-17 02:57:21.403350] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:37:13.066 [2024-11-17 02:57:21.403486] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3137945 ] 00:37:13.324 [2024-11-17 02:57:21.538715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:13.324 [2024-11-17 02:57:21.665554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:13.889 Running I/O for 15 seconds... 00:37:16.211 6297.00 IOPS, 24.60 MiB/s [2024-11-17T01:57:24.671Z] 6312.00 IOPS, 24.66 MiB/s [2024-11-17T01:57:24.671Z] 02:57:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3137515 00:37:16.211 02:57:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:37:16.211 [2024-11-17 02:57:24.345211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:104064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.211 [2024-11-17 02:57:24.345281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.211 [2024-11-17 02:57:24.345335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:104448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.211 [2024-11-17 02:57:24.345361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.211 [2024-11-17 02:57:24.345399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:104456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.211 [2024-11-17 02:57:24.345424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.211 [2024-11-17 02:57:24.345470] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:104464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.211 [2024-11-17 02:57:24.345495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.211 [2024-11-17 02:57:24.345522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:104472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.211 [2024-11-17 02:57:24.345548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.211 [2024-11-17 02:57:24.345576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:104480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.211 [2024-11-17 02:57:24.345602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.211 [2024-11-17 02:57:24.345630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:104488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.211 [2024-11-17 02:57:24.345656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.211 [2024-11-17 02:57:24.345683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:104496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.211 [2024-11-17 02:57:24.345707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.211 [2024-11-17 02:57:24.345733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:104504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.211 [2024-11-17 02:57:24.345758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:37:16.211 [2024-11-17 02:57:24.345785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:104512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.211 [2024-11-17 02:57:24.345829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.211 [2024-11-17 02:57:24.345860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:104520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.211 [2024-11-17 02:57:24.345885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.211 [2024-11-17 02:57:24.345912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:104528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.211 [2024-11-17 02:57:24.345946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.211 [2024-11-17 02:57:24.345973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:104536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.211 [2024-11-17 02:57:24.345998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.211 [2024-11-17 02:57:24.346025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:104544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.211 [2024-11-17 02:57:24.346049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.211 [2024-11-17 02:57:24.346075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:104552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.211 [2024-11-17 02:57:24.346111] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.211 [2024-11-17 02:57:24.346141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:104560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.211 [2024-11-17 02:57:24.346181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.211 [2024-11-17 02:57:24.346205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:104568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.211 [2024-11-17 02:57:24.346227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.211 [2024-11-17 02:57:24.346251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:104576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.211 [2024-11-17 02:57:24.346273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.211 [2024-11-17 02:57:24.346297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:104584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.211 [2024-11-17 02:57:24.346319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.211 [2024-11-17 02:57:24.346342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:104592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.211 [2024-11-17 02:57:24.346364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.211 [2024-11-17 02:57:24.346411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:119 nsid:1 lba:104600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.211 [2024-11-17 02:57:24.346435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.211 [2024-11-17 02:57:24.346461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:104608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.211 [2024-11-17 02:57:24.346485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.211 [2024-11-17 02:57:24.346511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:104616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.211 [2024-11-17 02:57:24.346535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.211 [2024-11-17 02:57:24.346561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:104624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.211 [2024-11-17 02:57:24.346585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.211 [2024-11-17 02:57:24.346616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:104632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.211 [2024-11-17 02:57:24.346642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.211 [2024-11-17 02:57:24.346668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:104640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.211 [2024-11-17 02:57:24.346692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:37:16.211 [2024-11-17 02:57:24.346718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:104648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.211 [2024-11-17 02:57:24.346742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.211 [2024-11-17 02:57:24.346767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:104656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.212 [2024-11-17 02:57:24.346792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.212 [2024-11-17 02:57:24.346818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:104664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.212 [2024-11-17 02:57:24.346842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.212 [2024-11-17 02:57:24.346868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:104672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.212 [2024-11-17 02:57:24.346892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.212 [2024-11-17 02:57:24.346917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:104680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.212 [2024-11-17 02:57:24.346941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.212 [2024-11-17 02:57:24.346968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:104688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.212 [2024-11-17 02:57:24.346992] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.212 [2024-11-17 02:57:24.347018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:104696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.212 [2024-11-17 02:57:24.347042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.212 [2024-11-17 02:57:24.347069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:104704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.212 [2024-11-17 02:57:24.347111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.212 [2024-11-17 02:57:24.347156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:104712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.212 [2024-11-17 02:57:24.347179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.212 [2024-11-17 02:57:24.347203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:104720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.212 [2024-11-17 02:57:24.347224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.212 [2024-11-17 02:57:24.347249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:104728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.212 [2024-11-17 02:57:24.347275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.212 [2024-11-17 02:57:24.347300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 
lba:104736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.212 [2024-11-17 02:57:24.347321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.212 [2024-11-17 02:57:24.347345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:104744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.212 [2024-11-17 02:57:24.347367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.212 [2024-11-17 02:57:24.347409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:104752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.212 [2024-11-17 02:57:24.347433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.212 [2024-11-17 02:57:24.347459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:104760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.212 [2024-11-17 02:57:24.347484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.212 [2024-11-17 02:57:24.347512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:104768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.212 [2024-11-17 02:57:24.347537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.212 [2024-11-17 02:57:24.347563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:104776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.212 [2024-11-17 02:57:24.347588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.212 
[2024-11-17 02:57:24.347615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:104784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.212 [2024-11-17 02:57:24.347640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.212 [2024-11-17 02:57:24.347666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:104792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.212 [2024-11-17 02:57:24.347690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.212 [2024-11-17 02:57:24.347716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:104800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.212 [2024-11-17 02:57:24.347740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.212 [2024-11-17 02:57:24.347767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:104808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.212 [2024-11-17 02:57:24.347792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.212 [2024-11-17 02:57:24.347818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:104816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.212 [2024-11-17 02:57:24.347843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.212 [2024-11-17 02:57:24.347869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:104824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.212 [2024-11-17 02:57:24.347894] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.212 [2024-11-17 02:57:24.347925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:104832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.212 [2024-11-17 02:57:24.347950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.212 [2024-11-17 02:57:24.347976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:104840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.212 [2024-11-17 02:57:24.348000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.212 [2024-11-17 02:57:24.348026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:104848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.212 [2024-11-17 02:57:24.348050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.212 [2024-11-17 02:57:24.348087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:104856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.212 [2024-11-17 02:57:24.348120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.212 [2024-11-17 02:57:24.348163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:104864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.212 [2024-11-17 02:57:24.348186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.212 [2024-11-17 02:57:24.348211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 
lba:104872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.212 [2024-11-17 02:57:24.348233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.212 [2024-11-17 02:57:24.348257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:104880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.212 [2024-11-17 02:57:24.348278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.212 [2024-11-17 02:57:24.348302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:104888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.212 [2024-11-17 02:57:24.348323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.212 [2024-11-17 02:57:24.348347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:104896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.212 [2024-11-17 02:57:24.348379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.212 [2024-11-17 02:57:24.348420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:104904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.212 [2024-11-17 02:57:24.348445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.212 [2024-11-17 02:57:24.348472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:104912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.212 [2024-11-17 02:57:24.348497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.212 
[2024-11-17 02:57:24.348523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:104920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.212 [2024-11-17 02:57:24.348546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.212 [2024-11-17 02:57:24.348573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:104928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.212 [2024-11-17 02:57:24.348597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.212 [2024-11-17 02:57:24.348629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:104936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.212 [2024-11-17 02:57:24.348654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.212 [2024-11-17 02:57:24.348680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:104944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.212 [2024-11-17 02:57:24.348704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.212 [2024-11-17 02:57:24.348731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:104952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.212 [2024-11-17 02:57:24.348756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.212 [2024-11-17 02:57:24.348782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:104960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.212 [2024-11-17 02:57:24.348806] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.212 [2024-11-17 02:57:24.348833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:104968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.212 [2024-11-17 02:57:24.348857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.213 [2024-11-17 02:57:24.348884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:104976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.213 [2024-11-17 02:57:24.348907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.213 [2024-11-17 02:57:24.348933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:104984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.213 [2024-11-17 02:57:24.348957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.213 [2024-11-17 02:57:24.348983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:104992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.213 [2024-11-17 02:57:24.349006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.213 [2024-11-17 02:57:24.349033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:105000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.213 [2024-11-17 02:57:24.349057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.213 [2024-11-17 02:57:24.349093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 
lba:105008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.213 [2024-11-17 02:57:24.349128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.213 [2024-11-17 02:57:24.349171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:105016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.213 [2024-11-17 02:57:24.349194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.213 [2024-11-17 02:57:24.349218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:105024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.213 [2024-11-17 02:57:24.349255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.213 [2024-11-17 02:57:24.349282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:105032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.213 [2024-11-17 02:57:24.349308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.213 [2024-11-17 02:57:24.349334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:105040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.213 [2024-11-17 02:57:24.349357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.213 [2024-11-17 02:57:24.349406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:105048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.213 [2024-11-17 02:57:24.349426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.213 
[2024-11-17 02:57:24.349449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:105056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.213 [2024-11-17 02:57:24.349490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.213 [2024-11-17 02:57:24.349517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:105064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.213 [2024-11-17 02:57:24.349541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.213 [2024-11-17 02:57:24.349567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:105072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.213 [2024-11-17 02:57:24.349591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.213 [2024-11-17 02:57:24.349619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:105080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.213 [2024-11-17 02:57:24.349644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.213 [2024-11-17 02:57:24.349671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:104072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.213 [2024-11-17 02:57:24.349695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.213 [2024-11-17 02:57:24.349722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:104080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.213 [2024-11-17 02:57:24.349747] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.213 [2024-11-17 02:57:24.349774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:104088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.213 [2024-11-17 02:57:24.349798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.213 [2024-11-17 02:57:24.349824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:104096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.213 [2024-11-17 02:57:24.349847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.213 [2024-11-17 02:57:24.349872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:104104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.213 [2024-11-17 02:57:24.349896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.213 [2024-11-17 02:57:24.349923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:104112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.213 [2024-11-17 02:57:24.349947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.213 [2024-11-17 02:57:24.349978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:104120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.213 [2024-11-17 02:57:24.350002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.213 [2024-11-17 02:57:24.350029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 
nsid:1 lba:104128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.213 [2024-11-17 02:57:24.350053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.213 [2024-11-17 02:57:24.350080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:104136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.213 [2024-11-17 02:57:24.350112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.213 [2024-11-17 02:57:24.350141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:104144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.213 [2024-11-17 02:57:24.350180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.213 [2024-11-17 02:57:24.350204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:104152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.213 [2024-11-17 02:57:24.350226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.213 [2024-11-17 02:57:24.350249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:104160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.213 [2024-11-17 02:57:24.350271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.213 [2024-11-17 02:57:24.350294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:104168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.213 [2024-11-17 02:57:24.350317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:37:16.213 [2024-11-17 02:57:24.350341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:104176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.213 [2024-11-17 02:57:24.350363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.213 [2024-11-17 02:57:24.350403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:104184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.213 [2024-11-17 02:57:24.350428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.213 [2024-11-17 02:57:24.350461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:104192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.213 [2024-11-17 02:57:24.350486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.213 [2024-11-17 02:57:24.350513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:104200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.213 [2024-11-17 02:57:24.350536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.213 [2024-11-17 02:57:24.350562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:104208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.213 [2024-11-17 02:57:24.350586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.213 [2024-11-17 02:57:24.350612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:104216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.213 [2024-11-17 02:57:24.350641] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.213 [2024-11-17 02:57:24.350668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:104224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.213 [2024-11-17 02:57:24.350692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.213 [2024-11-17 02:57:24.350719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:104232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.213 [2024-11-17 02:57:24.350743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.213 [2024-11-17 02:57:24.350770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:104240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.213 [2024-11-17 02:57:24.350794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.213 [2024-11-17 02:57:24.350820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:104248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.213 [2024-11-17 02:57:24.350844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.213 [2024-11-17 02:57:24.350871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:104256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.213 [2024-11-17 02:57:24.350895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.213 [2024-11-17 02:57:24.350921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 
lba:104264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.213 [2024-11-17 02:57:24.350945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.213 [2024-11-17 02:57:24.350971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:104272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.213 [2024-11-17 02:57:24.350995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.213 [2024-11-17 02:57:24.351021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:104280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.213 [2024-11-17 02:57:24.351045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.213 [2024-11-17 02:57:24.351071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:104288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.213 [2024-11-17 02:57:24.351102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.213 [2024-11-17 02:57:24.351132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:104296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.213 [2024-11-17 02:57:24.351171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.213 [2024-11-17 02:57:24.351194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:104304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.213 [2024-11-17 02:57:24.351215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:37:16.213 [2024-11-17 02:57:24.351237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:104312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.213 [2024-11-17 02:57:24.351259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.213 [2024-11-17 02:57:24.351287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:104320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.213 [2024-11-17 02:57:24.351309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.213 [2024-11-17 02:57:24.351331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:104328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.213 [2024-11-17 02:57:24.351352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.213 [2024-11-17 02:57:24.351389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:104336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.213 [2024-11-17 02:57:24.351409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.213 [2024-11-17 02:57:24.351431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:104344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.213 [2024-11-17 02:57:24.351469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.213 [2024-11-17 02:57:24.351496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:104352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.213 [2024-11-17 02:57:24.351520] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.213 [2024-11-17 02:57:24.351546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:104360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.213 [2024-11-17 02:57:24.351569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.213 [2024-11-17 02:57:24.351595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:104368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.213 [2024-11-17 02:57:24.351621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.213 [2024-11-17 02:57:24.351647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:104376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.213 [2024-11-17 02:57:24.351672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.213 [2024-11-17 02:57:24.351698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:104384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.213 [2024-11-17 02:57:24.351722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.213 [2024-11-17 02:57:24.351749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:104392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.213 [2024-11-17 02:57:24.351773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.213 [2024-11-17 02:57:24.351800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 
lba:104400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.213 [2024-11-17 02:57:24.351824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.213 [2024-11-17 02:57:24.351850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:104408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.213 [2024-11-17 02:57:24.351874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.213 [2024-11-17 02:57:24.351900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:104416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.213 [2024-11-17 02:57:24.351929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.213 [2024-11-17 02:57:24.351957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:104424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.213 [2024-11-17 02:57:24.351981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.213 [2024-11-17 02:57:24.352007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:104432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.213 [2024-11-17 02:57:24.352032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.213 [2024-11-17 02:57:24.352056] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2f00 is same with the state(6) to be set 00:37:16.213 [2024-11-17 02:57:24.352102] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:16.213 [2024-11-17 02:57:24.352142] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:16.213 [2024-11-17 02:57:24.352163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:104440 len:8 PRP1 0x0 PRP2 0x0 00:37:16.213 [2024-11-17 02:57:24.352184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.213 [2024-11-17 02:57:24.352604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:37:16.213 [2024-11-17 02:57:24.352639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.214 [2024-11-17 02:57:24.352667] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:37:16.214 [2024-11-17 02:57:24.352691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.214 [2024-11-17 02:57:24.352714] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:37:16.214 [2024-11-17 02:57:24.352737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.214 [2024-11-17 02:57:24.352760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:37:16.214 [2024-11-17 02:57:24.352783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.214 [2024-11-17 02:57:24.352805] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.214 [2024-11-17 
02:57:24.357085] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.214 [2024-11-17 02:57:24.357172] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.214 [2024-11-17 02:57:24.358019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.214 [2024-11-17 02:57:24.358066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.214 [2024-11-17 02:57:24.358105] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.214 [2024-11-17 02:57:24.358410] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.214 [2024-11-17 02:57:24.358702] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.214 [2024-11-17 02:57:24.358752] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.214 [2024-11-17 02:57:24.358786] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.214 [2024-11-17 02:57:24.358815] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.214 [2024-11-17 02:57:24.371836] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.214 [2024-11-17 02:57:24.372324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.214 [2024-11-17 02:57:24.372367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.214 [2024-11-17 02:57:24.372393] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.214 [2024-11-17 02:57:24.372679] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.214 [2024-11-17 02:57:24.372967] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.214 [2024-11-17 02:57:24.372998] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.214 [2024-11-17 02:57:24.373020] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.214 [2024-11-17 02:57:24.373042] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.214 [2024-11-17 02:57:24.386338] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.214 [2024-11-17 02:57:24.386819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.214 [2024-11-17 02:57:24.386860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.214 [2024-11-17 02:57:24.386887] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.214 [2024-11-17 02:57:24.387187] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.214 [2024-11-17 02:57:24.387476] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.214 [2024-11-17 02:57:24.387507] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.214 [2024-11-17 02:57:24.387529] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.214 [2024-11-17 02:57:24.387551] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.214 [2024-11-17 02:57:24.400797] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.214 [2024-11-17 02:57:24.401234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.214 [2024-11-17 02:57:24.401276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.214 [2024-11-17 02:57:24.401302] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.214 [2024-11-17 02:57:24.401589] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.214 [2024-11-17 02:57:24.401879] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.214 [2024-11-17 02:57:24.401910] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.214 [2024-11-17 02:57:24.401933] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.214 [2024-11-17 02:57:24.401955] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.214 [2024-11-17 02:57:24.415201] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.214 [2024-11-17 02:57:24.415647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.214 [2024-11-17 02:57:24.415689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.214 [2024-11-17 02:57:24.415715] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.214 [2024-11-17 02:57:24.415997] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.214 [2024-11-17 02:57:24.416295] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.214 [2024-11-17 02:57:24.416328] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.214 [2024-11-17 02:57:24.416350] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.214 [2024-11-17 02:57:24.416372] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.214 [2024-11-17 02:57:24.429776] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.214 [2024-11-17 02:57:24.430244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.214 [2024-11-17 02:57:24.430296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.214 [2024-11-17 02:57:24.430320] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.214 [2024-11-17 02:57:24.430620] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.214 [2024-11-17 02:57:24.430907] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.214 [2024-11-17 02:57:24.430938] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.214 [2024-11-17 02:57:24.430961] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.214 [2024-11-17 02:57:24.430983] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.214 [2024-11-17 02:57:24.444131] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.214 [2024-11-17 02:57:24.444578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.214 [2024-11-17 02:57:24.444618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.214 [2024-11-17 02:57:24.444644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.214 [2024-11-17 02:57:24.444928] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.214 [2024-11-17 02:57:24.445226] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.214 [2024-11-17 02:57:24.445258] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.214 [2024-11-17 02:57:24.445280] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.214 [2024-11-17 02:57:24.445302] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.214 [2024-11-17 02:57:24.458671] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.214 [2024-11-17 02:57:24.459135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.214 [2024-11-17 02:57:24.459183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.214 [2024-11-17 02:57:24.459210] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.214 [2024-11-17 02:57:24.459495] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.214 [2024-11-17 02:57:24.459781] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.214 [2024-11-17 02:57:24.459812] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.214 [2024-11-17 02:57:24.459834] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.214 [2024-11-17 02:57:24.459856] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.214 [2024-11-17 02:57:24.473252] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.214 [2024-11-17 02:57:24.473675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.214 [2024-11-17 02:57:24.473715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.214 [2024-11-17 02:57:24.473741] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.214 [2024-11-17 02:57:24.474026] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.214 [2024-11-17 02:57:24.474322] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.214 [2024-11-17 02:57:24.474354] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.214 [2024-11-17 02:57:24.474377] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.214 [2024-11-17 02:57:24.474398] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.214 [2024-11-17 02:57:24.487784] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.214 [2024-11-17 02:57:24.488270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.214 [2024-11-17 02:57:24.488311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.214 [2024-11-17 02:57:24.488338] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.214 [2024-11-17 02:57:24.488620] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.214 [2024-11-17 02:57:24.488909] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.214 [2024-11-17 02:57:24.488940] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.214 [2024-11-17 02:57:24.488962] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.214 [2024-11-17 02:57:24.488984] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.214 [2024-11-17 02:57:24.502329] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.214 [2024-11-17 02:57:24.502781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.214 [2024-11-17 02:57:24.502823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.214 [2024-11-17 02:57:24.502849] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.214 [2024-11-17 02:57:24.503153] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.214 [2024-11-17 02:57:24.503439] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.214 [2024-11-17 02:57:24.503470] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.214 [2024-11-17 02:57:24.503492] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.214 [2024-11-17 02:57:24.503513] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.214 [2024-11-17 02:57:24.516864] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.214 [2024-11-17 02:57:24.517342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.214 [2024-11-17 02:57:24.517384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.214 [2024-11-17 02:57:24.517411] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.214 [2024-11-17 02:57:24.517695] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.214 [2024-11-17 02:57:24.518010] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.214 [2024-11-17 02:57:24.518042] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.214 [2024-11-17 02:57:24.518065] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.214 [2024-11-17 02:57:24.518087] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.214 [2024-11-17 02:57:24.531238] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.214 [2024-11-17 02:57:24.531780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.214 [2024-11-17 02:57:24.531833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.214 [2024-11-17 02:57:24.531857] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.214 [2024-11-17 02:57:24.532178] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.214 [2024-11-17 02:57:24.532463] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.214 [2024-11-17 02:57:24.532494] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.214 [2024-11-17 02:57:24.532517] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.214 [2024-11-17 02:57:24.532539] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.214 [2024-11-17 02:57:24.545681] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.214 [2024-11-17 02:57:24.546138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.214 [2024-11-17 02:57:24.546178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.214 [2024-11-17 02:57:24.546204] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.214 [2024-11-17 02:57:24.546487] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.214 [2024-11-17 02:57:24.546771] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.214 [2024-11-17 02:57:24.546807] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.214 [2024-11-17 02:57:24.546831] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.214 [2024-11-17 02:57:24.546853] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.214 [2024-11-17 02:57:24.560192] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.214 [2024-11-17 02:57:24.560614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.214 [2024-11-17 02:57:24.560654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.214 [2024-11-17 02:57:24.560680] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.214 [2024-11-17 02:57:24.560963] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.214 [2024-11-17 02:57:24.561259] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.214 [2024-11-17 02:57:24.561291] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.214 [2024-11-17 02:57:24.561314] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.214 [2024-11-17 02:57:24.561352] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.214 [2024-11-17 02:57:24.574679] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.214 [2024-11-17 02:57:24.575211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.215 [2024-11-17 02:57:24.575247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.215 [2024-11-17 02:57:24.575270] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.215 [2024-11-17 02:57:24.575570] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.215 [2024-11-17 02:57:24.575853] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.215 [2024-11-17 02:57:24.575884] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.215 [2024-11-17 02:57:24.575906] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.215 [2024-11-17 02:57:24.575927] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.215 [2024-11-17 02:57:24.589248] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.215 [2024-11-17 02:57:24.589716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.215 [2024-11-17 02:57:24.589766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.215 [2024-11-17 02:57:24.589791] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.215 [2024-11-17 02:57:24.590094] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.215 [2024-11-17 02:57:24.590392] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.215 [2024-11-17 02:57:24.590424] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.215 [2024-11-17 02:57:24.590446] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.215 [2024-11-17 02:57:24.590472] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.215 [2024-11-17 02:57:24.603577] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.215 [2024-11-17 02:57:24.604121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.215 [2024-11-17 02:57:24.604161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.215 [2024-11-17 02:57:24.604186] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.215 [2024-11-17 02:57:24.604482] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.215 [2024-11-17 02:57:24.604781] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.215 [2024-11-17 02:57:24.604815] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.215 [2024-11-17 02:57:24.604838] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.215 [2024-11-17 02:57:24.604861] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.215 [2024-11-17 02:57:24.618208] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.215 [2024-11-17 02:57:24.618694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.215 [2024-11-17 02:57:24.618738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.215 [2024-11-17 02:57:24.618764] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.215 [2024-11-17 02:57:24.619035] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.215 [2024-11-17 02:57:24.619354] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.215 [2024-11-17 02:57:24.619389] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.215 [2024-11-17 02:57:24.619413] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.215 [2024-11-17 02:57:24.619435] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.215 [2024-11-17 02:57:24.632781] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.215 [2024-11-17 02:57:24.633238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.215 [2024-11-17 02:57:24.633280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.215 [2024-11-17 02:57:24.633306] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.215 [2024-11-17 02:57:24.633588] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.215 [2024-11-17 02:57:24.633872] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.215 [2024-11-17 02:57:24.633904] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.215 [2024-11-17 02:57:24.633928] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.215 [2024-11-17 02:57:24.633949] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.215 [2024-11-17 02:57:24.647367] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.215 [2024-11-17 02:57:24.647893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.215 [2024-11-17 02:57:24.647935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.215 [2024-11-17 02:57:24.647963] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.215 [2024-11-17 02:57:24.648262] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.215 [2024-11-17 02:57:24.648548] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.215 [2024-11-17 02:57:24.648579] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.215 [2024-11-17 02:57:24.648602] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.215 [2024-11-17 02:57:24.648624] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.215 [2024-11-17 02:57:24.661798] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.215 [2024-11-17 02:57:24.662231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.215 [2024-11-17 02:57:24.662286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.215 [2024-11-17 02:57:24.662309] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.215 [2024-11-17 02:57:24.662644] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.215 [2024-11-17 02:57:24.662930] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.215 [2024-11-17 02:57:24.662962] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.215 [2024-11-17 02:57:24.662984] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.215 [2024-11-17 02:57:24.663005] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.474 [2024-11-17 02:57:24.676343] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:16.474 [2024-11-17 02:57:24.676763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.474 [2024-11-17 02:57:24.676804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:16.474 [2024-11-17 02:57:24.676830] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:16.474 [2024-11-17 02:57:24.677123] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:16.474 [2024-11-17 02:57:24.677409] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:16.474 [2024-11-17 02:57:24.677440] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:16.474 [2024-11-17 02:57:24.677463] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:16.474 [2024-11-17 02:57:24.677485] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:16.474 [2024-11-17 02:57:24.690846] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:16.474 [2024-11-17 02:57:24.691318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.474 [2024-11-17 02:57:24.691359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:16.474 [2024-11-17 02:57:24.691391] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:16.474 [2024-11-17 02:57:24.691674] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:16.474 [2024-11-17 02:57:24.691959] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:16.474 [2024-11-17 02:57:24.691990] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:16.474 [2024-11-17 02:57:24.692013] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:16.474 [2024-11-17 02:57:24.692035] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:16.474 [2024-11-17 02:57:24.705365] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:16.474 [2024-11-17 02:57:24.705815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.474 [2024-11-17 02:57:24.705851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:16.474 [2024-11-17 02:57:24.705875] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:16.474 [2024-11-17 02:57:24.706186] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:16.474 [2024-11-17 02:57:24.706458] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:16.474 [2024-11-17 02:57:24.706484] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:16.474 [2024-11-17 02:57:24.706503] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:16.475 [2024-11-17 02:57:24.706522] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:16.475 [2024-11-17 02:57:24.719900] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:16.475 [2024-11-17 02:57:24.720330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.475 [2024-11-17 02:57:24.720372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:16.475 [2024-11-17 02:57:24.720397] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:16.475 [2024-11-17 02:57:24.720680] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:16.475 [2024-11-17 02:57:24.720966] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:16.475 [2024-11-17 02:57:24.720997] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:16.475 [2024-11-17 02:57:24.721019] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:16.475 [2024-11-17 02:57:24.721040] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:16.475 [2024-11-17 02:57:24.734436] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:16.475 [2024-11-17 02:57:24.734857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.475 [2024-11-17 02:57:24.734898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:16.475 [2024-11-17 02:57:24.734924] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:16.475 [2024-11-17 02:57:24.735226] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:16.475 [2024-11-17 02:57:24.735519] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:16.475 [2024-11-17 02:57:24.735551] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:16.475 [2024-11-17 02:57:24.735573] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:16.475 [2024-11-17 02:57:24.735595] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:16.475 [2024-11-17 02:57:24.748962] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:16.475 [2024-11-17 02:57:24.749442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.475 [2024-11-17 02:57:24.749483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:16.475 [2024-11-17 02:57:24.749509] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:16.475 [2024-11-17 02:57:24.749791] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:16.475 [2024-11-17 02:57:24.750075] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:16.475 [2024-11-17 02:57:24.750118] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:16.475 [2024-11-17 02:57:24.750143] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:16.475 [2024-11-17 02:57:24.750164] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:16.475 [2024-11-17 02:57:24.763526] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:16.475 [2024-11-17 02:57:24.763993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.475 [2024-11-17 02:57:24.764034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:16.475 [2024-11-17 02:57:24.764060] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:16.475 [2024-11-17 02:57:24.764355] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:16.475 [2024-11-17 02:57:24.764640] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:16.475 [2024-11-17 02:57:24.764671] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:16.475 [2024-11-17 02:57:24.764694] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:16.475 [2024-11-17 02:57:24.764715] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:16.475 [2024-11-17 02:57:24.778047] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:16.475 [2024-11-17 02:57:24.778492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.475 [2024-11-17 02:57:24.778533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:16.475 [2024-11-17 02:57:24.778559] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:16.475 [2024-11-17 02:57:24.778843] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:16.475 [2024-11-17 02:57:24.779142] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:16.475 [2024-11-17 02:57:24.779174] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:16.475 [2024-11-17 02:57:24.779203] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:16.475 [2024-11-17 02:57:24.779226] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:16.475 [2024-11-17 02:57:24.792588] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:16.475 [2024-11-17 02:57:24.793021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.475 [2024-11-17 02:57:24.793073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:16.475 [2024-11-17 02:57:24.793104] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:16.475 [2024-11-17 02:57:24.793421] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:16.475 [2024-11-17 02:57:24.793708] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:16.475 [2024-11-17 02:57:24.793739] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:16.475 [2024-11-17 02:57:24.793761] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:16.475 [2024-11-17 02:57:24.793783] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:16.475 [2024-11-17 02:57:24.807047] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:16.475 [2024-11-17 02:57:24.807527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.475 [2024-11-17 02:57:24.807568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:16.475 [2024-11-17 02:57:24.807595] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:16.475 [2024-11-17 02:57:24.807877] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:16.475 [2024-11-17 02:57:24.808175] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:16.475 [2024-11-17 02:57:24.808207] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:16.475 [2024-11-17 02:57:24.808230] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:16.475 [2024-11-17 02:57:24.808252] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:16.475 [2024-11-17 02:57:24.821613] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:16.475 [2024-11-17 02:57:24.822068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.475 [2024-11-17 02:57:24.822117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:16.475 [2024-11-17 02:57:24.822145] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:16.475 [2024-11-17 02:57:24.822428] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:16.475 [2024-11-17 02:57:24.822714] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:16.475 [2024-11-17 02:57:24.822745] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:16.475 [2024-11-17 02:57:24.822768] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:16.475 [2024-11-17 02:57:24.822796] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:16.475 [2024-11-17 02:57:24.836153] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:16.475 [2024-11-17 02:57:24.836593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.475 [2024-11-17 02:57:24.836634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:16.475 [2024-11-17 02:57:24.836660] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:16.475 [2024-11-17 02:57:24.836944] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:16.475 [2024-11-17 02:57:24.837241] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:16.475 [2024-11-17 02:57:24.837273] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:16.475 [2024-11-17 02:57:24.837296] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:16.475 [2024-11-17 02:57:24.837317] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:16.475 [2024-11-17 02:57:24.850646] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:16.475 [2024-11-17 02:57:24.851111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.475 [2024-11-17 02:57:24.851168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:16.475 [2024-11-17 02:57:24.851194] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:16.475 [2024-11-17 02:57:24.851475] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:16.475 [2024-11-17 02:57:24.851761] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:16.476 [2024-11-17 02:57:24.851792] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:16.476 [2024-11-17 02:57:24.851814] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:16.476 [2024-11-17 02:57:24.851835] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:16.476 [2024-11-17 02:57:24.865189] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:16.476 [2024-11-17 02:57:24.865681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.476 [2024-11-17 02:57:24.865722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:16.476 [2024-11-17 02:57:24.865749] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:16.476 [2024-11-17 02:57:24.866046] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:16.476 [2024-11-17 02:57:24.866394] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:16.476 [2024-11-17 02:57:24.866429] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:16.476 [2024-11-17 02:57:24.866452] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:16.476 [2024-11-17 02:57:24.866474] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:16.476 [2024-11-17 02:57:24.879646] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:16.476 [2024-11-17 02:57:24.880133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.476 [2024-11-17 02:57:24.880173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:16.476 [2024-11-17 02:57:24.880197] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:16.476 [2024-11-17 02:57:24.880492] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:16.476 [2024-11-17 02:57:24.880778] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:16.476 [2024-11-17 02:57:24.880809] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:16.476 [2024-11-17 02:57:24.880832] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:16.476 [2024-11-17 02:57:24.880854] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:16.476 [2024-11-17 02:57:24.894046] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:16.476 [2024-11-17 02:57:24.894502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.476 [2024-11-17 02:57:24.894561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:16.476 [2024-11-17 02:57:24.894588] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:16.476 [2024-11-17 02:57:24.894871] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:16.476 [2024-11-17 02:57:24.895173] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:16.476 [2024-11-17 02:57:24.895205] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:16.476 [2024-11-17 02:57:24.895228] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:16.476 [2024-11-17 02:57:24.895250] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:16.476 [2024-11-17 02:57:24.908585] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:16.476 [2024-11-17 02:57:24.909018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.476 [2024-11-17 02:57:24.909058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:16.476 [2024-11-17 02:57:24.909085] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:16.476 [2024-11-17 02:57:24.909378] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:16.476 [2024-11-17 02:57:24.909663] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:16.476 [2024-11-17 02:57:24.909694] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:16.476 [2024-11-17 02:57:24.909717] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:16.476 [2024-11-17 02:57:24.909738] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:16.476 [2024-11-17 02:57:24.923092] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:16.476 [2024-11-17 02:57:24.923573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.476 [2024-11-17 02:57:24.923608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:16.476 [2024-11-17 02:57:24.923652] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:16.476 [2024-11-17 02:57:24.923943] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:16.476 [2024-11-17 02:57:24.924241] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:16.476 [2024-11-17 02:57:24.924274] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:16.476 [2024-11-17 02:57:24.924296] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:16.476 [2024-11-17 02:57:24.924318] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:16.735 [2024-11-17 02:57:24.937628] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:16.735 [2024-11-17 02:57:24.938102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.735 [2024-11-17 02:57:24.938140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:16.735 [2024-11-17 02:57:24.938163] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:16.735 [2024-11-17 02:57:24.938436] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:16.735 [2024-11-17 02:57:24.938721] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:16.735 [2024-11-17 02:57:24.938752] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:16.735 [2024-11-17 02:57:24.938775] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:16.735 [2024-11-17 02:57:24.938797] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:16.735 [2024-11-17 02:57:24.952142] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:16.735 [2024-11-17 02:57:24.952557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.735 [2024-11-17 02:57:24.952609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:16.735 [2024-11-17 02:57:24.952631] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:16.735 [2024-11-17 02:57:24.952881] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:16.735 [2024-11-17 02:57:24.953195] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:16.735 [2024-11-17 02:57:24.953227] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:16.735 [2024-11-17 02:57:24.953250] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:16.735 [2024-11-17 02:57:24.953271] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:16.735 [2024-11-17 02:57:24.966602] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:16.735 [2024-11-17 02:57:24.967024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.735 [2024-11-17 02:57:24.967075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:16.735 [2024-11-17 02:57:24.967113] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:16.735 [2024-11-17 02:57:24.967398] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:16.735 [2024-11-17 02:57:24.967689] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:16.735 [2024-11-17 02:57:24.967720] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:16.735 [2024-11-17 02:57:24.967742] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:16.735 [2024-11-17 02:57:24.967764] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:16.735 [2024-11-17 02:57:24.981179] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:16.735 [2024-11-17 02:57:24.981633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.735 [2024-11-17 02:57:24.981674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:16.735 [2024-11-17 02:57:24.981715] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:16.735 [2024-11-17 02:57:24.982000] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:16.735 [2024-11-17 02:57:24.982296] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:16.735 [2024-11-17 02:57:24.982328] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:16.735 [2024-11-17 02:57:24.982351] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:16.735 [2024-11-17 02:57:24.982372] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:16.735 [2024-11-17 02:57:24.995725] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:16.735 [2024-11-17 02:57:24.996186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.735 [2024-11-17 02:57:24.996228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:16.735 [2024-11-17 02:57:24.996254] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:16.735 [2024-11-17 02:57:24.996538] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:16.735 [2024-11-17 02:57:24.996822] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:16.735 [2024-11-17 02:57:24.996853] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:16.735 [2024-11-17 02:57:24.996875] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:16.735 [2024-11-17 02:57:24.996897] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:16.735 [2024-11-17 02:57:25.010243] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:16.736 [2024-11-17 02:57:25.010717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.736 [2024-11-17 02:57:25.010768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:16.736 [2024-11-17 02:57:25.010792] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:16.736 [2024-11-17 02:57:25.011086] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:16.736 [2024-11-17 02:57:25.011383] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:16.736 [2024-11-17 02:57:25.011414] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:16.736 [2024-11-17 02:57:25.011443] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:16.736 [2024-11-17 02:57:25.011466] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:16.736 [2024-11-17 02:57:25.024650] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:16.736 [2024-11-17 02:57:25.025103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.736 [2024-11-17 02:57:25.025145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:16.736 [2024-11-17 02:57:25.025171] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:16.736 [2024-11-17 02:57:25.025454] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:16.736 [2024-11-17 02:57:25.025739] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:16.736 [2024-11-17 02:57:25.025770] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:16.736 [2024-11-17 02:57:25.025793] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:16.736 [2024-11-17 02:57:25.025815] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:16.736 [2024-11-17 02:57:25.039165] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:16.736 [2024-11-17 02:57:25.039637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.736 [2024-11-17 02:57:25.039677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:16.736 [2024-11-17 02:57:25.039704] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:16.736 [2024-11-17 02:57:25.039986] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:16.736 [2024-11-17 02:57:25.040285] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:16.736 [2024-11-17 02:57:25.040317] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:16.736 [2024-11-17 02:57:25.040340] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:16.736 [2024-11-17 02:57:25.040362] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:16.736 [2024-11-17 02:57:25.053700] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:16.736 [2024-11-17 02:57:25.054150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.736 [2024-11-17 02:57:25.054192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:16.736 [2024-11-17 02:57:25.054218] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:16.736 [2024-11-17 02:57:25.054503] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:16.736 [2024-11-17 02:57:25.054788] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:16.736 [2024-11-17 02:57:25.054819] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:16.736 [2024-11-17 02:57:25.054841] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:16.736 [2024-11-17 02:57:25.054863] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:16.736 [2024-11-17 02:57:25.068255] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.736 [2024-11-17 02:57:25.068727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.736 [2024-11-17 02:57:25.068768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.736 [2024-11-17 02:57:25.068794] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.736 [2024-11-17 02:57:25.069077] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.736 [2024-11-17 02:57:25.069372] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.736 [2024-11-17 02:57:25.069403] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.736 [2024-11-17 02:57:25.069425] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.736 [2024-11-17 02:57:25.069446] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.736 [2024-11-17 02:57:25.082791] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.736 [2024-11-17 02:57:25.083250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.736 [2024-11-17 02:57:25.083290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.736 [2024-11-17 02:57:25.083316] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.736 [2024-11-17 02:57:25.083598] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.736 [2024-11-17 02:57:25.083882] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.736 [2024-11-17 02:57:25.083913] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.736 [2024-11-17 02:57:25.083936] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.736 [2024-11-17 02:57:25.083958] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.736 [2024-11-17 02:57:25.097323] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.736 [2024-11-17 02:57:25.097770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.736 [2024-11-17 02:57:25.097810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.736 [2024-11-17 02:57:25.097837] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.736 [2024-11-17 02:57:25.098136] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.736 [2024-11-17 02:57:25.098420] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.736 [2024-11-17 02:57:25.098451] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.736 [2024-11-17 02:57:25.098473] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.736 [2024-11-17 02:57:25.098495] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.736 [2024-11-17 02:57:25.111877] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.736 [2024-11-17 02:57:25.112329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.736 [2024-11-17 02:57:25.112375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.736 [2024-11-17 02:57:25.112403] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.736 [2024-11-17 02:57:25.112686] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.736 [2024-11-17 02:57:25.112971] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.736 [2024-11-17 02:57:25.113002] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.736 [2024-11-17 02:57:25.113024] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.736 [2024-11-17 02:57:25.113045] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.736 [2024-11-17 02:57:25.126453] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.736 [2024-11-17 02:57:25.126953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.736 [2024-11-17 02:57:25.126995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.736 [2024-11-17 02:57:25.127022] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.736 [2024-11-17 02:57:25.127363] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.736 [2024-11-17 02:57:25.127662] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.736 [2024-11-17 02:57:25.127695] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.736 [2024-11-17 02:57:25.127719] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.736 [2024-11-17 02:57:25.127741] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.736 [2024-11-17 02:57:25.140924] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.736 [2024-11-17 02:57:25.141378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.736 [2024-11-17 02:57:25.141422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.736 [2024-11-17 02:57:25.141449] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.736 [2024-11-17 02:57:25.141733] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.736 [2024-11-17 02:57:25.142018] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.736 [2024-11-17 02:57:25.142049] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.736 [2024-11-17 02:57:25.142072] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.737 [2024-11-17 02:57:25.142094] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.737 [2024-11-17 02:57:25.155498] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.737 [2024-11-17 02:57:25.155934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.737 [2024-11-17 02:57:25.155987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.737 [2024-11-17 02:57:25.156010] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.737 [2024-11-17 02:57:25.156345] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.737 [2024-11-17 02:57:25.156628] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.737 [2024-11-17 02:57:25.156659] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.737 [2024-11-17 02:57:25.156683] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.737 [2024-11-17 02:57:25.156704] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.737 [2024-11-17 02:57:25.170009] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.737 [2024-11-17 02:57:25.170422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.737 [2024-11-17 02:57:25.170462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.737 [2024-11-17 02:57:25.170489] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.737 [2024-11-17 02:57:25.170771] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.737 [2024-11-17 02:57:25.171056] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.737 [2024-11-17 02:57:25.171087] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.737 [2024-11-17 02:57:25.171123] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.737 [2024-11-17 02:57:25.171147] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.737 [2024-11-17 02:57:25.184529] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.737 [2024-11-17 02:57:25.184986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.737 [2024-11-17 02:57:25.185028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.737 [2024-11-17 02:57:25.185055] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.737 [2024-11-17 02:57:25.185348] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.737 [2024-11-17 02:57:25.185646] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.737 [2024-11-17 02:57:25.185677] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.737 [2024-11-17 02:57:25.185700] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.737 [2024-11-17 02:57:25.185721] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.996 [2024-11-17 02:57:25.198875] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.996 [2024-11-17 02:57:25.199339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.996 [2024-11-17 02:57:25.199380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.996 [2024-11-17 02:57:25.199407] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.996 [2024-11-17 02:57:25.199688] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.996 [2024-11-17 02:57:25.199974] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.996 [2024-11-17 02:57:25.200011] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.996 [2024-11-17 02:57:25.200036] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.996 [2024-11-17 02:57:25.200057] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.996 [2024-11-17 02:57:25.213482] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.996 [2024-11-17 02:57:25.213938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.996 [2024-11-17 02:57:25.213979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.996 [2024-11-17 02:57:25.214005] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.996 [2024-11-17 02:57:25.214298] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.996 [2024-11-17 02:57:25.214583] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.996 [2024-11-17 02:57:25.214614] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.996 [2024-11-17 02:57:25.214636] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.996 [2024-11-17 02:57:25.214658] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.996 [2024-11-17 02:57:25.228038] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.996 [2024-11-17 02:57:25.228498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.996 [2024-11-17 02:57:25.228538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.996 [2024-11-17 02:57:25.228564] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.996 [2024-11-17 02:57:25.228846] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.996 [2024-11-17 02:57:25.229145] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.996 [2024-11-17 02:57:25.229178] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.996 [2024-11-17 02:57:25.229200] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.996 [2024-11-17 02:57:25.229222] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.996 [2024-11-17 02:57:25.242590] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.996 [2024-11-17 02:57:25.243036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.996 [2024-11-17 02:57:25.243076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.996 [2024-11-17 02:57:25.243115] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.996 [2024-11-17 02:57:25.243401] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.996 [2024-11-17 02:57:25.243686] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.997 [2024-11-17 02:57:25.243717] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.997 [2024-11-17 02:57:25.243740] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.997 [2024-11-17 02:57:25.243768] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.997 [2024-11-17 02:57:25.256960] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.997 [2024-11-17 02:57:25.257402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.997 [2024-11-17 02:57:25.257480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.997 [2024-11-17 02:57:25.257508] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.997 [2024-11-17 02:57:25.257791] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.997 [2024-11-17 02:57:25.258076] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.997 [2024-11-17 02:57:25.258123] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.997 [2024-11-17 02:57:25.258147] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.997 [2024-11-17 02:57:25.258169] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.997 4336.00 IOPS, 16.94 MiB/s [2024-11-17T01:57:25.457Z] [2024-11-17 02:57:25.271472] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.997 [2024-11-17 02:57:25.271941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.997 [2024-11-17 02:57:25.271984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.997 [2024-11-17 02:57:25.272010] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.997 [2024-11-17 02:57:25.272306] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.997 [2024-11-17 02:57:25.272590] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.997 [2024-11-17 02:57:25.272621] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.997 [2024-11-17 02:57:25.272643] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.997 [2024-11-17 02:57:25.272665] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.997 [2024-11-17 02:57:25.286031] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.997 [2024-11-17 02:57:25.286543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.997 [2024-11-17 02:57:25.286592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.997 [2024-11-17 02:57:25.286616] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.997 [2024-11-17 02:57:25.286903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.997 [2024-11-17 02:57:25.287203] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.997 [2024-11-17 02:57:25.287235] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.997 [2024-11-17 02:57:25.287257] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.997 [2024-11-17 02:57:25.287279] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.997 [2024-11-17 02:57:25.300432] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.997 [2024-11-17 02:57:25.300892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.997 [2024-11-17 02:57:25.300933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.997 [2024-11-17 02:57:25.300959] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.997 [2024-11-17 02:57:25.301252] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.997 [2024-11-17 02:57:25.301538] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.997 [2024-11-17 02:57:25.301569] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.997 [2024-11-17 02:57:25.301592] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.997 [2024-11-17 02:57:25.301614] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.997 [2024-11-17 02:57:25.314989] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.997 [2024-11-17 02:57:25.315460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.997 [2024-11-17 02:57:25.315501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.997 [2024-11-17 02:57:25.315527] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.997 [2024-11-17 02:57:25.315809] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.997 [2024-11-17 02:57:25.316093] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.997 [2024-11-17 02:57:25.316137] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.997 [2024-11-17 02:57:25.316160] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.997 [2024-11-17 02:57:25.316182] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.997 [2024-11-17 02:57:25.329354] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.997 [2024-11-17 02:57:25.329887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.997 [2024-11-17 02:57:25.329946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.997 [2024-11-17 02:57:25.329973] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.997 [2024-11-17 02:57:25.330267] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.997 [2024-11-17 02:57:25.330553] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.997 [2024-11-17 02:57:25.330584] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.997 [2024-11-17 02:57:25.330607] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.997 [2024-11-17 02:57:25.330628] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.997 [2024-11-17 02:57:25.343754] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.997 [2024-11-17 02:57:25.344227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.997 [2024-11-17 02:57:25.344274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.997 [2024-11-17 02:57:25.344309] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.997 [2024-11-17 02:57:25.344594] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.997 [2024-11-17 02:57:25.344878] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.997 [2024-11-17 02:57:25.344910] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.997 [2024-11-17 02:57:25.344932] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.997 [2024-11-17 02:57:25.344953] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.997 [2024-11-17 02:57:25.358305] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.997 [2024-11-17 02:57:25.358780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.997 [2024-11-17 02:57:25.358820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.997 [2024-11-17 02:57:25.358847] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.997 [2024-11-17 02:57:25.359152] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.997 [2024-11-17 02:57:25.359437] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.997 [2024-11-17 02:57:25.359468] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.997 [2024-11-17 02:57:25.359491] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.997 [2024-11-17 02:57:25.359512] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.997 [2024-11-17 02:57:25.372860] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.997 [2024-11-17 02:57:25.373417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.997 [2024-11-17 02:57:25.373474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.997 [2024-11-17 02:57:25.373500] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.997 [2024-11-17 02:57:25.373780] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.997 [2024-11-17 02:57:25.374065] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.997 [2024-11-17 02:57:25.374107] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.997 [2024-11-17 02:57:25.374133] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.997 [2024-11-17 02:57:25.374155] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.997 [2024-11-17 02:57:25.387283] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.997 [2024-11-17 02:57:25.387753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.998 [2024-11-17 02:57:25.387795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.998 [2024-11-17 02:57:25.387823] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.998 [2024-11-17 02:57:25.388160] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.998 [2024-11-17 02:57:25.388462] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.998 [2024-11-17 02:57:25.388496] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.998 [2024-11-17 02:57:25.388563] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.998 [2024-11-17 02:57:25.388589] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.998 [2024-11-17 02:57:25.401793] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.998 [2024-11-17 02:57:25.402238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.998 [2024-11-17 02:57:25.402281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.998 [2024-11-17 02:57:25.402308] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.998 [2024-11-17 02:57:25.402592] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.998 [2024-11-17 02:57:25.402876] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.998 [2024-11-17 02:57:25.402907] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.998 [2024-11-17 02:57:25.402929] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.998 [2024-11-17 02:57:25.402951] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.998 [2024-11-17 02:57:25.416386] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.998 [2024-11-17 02:57:25.416859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.998 [2024-11-17 02:57:25.416901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.998 [2024-11-17 02:57:25.416928] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.998 [2024-11-17 02:57:25.417225] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.998 [2024-11-17 02:57:25.417511] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.998 [2024-11-17 02:57:25.417542] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.998 [2024-11-17 02:57:25.417564] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.998 [2024-11-17 02:57:25.417585] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.998 [2024-11-17 02:57:25.430739] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.998 [2024-11-17 02:57:25.431180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.998 [2024-11-17 02:57:25.431229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.998 [2024-11-17 02:57:25.431256] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.998 [2024-11-17 02:57:25.431540] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.998 [2024-11-17 02:57:25.431825] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.998 [2024-11-17 02:57:25.431862] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.998 [2024-11-17 02:57:25.431886] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.998 [2024-11-17 02:57:25.431907] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.998 [2024-11-17 02:57:25.445268] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.998 [2024-11-17 02:57:25.445710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.998 [2024-11-17 02:57:25.445752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.998 [2024-11-17 02:57:25.445778] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.998 [2024-11-17 02:57:25.446061] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.998 [2024-11-17 02:57:25.446356] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.998 [2024-11-17 02:57:25.446387] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.998 [2024-11-17 02:57:25.446410] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.998 [2024-11-17 02:57:25.446432] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:17.257 [2024-11-17 02:57:25.459803] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.257 [2024-11-17 02:57:25.460261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.257 [2024-11-17 02:57:25.460303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.257 [2024-11-17 02:57:25.460329] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.257 [2024-11-17 02:57:25.460612] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.257 [2024-11-17 02:57:25.460897] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.257 [2024-11-17 02:57:25.460928] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.257 [2024-11-17 02:57:25.460950] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.257 [2024-11-17 02:57:25.460973] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:17.257 [2024-11-17 02:57:25.474339] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.257 [2024-11-17 02:57:25.474785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.257 [2024-11-17 02:57:25.474825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.257 [2024-11-17 02:57:25.474851] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.257 [2024-11-17 02:57:25.475148] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.257 [2024-11-17 02:57:25.475431] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.257 [2024-11-17 02:57:25.475463] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.257 [2024-11-17 02:57:25.475485] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.257 [2024-11-17 02:57:25.475512] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:17.257 [2024-11-17 02:57:25.488876] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.257 [2024-11-17 02:57:25.489332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.257 [2024-11-17 02:57:25.489373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.257 [2024-11-17 02:57:25.489399] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.258 [2024-11-17 02:57:25.489679] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.258 [2024-11-17 02:57:25.489965] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.258 [2024-11-17 02:57:25.489995] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.258 [2024-11-17 02:57:25.490017] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.258 [2024-11-17 02:57:25.490039] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:17.258 [2024-11-17 02:57:25.503420] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.258 [2024-11-17 02:57:25.503882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.258 [2024-11-17 02:57:25.503923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.258 [2024-11-17 02:57:25.503948] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.258 [2024-11-17 02:57:25.504245] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.258 [2024-11-17 02:57:25.504529] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.258 [2024-11-17 02:57:25.504560] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.258 [2024-11-17 02:57:25.504583] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.258 [2024-11-17 02:57:25.504604] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:17.258 [2024-11-17 02:57:25.517943] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.258 [2024-11-17 02:57:25.518373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.258 [2024-11-17 02:57:25.518414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.258 [2024-11-17 02:57:25.518439] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.258 [2024-11-17 02:57:25.518720] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.258 [2024-11-17 02:57:25.519004] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.258 [2024-11-17 02:57:25.519035] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.258 [2024-11-17 02:57:25.519057] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.258 [2024-11-17 02:57:25.519079] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:17.258 [2024-11-17 02:57:25.532478] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.258 [2024-11-17 02:57:25.532958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.258 [2024-11-17 02:57:25.532999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.258 [2024-11-17 02:57:25.533025] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.258 [2024-11-17 02:57:25.533325] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.258 [2024-11-17 02:57:25.533611] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.258 [2024-11-17 02:57:25.533641] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.258 [2024-11-17 02:57:25.533664] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.258 [2024-11-17 02:57:25.533685] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:17.258 [2024-11-17 02:57:25.547015] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.258 [2024-11-17 02:57:25.547554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.258 [2024-11-17 02:57:25.547613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.258 [2024-11-17 02:57:25.547639] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.258 [2024-11-17 02:57:25.547921] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.258 [2024-11-17 02:57:25.548219] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.258 [2024-11-17 02:57:25.548251] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.258 [2024-11-17 02:57:25.548274] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.258 [2024-11-17 02:57:25.548295] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:17.258 [2024-11-17 02:57:25.561434] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.258 [2024-11-17 02:57:25.561860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.258 [2024-11-17 02:57:25.561901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.258 [2024-11-17 02:57:25.561927] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.258 [2024-11-17 02:57:25.562225] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.258 [2024-11-17 02:57:25.562511] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.258 [2024-11-17 02:57:25.562542] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.258 [2024-11-17 02:57:25.562565] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.258 [2024-11-17 02:57:25.562586] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:17.258 [2024-11-17 02:57:25.575960] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.258 [2024-11-17 02:57:25.576429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.258 [2024-11-17 02:57:25.576469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.258 [2024-11-17 02:57:25.576505] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.258 [2024-11-17 02:57:25.576788] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.258 [2024-11-17 02:57:25.577073] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.258 [2024-11-17 02:57:25.577114] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.258 [2024-11-17 02:57:25.577139] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.258 [2024-11-17 02:57:25.577161] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:17.258 [2024-11-17 02:57:25.590546] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.258 [2024-11-17 02:57:25.591077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.258 [2024-11-17 02:57:25.591142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.258 [2024-11-17 02:57:25.591169] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.258 [2024-11-17 02:57:25.591453] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.258 [2024-11-17 02:57:25.591737] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.258 [2024-11-17 02:57:25.591768] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.258 [2024-11-17 02:57:25.591790] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.258 [2024-11-17 02:57:25.591812] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:17.258 [2024-11-17 02:57:25.604939] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.258 [2024-11-17 02:57:25.605472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.258 [2024-11-17 02:57:25.605531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.258 [2024-11-17 02:57:25.605557] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.258 [2024-11-17 02:57:25.605841] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.258 [2024-11-17 02:57:25.606138] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.258 [2024-11-17 02:57:25.606170] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.258 [2024-11-17 02:57:25.606193] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.258 [2024-11-17 02:57:25.606214] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:17.258 [2024-11-17 02:57:25.619320] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.258 [2024-11-17 02:57:25.619788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.258 [2024-11-17 02:57:25.619828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.258 [2024-11-17 02:57:25.619854] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.258 [2024-11-17 02:57:25.620153] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.258 [2024-11-17 02:57:25.620445] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.258 [2024-11-17 02:57:25.620476] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.258 [2024-11-17 02:57:25.620499] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.258 [2024-11-17 02:57:25.620521] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:17.258 [2024-11-17 02:57:25.633688] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.258 [2024-11-17 02:57:25.634178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.259 [2024-11-17 02:57:25.634220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.259 [2024-11-17 02:57:25.634247] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.259 [2024-11-17 02:57:25.634530] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.259 [2024-11-17 02:57:25.634814] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.259 [2024-11-17 02:57:25.634845] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.259 [2024-11-17 02:57:25.634867] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.259 [2024-11-17 02:57:25.634889] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:17.259 [2024-11-17 02:57:25.648257] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.259 [2024-11-17 02:57:25.648757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.259 [2024-11-17 02:57:25.648801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.259 [2024-11-17 02:57:25.648827] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.259 [2024-11-17 02:57:25.649150] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.259 [2024-11-17 02:57:25.649447] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.259 [2024-11-17 02:57:25.649482] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.259 [2024-11-17 02:57:25.649506] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.259 [2024-11-17 02:57:25.649530] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:17.259 [2024-11-17 02:57:25.662765] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.259 [2024-11-17 02:57:25.663228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.259 [2024-11-17 02:57:25.663270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.259 [2024-11-17 02:57:25.663297] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.259 [2024-11-17 02:57:25.663580] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.259 [2024-11-17 02:57:25.663864] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.259 [2024-11-17 02:57:25.663895] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.259 [2024-11-17 02:57:25.663925] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.259 [2024-11-17 02:57:25.663948] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:17.259 [2024-11-17 02:57:25.677135] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.259 [2024-11-17 02:57:25.677595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.259 [2024-11-17 02:57:25.677636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.259 [2024-11-17 02:57:25.677663] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.259 [2024-11-17 02:57:25.677946] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.259 [2024-11-17 02:57:25.678249] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.259 [2024-11-17 02:57:25.678282] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.259 [2024-11-17 02:57:25.678304] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.259 [2024-11-17 02:57:25.678326] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:17.259 [2024-11-17 02:57:25.691654] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.259 [2024-11-17 02:57:25.692165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.259 [2024-11-17 02:57:25.692207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.259 [2024-11-17 02:57:25.692233] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.259 [2024-11-17 02:57:25.692516] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.259 [2024-11-17 02:57:25.692802] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.259 [2024-11-17 02:57:25.692833] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.259 [2024-11-17 02:57:25.692855] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.259 [2024-11-17 02:57:25.692877] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:17.259 [2024-11-17 02:57:25.706227] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.259 [2024-11-17 02:57:25.706673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.259 [2024-11-17 02:57:25.706713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.259 [2024-11-17 02:57:25.706739] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.259 [2024-11-17 02:57:25.707021] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.259 [2024-11-17 02:57:25.707318] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.259 [2024-11-17 02:57:25.707350] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.259 [2024-11-17 02:57:25.707373] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.259 [2024-11-17 02:57:25.707394] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:17.519 [2024-11-17 02:57:25.720759] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.519 [2024-11-17 02:57:25.721203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.519 [2024-11-17 02:57:25.721245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.519 [2024-11-17 02:57:25.721271] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.519 [2024-11-17 02:57:25.721553] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.519 [2024-11-17 02:57:25.721837] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.519 [2024-11-17 02:57:25.721868] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.519 [2024-11-17 02:57:25.721891] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.519 [2024-11-17 02:57:25.721913] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:17.519 [2024-11-17 02:57:25.735323] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.519 [2024-11-17 02:57:25.735823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.519 [2024-11-17 02:57:25.735864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.519 [2024-11-17 02:57:25.735891] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.519 [2024-11-17 02:57:25.736190] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.519 [2024-11-17 02:57:25.736475] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.519 [2024-11-17 02:57:25.736506] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.519 [2024-11-17 02:57:25.736529] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.519 [2024-11-17 02:57:25.736552] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:17.519 [2024-11-17 02:57:25.749677] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.519 [2024-11-17 02:57:25.750128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.519 [2024-11-17 02:57:25.750172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.519 [2024-11-17 02:57:25.750198] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.519 [2024-11-17 02:57:25.750483] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.519 [2024-11-17 02:57:25.750765] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.519 [2024-11-17 02:57:25.750796] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.519 [2024-11-17 02:57:25.750819] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.519 [2024-11-17 02:57:25.750840] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:17.519 [2024-11-17 02:57:25.764247] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.519 [2024-11-17 02:57:25.764686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.519 [2024-11-17 02:57:25.764734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.519 [2024-11-17 02:57:25.764762] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.519 [2024-11-17 02:57:25.765046] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.519 [2024-11-17 02:57:25.765342] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.519 [2024-11-17 02:57:25.765374] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.519 [2024-11-17 02:57:25.765397] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.519 [2024-11-17 02:57:25.765419] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:17.519 [2024-11-17 02:57:25.778773] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.519 [2024-11-17 02:57:25.779240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.519 [2024-11-17 02:57:25.779281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.519 [2024-11-17 02:57:25.779307] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.519 [2024-11-17 02:57:25.779590] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.519 [2024-11-17 02:57:25.779876] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.519 [2024-11-17 02:57:25.779906] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.519 [2024-11-17 02:57:25.779929] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.519 [2024-11-17 02:57:25.779951] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:17.519 [2024-11-17 02:57:25.793336] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.519 [2024-11-17 02:57:25.793796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.519 [2024-11-17 02:57:25.793837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.519 [2024-11-17 02:57:25.793863] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.519 [2024-11-17 02:57:25.794158] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.519 [2024-11-17 02:57:25.794443] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.519 [2024-11-17 02:57:25.794473] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.519 [2024-11-17 02:57:25.794495] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.519 [2024-11-17 02:57:25.794517] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:17.519 [2024-11-17 02:57:25.807878] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.519 [2024-11-17 02:57:25.808358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.519 [2024-11-17 02:57:25.808411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.519 [2024-11-17 02:57:25.808438] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.519 [2024-11-17 02:57:25.808732] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.519 [2024-11-17 02:57:25.809018] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.519 [2024-11-17 02:57:25.809049] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.519 [2024-11-17 02:57:25.809071] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.519 [2024-11-17 02:57:25.809093] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:17.519 [2024-11-17 02:57:25.822481] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.519 [2024-11-17 02:57:25.823000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.519 [2024-11-17 02:57:25.823041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.519 [2024-11-17 02:57:25.823068] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.519 [2024-11-17 02:57:25.823361] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.519 [2024-11-17 02:57:25.823647] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.519 [2024-11-17 02:57:25.823678] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.519 [2024-11-17 02:57:25.823701] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.519 [2024-11-17 02:57:25.823722] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:17.519 [2024-11-17 02:57:25.836837] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.519 [2024-11-17 02:57:25.837272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.519 [2024-11-17 02:57:25.837315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.519 [2024-11-17 02:57:25.837341] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.519 [2024-11-17 02:57:25.837624] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.519 [2024-11-17 02:57:25.837909] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.519 [2024-11-17 02:57:25.837941] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.519 [2024-11-17 02:57:25.837963] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.519 [2024-11-17 02:57:25.837985] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:17.519 [2024-11-17 02:57:25.851303] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.519 [2024-11-17 02:57:25.851739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.519 [2024-11-17 02:57:25.851780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.519 [2024-11-17 02:57:25.851807] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.519 [2024-11-17 02:57:25.852090] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.520 [2024-11-17 02:57:25.852393] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.520 [2024-11-17 02:57:25.852424] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.520 [2024-11-17 02:57:25.852447] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.520 [2024-11-17 02:57:25.852469] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:17.520 [2024-11-17 02:57:25.865848] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.520 [2024-11-17 02:57:25.866308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.520 [2024-11-17 02:57:25.866350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.520 [2024-11-17 02:57:25.866377] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.520 [2024-11-17 02:57:25.866659] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.520 [2024-11-17 02:57:25.866943] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.520 [2024-11-17 02:57:25.866973] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.520 [2024-11-17 02:57:25.866996] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.520 [2024-11-17 02:57:25.867018] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:17.520 [2024-11-17 02:57:25.880378] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.520 [2024-11-17 02:57:25.880836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.520 [2024-11-17 02:57:25.880878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.520 [2024-11-17 02:57:25.880905] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.520 [2024-11-17 02:57:25.881200] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.520 [2024-11-17 02:57:25.881484] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.520 [2024-11-17 02:57:25.881515] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.520 [2024-11-17 02:57:25.881538] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.520 [2024-11-17 02:57:25.881560] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:17.520 [2024-11-17 02:57:25.894932] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.520 [2024-11-17 02:57:25.895403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.520 [2024-11-17 02:57:25.895444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.520 [2024-11-17 02:57:25.895470] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.520 [2024-11-17 02:57:25.895752] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.520 [2024-11-17 02:57:25.896038] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.520 [2024-11-17 02:57:25.896069] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.520 [2024-11-17 02:57:25.896111] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.520 [2024-11-17 02:57:25.896138] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:17.520 [2024-11-17 02:57:25.909506] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.520 [2024-11-17 02:57:25.909964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.520 [2024-11-17 02:57:25.910007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.520 [2024-11-17 02:57:25.910034] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.520 [2024-11-17 02:57:25.910352] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.520 [2024-11-17 02:57:25.910651] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.520 [2024-11-17 02:57:25.910695] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.520 [2024-11-17 02:57:25.910721] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.520 [2024-11-17 02:57:25.910743] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:17.520 [2024-11-17 02:57:25.924003] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.520 [2024-11-17 02:57:25.924466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.520 [2024-11-17 02:57:25.924509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.520 [2024-11-17 02:57:25.924536] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.520 [2024-11-17 02:57:25.924822] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.520 [2024-11-17 02:57:25.925122] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.520 [2024-11-17 02:57:25.925155] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.520 [2024-11-17 02:57:25.925178] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.520 [2024-11-17 02:57:25.925200] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:17.520 [2024-11-17 02:57:25.938592] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.520 [2024-11-17 02:57:25.939053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.520 [2024-11-17 02:57:25.939094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.520 [2024-11-17 02:57:25.939134] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.520 [2024-11-17 02:57:25.939417] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.520 [2024-11-17 02:57:25.939701] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.520 [2024-11-17 02:57:25.939732] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.520 [2024-11-17 02:57:25.939754] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.520 [2024-11-17 02:57:25.939776] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:17.520 [2024-11-17 02:57:25.953119] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.520 [2024-11-17 02:57:25.953533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.520 [2024-11-17 02:57:25.953575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.520 [2024-11-17 02:57:25.953601] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.520 [2024-11-17 02:57:25.953885] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.520 [2024-11-17 02:57:25.954187] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.520 [2024-11-17 02:57:25.954219] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.520 [2024-11-17 02:57:25.954242] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.520 [2024-11-17 02:57:25.954263] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:17.520 [2024-11-17 02:57:25.967596] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.520 [2024-11-17 02:57:25.968062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.520 [2024-11-17 02:57:25.968112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.520 [2024-11-17 02:57:25.968140] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.520 [2024-11-17 02:57:25.968422] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.520 [2024-11-17 02:57:25.968706] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.520 [2024-11-17 02:57:25.968737] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.520 [2024-11-17 02:57:25.968760] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.520 [2024-11-17 02:57:25.968781] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:17.780 [2024-11-17 02:57:25.982150] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.780 [2024-11-17 02:57:25.982571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.780 [2024-11-17 02:57:25.982612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.780 [2024-11-17 02:57:25.982638] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.780 [2024-11-17 02:57:25.982922] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.780 [2024-11-17 02:57:25.983221] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.780 [2024-11-17 02:57:25.983253] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.780 [2024-11-17 02:57:25.983277] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.780 [2024-11-17 02:57:25.983298] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:17.780 [2024-11-17 02:57:25.996678] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.780 [2024-11-17 02:57:25.997155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.780 [2024-11-17 02:57:25.997203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.780 [2024-11-17 02:57:25.997231] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.780 [2024-11-17 02:57:25.997514] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.780 [2024-11-17 02:57:25.997800] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.780 [2024-11-17 02:57:25.997831] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.780 [2024-11-17 02:57:25.997854] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.780 [2024-11-17 02:57:25.997876] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:17.780 [2024-11-17 02:57:26.011040] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.780 [2024-11-17 02:57:26.011501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.780 [2024-11-17 02:57:26.011542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.780 [2024-11-17 02:57:26.011569] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.780 [2024-11-17 02:57:26.011868] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.780 [2024-11-17 02:57:26.012164] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.780 [2024-11-17 02:57:26.012196] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.780 [2024-11-17 02:57:26.012218] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.780 [2024-11-17 02:57:26.012240] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:17.780 [2024-11-17 02:57:26.025464] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.780 [2024-11-17 02:57:26.025914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.780 [2024-11-17 02:57:26.025955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.780 [2024-11-17 02:57:26.025982] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.780 [2024-11-17 02:57:26.026290] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.780 [2024-11-17 02:57:26.026577] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.780 [2024-11-17 02:57:26.026609] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.780 [2024-11-17 02:57:26.026631] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.780 [2024-11-17 02:57:26.026652] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:17.780 [2024-11-17 02:57:26.039921] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.780 [2024-11-17 02:57:26.040376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.780 [2024-11-17 02:57:26.040418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.780 [2024-11-17 02:57:26.040446] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.780 [2024-11-17 02:57:26.040735] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.780 [2024-11-17 02:57:26.041020] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.780 [2024-11-17 02:57:26.041052] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.780 [2024-11-17 02:57:26.041075] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.780 [2024-11-17 02:57:26.041108] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:17.780 [2024-11-17 02:57:26.054319] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.781 [2024-11-17 02:57:26.054769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.781 [2024-11-17 02:57:26.054810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.781 [2024-11-17 02:57:26.054837] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.781 [2024-11-17 02:57:26.055133] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.781 [2024-11-17 02:57:26.055418] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.781 [2024-11-17 02:57:26.055450] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.781 [2024-11-17 02:57:26.055473] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.781 [2024-11-17 02:57:26.055495] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:17.781 [2024-11-17 02:57:26.068861] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.781 [2024-11-17 02:57:26.069334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.781 [2024-11-17 02:57:26.069376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.781 [2024-11-17 02:57:26.069403] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.781 [2024-11-17 02:57:26.069686] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.781 [2024-11-17 02:57:26.069972] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.781 [2024-11-17 02:57:26.070004] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.781 [2024-11-17 02:57:26.070026] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.781 [2024-11-17 02:57:26.070048] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:17.781 [2024-11-17 02:57:26.083457] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.781 [2024-11-17 02:57:26.083923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.781 [2024-11-17 02:57:26.083964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.781 [2024-11-17 02:57:26.083990] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.781 [2024-11-17 02:57:26.084287] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.781 [2024-11-17 02:57:26.084571] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.781 [2024-11-17 02:57:26.084608] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.781 [2024-11-17 02:57:26.084633] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.781 [2024-11-17 02:57:26.084655] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:17.781 [2024-11-17 02:57:26.097856] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.781 [2024-11-17 02:57:26.098320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.781 [2024-11-17 02:57:26.098361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.781 [2024-11-17 02:57:26.098387] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.781 [2024-11-17 02:57:26.098672] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.781 [2024-11-17 02:57:26.098960] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.781 [2024-11-17 02:57:26.098990] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.781 [2024-11-17 02:57:26.099013] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.781 [2024-11-17 02:57:26.099034] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:17.781 [2024-11-17 02:57:26.112262] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.781 [2024-11-17 02:57:26.112725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.781 [2024-11-17 02:57:26.112767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.781 [2024-11-17 02:57:26.112794] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.781 [2024-11-17 02:57:26.113080] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.781 [2024-11-17 02:57:26.113381] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.781 [2024-11-17 02:57:26.113413] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.781 [2024-11-17 02:57:26.113436] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.781 [2024-11-17 02:57:26.113458] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:17.781 [2024-11-17 02:57:26.126751] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.781 [2024-11-17 02:57:26.127175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.781 [2024-11-17 02:57:26.127217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.781 [2024-11-17 02:57:26.127244] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.781 [2024-11-17 02:57:26.127528] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.781 [2024-11-17 02:57:26.127819] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.781 [2024-11-17 02:57:26.127850] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.781 [2024-11-17 02:57:26.127872] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.781 [2024-11-17 02:57:26.127900] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:17.781 [2024-11-17 02:57:26.141137] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.781 [2024-11-17 02:57:26.141601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.781 [2024-11-17 02:57:26.141641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.781 [2024-11-17 02:57:26.141668] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.781 [2024-11-17 02:57:26.141952] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.781 [2024-11-17 02:57:26.142250] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.781 [2024-11-17 02:57:26.142282] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.781 [2024-11-17 02:57:26.142305] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.781 [2024-11-17 02:57:26.142326] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:17.781 [2024-11-17 02:57:26.155510] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.781 [2024-11-17 02:57:26.155961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.781 [2024-11-17 02:57:26.156003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.781 [2024-11-17 02:57:26.156029] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.781 [2024-11-17 02:57:26.156324] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.781 [2024-11-17 02:57:26.156611] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.781 [2024-11-17 02:57:26.156642] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.781 [2024-11-17 02:57:26.156665] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.781 [2024-11-17 02:57:26.156687] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:17.781 [2024-11-17 02:57:26.170120] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.781 [2024-11-17 02:57:26.170561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.781 [2024-11-17 02:57:26.170604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.781 [2024-11-17 02:57:26.170631] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.781 [2024-11-17 02:57:26.170918] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.781 [2024-11-17 02:57:26.171245] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.781 [2024-11-17 02:57:26.171280] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.781 [2024-11-17 02:57:26.171304] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.781 [2024-11-17 02:57:26.171326] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:17.781 [2024-11-17 02:57:26.184639] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.781 [2024-11-17 02:57:26.185085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.781 [2024-11-17 02:57:26.185139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.781 [2024-11-17 02:57:26.185167] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.781 [2024-11-17 02:57:26.185452] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.781 [2024-11-17 02:57:26.185739] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.781 [2024-11-17 02:57:26.185770] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.781 [2024-11-17 02:57:26.185794] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.782 [2024-11-17 02:57:26.185815] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:17.782 [2024-11-17 02:57:26.199084] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.782 [2024-11-17 02:57:26.199572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.782 [2024-11-17 02:57:26.199614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.782 [2024-11-17 02:57:26.199640] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.782 [2024-11-17 02:57:26.199925] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.782 [2024-11-17 02:57:26.200226] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.782 [2024-11-17 02:57:26.200258] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.782 [2024-11-17 02:57:26.200282] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.782 [2024-11-17 02:57:26.200304] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:17.782 [2024-11-17 02:57:26.213505] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.782 [2024-11-17 02:57:26.213961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.782 [2024-11-17 02:57:26.214002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.782 [2024-11-17 02:57:26.214028] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.782 [2024-11-17 02:57:26.214324] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.782 [2024-11-17 02:57:26.214611] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.782 [2024-11-17 02:57:26.214657] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.782 [2024-11-17 02:57:26.214680] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.782 [2024-11-17 02:57:26.214703] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:17.782 [2024-11-17 02:57:26.227944] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.782 [2024-11-17 02:57:26.228397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.782 [2024-11-17 02:57:26.228440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.782 [2024-11-17 02:57:26.228473] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.782 [2024-11-17 02:57:26.228761] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.782 [2024-11-17 02:57:26.229049] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.782 [2024-11-17 02:57:26.229081] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.782 [2024-11-17 02:57:26.229114] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.782 [2024-11-17 02:57:26.229139] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:18.042 [2024-11-17 02:57:26.242349] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:18.042 [2024-11-17 02:57:26.242803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.042 [2024-11-17 02:57:26.242845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:18.042 [2024-11-17 02:57:26.242870] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:18.042 [2024-11-17 02:57:26.243171] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:18.042 [2024-11-17 02:57:26.243457] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:18.042 [2024-11-17 02:57:26.243488] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:18.042 [2024-11-17 02:57:26.243512] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:18.042 [2024-11-17 02:57:26.243534] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:18.042 [2024-11-17 02:57:26.256747] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:18.042 [2024-11-17 02:57:26.257162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.042 [2024-11-17 02:57:26.257203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:18.042 [2024-11-17 02:57:26.257230] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:18.042 [2024-11-17 02:57:26.257515] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:18.042 [2024-11-17 02:57:26.257801] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:18.042 [2024-11-17 02:57:26.257832] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:18.042 [2024-11-17 02:57:26.257854] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:18.042 [2024-11-17 02:57:26.257877] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:18.042 3252.00 IOPS, 12.70 MiB/s [2024-11-17T01:57:26.502Z] [2024-11-17 02:57:26.273184] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:18.042 [2024-11-17 02:57:26.273677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.042 [2024-11-17 02:57:26.273719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:18.042 [2024-11-17 02:57:26.273746] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:18.042 [2024-11-17 02:57:26.274037] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:18.042 [2024-11-17 02:57:26.274334] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:18.042 [2024-11-17 02:57:26.274366] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:18.042 [2024-11-17 02:57:26.274388] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:18.042 [2024-11-17 02:57:26.274409] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:18.042 [2024-11-17 02:57:26.287612] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:18.042 [2024-11-17 02:57:26.288043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.042 [2024-11-17 02:57:26.288085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:18.042 [2024-11-17 02:57:26.288122] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:18.042 [2024-11-17 02:57:26.288409] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:18.042 [2024-11-17 02:57:26.288696] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:18.042 [2024-11-17 02:57:26.288727] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:18.042 [2024-11-17 02:57:26.288750] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:18.042 [2024-11-17 02:57:26.288772] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:18.042 [2024-11-17 02:57:26.301997] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:18.042 [2024-11-17 02:57:26.302462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.042 [2024-11-17 02:57:26.302503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:18.042 [2024-11-17 02:57:26.302529] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:18.042 [2024-11-17 02:57:26.302814] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:18.042 [2024-11-17 02:57:26.303112] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:18.042 [2024-11-17 02:57:26.303143] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:18.042 [2024-11-17 02:57:26.303166] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:18.043 [2024-11-17 02:57:26.303188] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:18.043 [2024-11-17 02:57:26.316618] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:18.043 [2024-11-17 02:57:26.317089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.043 [2024-11-17 02:57:26.317139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:18.043 [2024-11-17 02:57:26.317166] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:18.043 [2024-11-17 02:57:26.317450] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:18.043 [2024-11-17 02:57:26.317737] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:18.043 [2024-11-17 02:57:26.317773] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:18.043 [2024-11-17 02:57:26.317798] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:18.043 [2024-11-17 02:57:26.317819] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:18.043 [2024-11-17 02:57:26.331065] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:18.043 [2024-11-17 02:57:26.331512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.043 [2024-11-17 02:57:26.331553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:18.043 [2024-11-17 02:57:26.331580] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:18.043 [2024-11-17 02:57:26.331865] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:18.043 [2024-11-17 02:57:26.332164] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:18.043 [2024-11-17 02:57:26.332195] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:18.043 [2024-11-17 02:57:26.332218] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:18.043 [2024-11-17 02:57:26.332240] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:18.043 [2024-11-17 02:57:26.345504] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:18.043 [2024-11-17 02:57:26.345970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.043 [2024-11-17 02:57:26.346011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:18.043 [2024-11-17 02:57:26.346038] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:18.043 [2024-11-17 02:57:26.346334] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:18.043 [2024-11-17 02:57:26.346630] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:18.043 [2024-11-17 02:57:26.346662] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:18.043 [2024-11-17 02:57:26.346686] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:18.043 [2024-11-17 02:57:26.346708] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:18.043 [2024-11-17 02:57:26.359973] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:18.043 [2024-11-17 02:57:26.360436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.043 [2024-11-17 02:57:26.360477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:18.043 [2024-11-17 02:57:26.360503] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:18.043 [2024-11-17 02:57:26.360788] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:18.043 [2024-11-17 02:57:26.361076] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:18.043 [2024-11-17 02:57:26.361118] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:18.043 [2024-11-17 02:57:26.361143] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:18.043 [2024-11-17 02:57:26.361171] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:18.043 [2024-11-17 02:57:26.374687] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:18.043 [2024-11-17 02:57:26.375169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.043 [2024-11-17 02:57:26.375212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:18.043 [2024-11-17 02:57:26.375239] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:18.043 [2024-11-17 02:57:26.375526] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:18.043 [2024-11-17 02:57:26.375813] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:18.043 [2024-11-17 02:57:26.375846] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:18.043 [2024-11-17 02:57:26.375869] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:18.043 [2024-11-17 02:57:26.375890] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:18.043 [2024-11-17 02:57:26.389144] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:18.043 [2024-11-17 02:57:26.389625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.043 [2024-11-17 02:57:26.389666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:18.043 [2024-11-17 02:57:26.389692] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:18.043 [2024-11-17 02:57:26.389975] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:18.043 [2024-11-17 02:57:26.390272] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:18.043 [2024-11-17 02:57:26.390305] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:18.043 [2024-11-17 02:57:26.390328] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:18.043 [2024-11-17 02:57:26.390350] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:18.043 [2024-11-17 02:57:26.403545] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:18.043 [2024-11-17 02:57:26.403979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.043 [2024-11-17 02:57:26.404020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:18.043 [2024-11-17 02:57:26.404046] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:18.043 [2024-11-17 02:57:26.404338] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:18.043 [2024-11-17 02:57:26.404624] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:18.043 [2024-11-17 02:57:26.404655] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:18.043 [2024-11-17 02:57:26.404678] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:18.043 [2024-11-17 02:57:26.404700] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:18.043 [2024-11-17 02:57:26.418157] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:18.043 [2024-11-17 02:57:26.418590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.043 [2024-11-17 02:57:26.418630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:18.043 [2024-11-17 02:57:26.418657] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:18.043 [2024-11-17 02:57:26.418941] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:18.043 [2024-11-17 02:57:26.419239] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:18.043 [2024-11-17 02:57:26.419271] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:18.043 [2024-11-17 02:57:26.419294] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:18.043 [2024-11-17 02:57:26.419332] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:18.043 [2024-11-17 02:57:26.432569] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:18.043 [2024-11-17 02:57:26.433075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.043 [2024-11-17 02:57:26.433144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:18.043 [2024-11-17 02:57:26.433174] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:18.043 [2024-11-17 02:57:26.433477] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:18.043 [2024-11-17 02:57:26.433775] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:18.043 [2024-11-17 02:57:26.433814] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:18.043 [2024-11-17 02:57:26.433848] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:18.043 [2024-11-17 02:57:26.433873] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:18.043 [2024-11-17 02:57:26.447199] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:18.043 [2024-11-17 02:57:26.447659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.043 [2024-11-17 02:57:26.447701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:18.043 [2024-11-17 02:57:26.447728] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:18.043 [2024-11-17 02:57:26.448012] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:18.043 [2024-11-17 02:57:26.448315] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:18.044 [2024-11-17 02:57:26.448348] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:18.044 [2024-11-17 02:57:26.448372] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:18.044 [2024-11-17 02:57:26.448394] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:18.044 [2024-11-17 02:57:26.461661] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:18.044 [2024-11-17 02:57:26.462137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.044 [2024-11-17 02:57:26.462179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:18.044 [2024-11-17 02:57:26.462212] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:18.044 [2024-11-17 02:57:26.462498] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:18.044 [2024-11-17 02:57:26.462785] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:18.044 [2024-11-17 02:57:26.462816] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:18.044 [2024-11-17 02:57:26.462840] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:18.044 [2024-11-17 02:57:26.462862] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:18.044 [2024-11-17 02:57:26.476102] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:18.044 [2024-11-17 02:57:26.476530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.044 [2024-11-17 02:57:26.476572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:18.044 [2024-11-17 02:57:26.476599] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:18.044 [2024-11-17 02:57:26.476884] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:18.044 [2024-11-17 02:57:26.477180] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:18.044 [2024-11-17 02:57:26.477212] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:18.044 [2024-11-17 02:57:26.477235] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:18.044 [2024-11-17 02:57:26.477258] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:18.044 [2024-11-17 02:57:26.490700] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:18.044 [2024-11-17 02:57:26.491125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.044 [2024-11-17 02:57:26.491168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:18.044 [2024-11-17 02:57:26.491194] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:18.044 [2024-11-17 02:57:26.491478] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:18.044 [2024-11-17 02:57:26.491766] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:18.044 [2024-11-17 02:57:26.491797] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:18.044 [2024-11-17 02:57:26.491819] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:18.044 [2024-11-17 02:57:26.491840] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:18.304 [2024-11-17 02:57:26.505156] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:18.304 [2024-11-17 02:57:26.505619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.304 [2024-11-17 02:57:26.505663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:18.304 [2024-11-17 02:57:26.505690] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:18.304 [2024-11-17 02:57:26.505977] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:18.304 [2024-11-17 02:57:26.506280] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:18.304 [2024-11-17 02:57:26.506313] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:18.304 [2024-11-17 02:57:26.506336] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:18.304 [2024-11-17 02:57:26.506357] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:18.304 [2024-11-17 02:57:26.519672] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:18.304 [2024-11-17 02:57:26.520135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.304 [2024-11-17 02:57:26.520183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:18.304 [2024-11-17 02:57:26.520210] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:18.304 [2024-11-17 02:57:26.520498] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:18.304 [2024-11-17 02:57:26.520785] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:18.304 [2024-11-17 02:57:26.520817] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:18.304 [2024-11-17 02:57:26.520841] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:18.304 [2024-11-17 02:57:26.520862] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:18.304 [2024-11-17 02:57:26.534187] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:18.304 [2024-11-17 02:57:26.534643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.304 [2024-11-17 02:57:26.534684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:18.304 [2024-11-17 02:57:26.534711] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:18.304 [2024-11-17 02:57:26.534996] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:18.305 [2024-11-17 02:57:26.535294] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:18.305 [2024-11-17 02:57:26.535326] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:18.305 [2024-11-17 02:57:26.535349] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:18.305 [2024-11-17 02:57:26.535371] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:18.305 [2024-11-17 02:57:26.548563] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:18.305 [2024-11-17 02:57:26.549004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.305 [2024-11-17 02:57:26.549046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:18.305 [2024-11-17 02:57:26.549072] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:18.305 [2024-11-17 02:57:26.549368] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:18.305 [2024-11-17 02:57:26.549653] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:18.305 [2024-11-17 02:57:26.549684] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:18.305 [2024-11-17 02:57:26.549714] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:18.305 [2024-11-17 02:57:26.549737] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:18.305 [2024-11-17 02:57:26.563162] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:18.305 [2024-11-17 02:57:26.563618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.305 [2024-11-17 02:57:26.563659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:18.305 [2024-11-17 02:57:26.563685] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:18.305 [2024-11-17 02:57:26.563970] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:18.305 [2024-11-17 02:57:26.564270] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:18.305 [2024-11-17 02:57:26.564303] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:18.305 [2024-11-17 02:57:26.564325] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:18.305 [2024-11-17 02:57:26.564348] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:18.305 [2024-11-17 02:57:26.577739] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:18.305 [2024-11-17 02:57:26.578181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.305 [2024-11-17 02:57:26.578223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:18.305 [2024-11-17 02:57:26.578249] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:18.305 [2024-11-17 02:57:26.578535] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:18.305 [2024-11-17 02:57:26.578820] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:18.305 [2024-11-17 02:57:26.578851] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:18.305 [2024-11-17 02:57:26.578874] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:18.305 [2024-11-17 02:57:26.578896] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:18.305 [2024-11-17 02:57:26.592317] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:18.305 [2024-11-17 02:57:26.592767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.305 [2024-11-17 02:57:26.592808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:18.305 [2024-11-17 02:57:26.592834] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:18.305 [2024-11-17 02:57:26.593132] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:18.305 [2024-11-17 02:57:26.593419] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:18.305 [2024-11-17 02:57:26.593451] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:18.305 [2024-11-17 02:57:26.593473] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:18.305 [2024-11-17 02:57:26.593495] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:18.305 [2024-11-17 02:57:26.606812] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:18.305 [2024-11-17 02:57:26.607254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.305 [2024-11-17 02:57:26.607297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:18.305 [2024-11-17 02:57:26.607324] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:18.305 [2024-11-17 02:57:26.607610] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:18.305 [2024-11-17 02:57:26.607900] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:18.305 [2024-11-17 02:57:26.607931] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:18.305 [2024-11-17 02:57:26.607954] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:18.305 [2024-11-17 02:57:26.607975] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:18.305 [2024-11-17 02:57:26.621255] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:18.305 [2024-11-17 02:57:26.621723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.305 [2024-11-17 02:57:26.621765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:18.305 [2024-11-17 02:57:26.621792] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:18.305 [2024-11-17 02:57:26.622077] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:18.305 [2024-11-17 02:57:26.622377] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:18.305 [2024-11-17 02:57:26.622408] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:18.305 [2024-11-17 02:57:26.622431] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:18.305 [2024-11-17 02:57:26.622452] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:18.305 [2024-11-17 02:57:26.635768] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:18.305 [2024-11-17 02:57:26.636225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.305 [2024-11-17 02:57:26.636266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:18.305 [2024-11-17 02:57:26.636292] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:18.305 [2024-11-17 02:57:26.636576] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:18.305 [2024-11-17 02:57:26.636862] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:18.305 [2024-11-17 02:57:26.636893] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:18.305 [2024-11-17 02:57:26.636916] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:18.305 [2024-11-17 02:57:26.636938] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:18.305 [2024-11-17 02:57:26.650369] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:18.305 [2024-11-17 02:57:26.650820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.305 [2024-11-17 02:57:26.650867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:18.305 [2024-11-17 02:57:26.650895] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:18.305 [2024-11-17 02:57:26.651192] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:18.305 [2024-11-17 02:57:26.651477] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:18.305 [2024-11-17 02:57:26.651508] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:18.305 [2024-11-17 02:57:26.651530] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:18.305 [2024-11-17 02:57:26.651553] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:18.305 [2024-11-17 02:57:26.664959] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:18.305 [2024-11-17 02:57:26.665411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.305 [2024-11-17 02:57:26.665453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:18.305 [2024-11-17 02:57:26.665479] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:18.305 [2024-11-17 02:57:26.665763] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:18.305 [2024-11-17 02:57:26.666048] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:18.305 [2024-11-17 02:57:26.666079] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:18.305 [2024-11-17 02:57:26.666113] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:18.305 [2024-11-17 02:57:26.666139] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:18.305 [2024-11-17 02:57:26.679338] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:18.305 [2024-11-17 02:57:26.679802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.305 [2024-11-17 02:57:26.679843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:18.306 [2024-11-17 02:57:26.679869] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:18.306 [2024-11-17 02:57:26.680167] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:18.306 [2024-11-17 02:57:26.680452] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:18.306 [2024-11-17 02:57:26.680483] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:18.306 [2024-11-17 02:57:26.680505] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:18.306 [2024-11-17 02:57:26.680527] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:18.306 [2024-11-17 02:57:26.693940] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:18.306 [2024-11-17 02:57:26.694462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.306 [2024-11-17 02:57:26.694506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:18.306 [2024-11-17 02:57:26.694534] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:18.306 [2024-11-17 02:57:26.694841] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:18.306 [2024-11-17 02:57:26.695158] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:18.306 [2024-11-17 02:57:26.695203] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:18.306 [2024-11-17 02:57:26.695231] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:18.306 [2024-11-17 02:57:26.695259] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:18.306 [2024-11-17 02:57:26.708350] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:18.306 [2024-11-17 02:57:26.708785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.306 [2024-11-17 02:57:26.708827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:18.306 [2024-11-17 02:57:26.708854] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:18.306 [2024-11-17 02:57:26.709154] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:18.306 [2024-11-17 02:57:26.709441] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:18.306 [2024-11-17 02:57:26.709472] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:18.306 [2024-11-17 02:57:26.709495] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:18.306 [2024-11-17 02:57:26.709518] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:18.306 [2024-11-17 02:57:26.722774] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:18.306 [2024-11-17 02:57:26.723215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.306 [2024-11-17 02:57:26.723258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:18.306 [2024-11-17 02:57:26.723286] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:18.306 [2024-11-17 02:57:26.723571] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:18.306 [2024-11-17 02:57:26.723859] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:18.306 [2024-11-17 02:57:26.723890] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:18.306 [2024-11-17 02:57:26.723912] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:18.306 [2024-11-17 02:57:26.723935] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:18.306 [2024-11-17 02:57:26.737207] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:18.306 [2024-11-17 02:57:26.737653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.306 [2024-11-17 02:57:26.737695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:18.306 [2024-11-17 02:57:26.737722] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:18.306 [2024-11-17 02:57:26.738008] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:18.306 [2024-11-17 02:57:26.738322] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:18.306 [2024-11-17 02:57:26.738354] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:18.306 [2024-11-17 02:57:26.738388] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:18.306 [2024-11-17 02:57:26.738410] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:18.306 [2024-11-17 02:57:26.751683] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:18.306 [2024-11-17 02:57:26.752156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.306 [2024-11-17 02:57:26.752198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:18.306 [2024-11-17 02:57:26.752223] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:18.306 [2024-11-17 02:57:26.752509] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:18.306 [2024-11-17 02:57:26.752796] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:18.306 [2024-11-17 02:57:26.752827] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:18.306 [2024-11-17 02:57:26.752850] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:18.306 [2024-11-17 02:57:26.752872] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:18.566 [2024-11-17 02:57:26.766116] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:18.566 [2024-11-17 02:57:26.766557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.566 [2024-11-17 02:57:26.766598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:18.566 [2024-11-17 02:57:26.766625] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:18.566 [2024-11-17 02:57:26.766909] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:18.566 [2024-11-17 02:57:26.767216] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:18.566 [2024-11-17 02:57:26.767254] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:18.566 [2024-11-17 02:57:26.767277] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:18.566 [2024-11-17 02:57:26.767299] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:18.566 [2024-11-17 02:57:26.780533] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:18.566 [2024-11-17 02:57:26.781022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.566 [2024-11-17 02:57:26.781064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:18.566 [2024-11-17 02:57:26.781091] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:18.566 [2024-11-17 02:57:26.781385] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:18.566 [2024-11-17 02:57:26.781671] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:18.566 [2024-11-17 02:57:26.781703] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:18.566 [2024-11-17 02:57:26.781732] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:18.566 [2024-11-17 02:57:26.781756] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:18.566 [2024-11-17 02:57:26.794986] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:18.566 [2024-11-17 02:57:26.795452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.566 [2024-11-17 02:57:26.795496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:18.566 [2024-11-17 02:57:26.795523] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:18.566 [2024-11-17 02:57:26.795811] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:18.566 [2024-11-17 02:57:26.796111] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:18.566 [2024-11-17 02:57:26.796143] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:18.566 [2024-11-17 02:57:26.796166] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:18.566 [2024-11-17 02:57:26.796189] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:18.566 [2024-11-17 02:57:26.809416] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:18.566 [2024-11-17 02:57:26.809882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.566 [2024-11-17 02:57:26.809932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:18.566 [2024-11-17 02:57:26.809959] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:18.566 [2024-11-17 02:57:26.810258] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:18.566 [2024-11-17 02:57:26.810545] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:18.566 [2024-11-17 02:57:26.810576] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:18.566 [2024-11-17 02:57:26.810599] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:18.566 [2024-11-17 02:57:26.810621] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:18.566 [2024-11-17 02:57:26.823845] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:18.566 [2024-11-17 02:57:26.824298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.566 [2024-11-17 02:57:26.824341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:18.566 [2024-11-17 02:57:26.824367] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:18.566 [2024-11-17 02:57:26.824652] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:18.566 [2024-11-17 02:57:26.824939] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:18.566 [2024-11-17 02:57:26.824970] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:18.566 [2024-11-17 02:57:26.824993] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:18.567 [2024-11-17 02:57:26.825015] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:18.567 [2024-11-17 02:57:26.838243] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:18.567 [2024-11-17 02:57:26.838697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.567 [2024-11-17 02:57:26.838738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:18.567 [2024-11-17 02:57:26.838782] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:18.567 [2024-11-17 02:57:26.839067] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:18.567 [2024-11-17 02:57:26.839364] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:18.567 [2024-11-17 02:57:26.839396] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:18.567 [2024-11-17 02:57:26.839419] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:18.567 [2024-11-17 02:57:26.839440] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:18.567 [2024-11-17 02:57:26.852713] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:18.567 [2024-11-17 02:57:26.853183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.567 [2024-11-17 02:57:26.853225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:18.567 [2024-11-17 02:57:26.853252] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:18.567 [2024-11-17 02:57:26.853539] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:18.567 [2024-11-17 02:57:26.853826] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:18.567 [2024-11-17 02:57:26.853857] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:18.567 [2024-11-17 02:57:26.853880] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:18.567 [2024-11-17 02:57:26.853902] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:18.567 [2024-11-17 02:57:26.867197] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:18.567 [2024-11-17 02:57:26.867656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.567 [2024-11-17 02:57:26.867697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:18.567 [2024-11-17 02:57:26.867723] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:18.567 [2024-11-17 02:57:26.868007] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:18.567 [2024-11-17 02:57:26.868308] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:18.567 [2024-11-17 02:57:26.868340] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:18.567 [2024-11-17 02:57:26.868363] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:18.567 [2024-11-17 02:57:26.868386] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:18.567 [2024-11-17 02:57:26.881640] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:18.567 [2024-11-17 02:57:26.882106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.567 [2024-11-17 02:57:26.882182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:18.567 [2024-11-17 02:57:26.882211] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:18.567 [2024-11-17 02:57:26.882497] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:18.567 [2024-11-17 02:57:26.882785] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:18.567 [2024-11-17 02:57:26.882816] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:18.567 [2024-11-17 02:57:26.882838] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:18.567 [2024-11-17 02:57:26.882861] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:18.567 [2024-11-17 02:57:26.896064] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:18.567 [2024-11-17 02:57:26.896543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.567 [2024-11-17 02:57:26.896585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:18.567 [2024-11-17 02:57:26.896612] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:18.567 [2024-11-17 02:57:26.896895] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:18.567 [2024-11-17 02:57:26.897194] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:18.567 [2024-11-17 02:57:26.897226] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:18.567 [2024-11-17 02:57:26.897249] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:18.567 [2024-11-17 02:57:26.897271] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:18.567 [2024-11-17 02:57:26.910673] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:18.567 [2024-11-17 02:57:26.911106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.567 [2024-11-17 02:57:26.911147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:18.567 [2024-11-17 02:57:26.911173] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:18.567 [2024-11-17 02:57:26.911459] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:18.567 [2024-11-17 02:57:26.911745] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:18.567 [2024-11-17 02:57:26.911776] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:18.567 [2024-11-17 02:57:26.911799] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:18.567 [2024-11-17 02:57:26.911821] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:18.567 [2024-11-17 02:57:26.925091] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:18.567 [2024-11-17 02:57:26.925545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.567 [2024-11-17 02:57:26.925586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:18.567 [2024-11-17 02:57:26.925613] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:18.567 [2024-11-17 02:57:26.925909] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:18.567 [2024-11-17 02:57:26.926239] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:18.567 [2024-11-17 02:57:26.926271] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:18.567 [2024-11-17 02:57:26.926294] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:18.567 [2024-11-17 02:57:26.926316] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:18.567 [2024-11-17 02:57:26.939573] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:18.567 [2024-11-17 02:57:26.939979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.567 [2024-11-17 02:57:26.940021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:18.567 [2024-11-17 02:57:26.940048] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:18.567 [2024-11-17 02:57:26.940347] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:18.567 [2024-11-17 02:57:26.940633] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:18.567 [2024-11-17 02:57:26.940663] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:18.567 [2024-11-17 02:57:26.940686] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:18.568 [2024-11-17 02:57:26.940707] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:18.568 [2024-11-17 02:57:26.954150] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:18.568 [2024-11-17 02:57:26.954649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.568 [2024-11-17 02:57:26.954691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:18.568 [2024-11-17 02:57:26.954718] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:18.568 [2024-11-17 02:57:26.955019] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:18.568 [2024-11-17 02:57:26.955323] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:18.568 [2024-11-17 02:57:26.955364] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:18.568 [2024-11-17 02:57:26.955388] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:18.568 [2024-11-17 02:57:26.955420] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:18.568 [2024-11-17 02:57:26.968712] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:18.568 [2024-11-17 02:57:26.969165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.568 [2024-11-17 02:57:26.969209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:18.568 [2024-11-17 02:57:26.969236] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:18.568 [2024-11-17 02:57:26.969520] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:18.568 [2024-11-17 02:57:26.969809] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:18.568 [2024-11-17 02:57:26.969846] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:18.568 [2024-11-17 02:57:26.969870] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:18.568 [2024-11-17 02:57:26.969892] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:18.568 [2024-11-17 02:57:26.983163] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:18.568 [2024-11-17 02:57:26.983590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.568 [2024-11-17 02:57:26.983632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:18.568 [2024-11-17 02:57:26.983658] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:18.568 [2024-11-17 02:57:26.983943] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:18.568 [2024-11-17 02:57:26.984239] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:18.568 [2024-11-17 02:57:26.984271] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:18.568 [2024-11-17 02:57:26.984295] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:18.568 [2024-11-17 02:57:26.984317] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:18.568 [2024-11-17 02:57:26.997575] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:18.568 [2024-11-17 02:57:26.998017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.568 [2024-11-17 02:57:26.998057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:18.568 [2024-11-17 02:57:26.998084] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:18.568 [2024-11-17 02:57:26.998382] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:18.568 [2024-11-17 02:57:26.998670] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:18.568 [2024-11-17 02:57:26.998701] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:18.568 [2024-11-17 02:57:26.998724] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:18.568 [2024-11-17 02:57:26.998746] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:18.568 [2024-11-17 02:57:27.011987] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:18.568 [2024-11-17 02:57:27.012464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.568 [2024-11-17 02:57:27.012506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:18.568 [2024-11-17 02:57:27.012532] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:18.568 [2024-11-17 02:57:27.012816] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:18.568 [2024-11-17 02:57:27.013114] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:18.568 [2024-11-17 02:57:27.013152] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:18.568 [2024-11-17 02:57:27.013175] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:18.568 [2024-11-17 02:57:27.013203] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:18.828 [2024-11-17 02:57:27.026449] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:18.828 [2024-11-17 02:57:27.026889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.828 [2024-11-17 02:57:27.026931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:18.828 [2024-11-17 02:57:27.026957] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:18.828 [2024-11-17 02:57:27.027259] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:18.828 [2024-11-17 02:57:27.027547] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:18.828 [2024-11-17 02:57:27.027578] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:18.828 [2024-11-17 02:57:27.027601] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:18.828 [2024-11-17 02:57:27.027624] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:18.828 [2024-11-17 02:57:27.040892] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:18.828 [2024-11-17 02:57:27.041378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.828 [2024-11-17 02:57:27.041420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:18.828 [2024-11-17 02:57:27.041446] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:18.828 [2024-11-17 02:57:27.041729] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:18.828 [2024-11-17 02:57:27.042032] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:18.828 [2024-11-17 02:57:27.042063] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:18.828 [2024-11-17 02:57:27.042086] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:18.828 [2024-11-17 02:57:27.042122] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:18.829 [2024-11-17 02:57:27.055326] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:18.829 [2024-11-17 02:57:27.055751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.829 [2024-11-17 02:57:27.055792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:18.829 [2024-11-17 02:57:27.055818] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:18.829 [2024-11-17 02:57:27.056113] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:18.829 [2024-11-17 02:57:27.056400] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:18.829 [2024-11-17 02:57:27.056432] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:18.829 [2024-11-17 02:57:27.056455] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:18.829 [2024-11-17 02:57:27.056477] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:18.829 [2024-11-17 02:57:27.069915] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:18.829 [2024-11-17 02:57:27.070370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.829 [2024-11-17 02:57:27.070412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:18.829 [2024-11-17 02:57:27.070439] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:18.829 [2024-11-17 02:57:27.070724] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:18.829 [2024-11-17 02:57:27.071012] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:18.829 [2024-11-17 02:57:27.071043] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:18.829 [2024-11-17 02:57:27.071065] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:18.829 [2024-11-17 02:57:27.071087] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:18.829 [2024-11-17 02:57:27.084503] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:18.829 [2024-11-17 02:57:27.084973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.829 [2024-11-17 02:57:27.085014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:18.829 [2024-11-17 02:57:27.085040] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:18.829 [2024-11-17 02:57:27.085336] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:18.829 [2024-11-17 02:57:27.085623] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:18.829 [2024-11-17 02:57:27.085654] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:18.829 [2024-11-17 02:57:27.085677] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:18.829 [2024-11-17 02:57:27.085699] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:18.829 [2024-11-17 02:57:27.098908] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:18.829 [2024-11-17 02:57:27.099348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.829 [2024-11-17 02:57:27.099389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:18.829 [2024-11-17 02:57:27.099416] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:18.829 [2024-11-17 02:57:27.099703] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:18.829 [2024-11-17 02:57:27.099992] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:18.829 [2024-11-17 02:57:27.100023] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:18.829 [2024-11-17 02:57:27.100046] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:18.829 [2024-11-17 02:57:27.100068] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:18.829 [2024-11-17 02:57:27.113375] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:18.829 [2024-11-17 02:57:27.113839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.829 [2024-11-17 02:57:27.113880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:18.829 [2024-11-17 02:57:27.113913] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:18.829 [2024-11-17 02:57:27.114211] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:18.829 [2024-11-17 02:57:27.114496] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:18.829 [2024-11-17 02:57:27.114528] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:18.829 [2024-11-17 02:57:27.114550] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:18.829 [2024-11-17 02:57:27.114572] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:18.829 [2024-11-17 02:57:27.127865] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:18.829 [2024-11-17 02:57:27.128396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.829 [2024-11-17 02:57:27.128438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:18.829 [2024-11-17 02:57:27.128465] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:18.829 [2024-11-17 02:57:27.128751] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:18.829 [2024-11-17 02:57:27.129039] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:18.829 [2024-11-17 02:57:27.129070] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:18.829 [2024-11-17 02:57:27.129093] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:18.829 [2024-11-17 02:57:27.129128] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:18.829 [2024-11-17 02:57:27.142411] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:18.829 [2024-11-17 02:57:27.142863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.829 [2024-11-17 02:57:27.142904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:18.829 [2024-11-17 02:57:27.142931] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:18.829 [2024-11-17 02:57:27.143227] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:18.829 [2024-11-17 02:57:27.143513] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:18.829 [2024-11-17 02:57:27.143544] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:18.829 [2024-11-17 02:57:27.143567] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:18.829 [2024-11-17 02:57:27.143591] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:18.829 [2024-11-17 02:57:27.156778] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:18.829 [2024-11-17 02:57:27.157232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.829 [2024-11-17 02:57:27.157274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:18.829 [2024-11-17 02:57:27.157300] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:18.829 [2024-11-17 02:57:27.157585] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:18.829 [2024-11-17 02:57:27.157877] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:18.829 [2024-11-17 02:57:27.157908] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:18.829 [2024-11-17 02:57:27.157931] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:18.829 [2024-11-17 02:57:27.157953] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:18.829 [2024-11-17 02:57:27.171381] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:18.829 [2024-11-17 02:57:27.171811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.829 [2024-11-17 02:57:27.171852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:18.829 [2024-11-17 02:57:27.171879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:18.829 [2024-11-17 02:57:27.172176] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:18.829 [2024-11-17 02:57:27.172473] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:18.829 [2024-11-17 02:57:27.172504] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:18.829 [2024-11-17 02:57:27.172526] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:18.829 [2024-11-17 02:57:27.172548] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:18.829 [2024-11-17 02:57:27.185780] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:18.829 [2024-11-17 02:57:27.186217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.829 [2024-11-17 02:57:27.186258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:18.829 [2024-11-17 02:57:27.186285] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:18.829 [2024-11-17 02:57:27.186569] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:18.829 [2024-11-17 02:57:27.186857] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:18.829 [2024-11-17 02:57:27.186888] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:18.830 [2024-11-17 02:57:27.186910] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:18.830 [2024-11-17 02:57:27.186933] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:18.830 [2024-11-17 02:57:27.200362] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:18.830 [2024-11-17 02:57:27.200834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.830 [2024-11-17 02:57:27.200875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:18.830 [2024-11-17 02:57:27.200901] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:18.830 [2024-11-17 02:57:27.201198] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:18.830 [2024-11-17 02:57:27.201485] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:18.830 [2024-11-17 02:57:27.201516] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:18.830 [2024-11-17 02:57:27.201546] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:18.830 [2024-11-17 02:57:27.201569] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:18.830 [2024-11-17 02:57:27.214756] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:18.830 [2024-11-17 02:57:27.215231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.830 [2024-11-17 02:57:27.215274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:18.830 [2024-11-17 02:57:27.215301] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:18.830 [2024-11-17 02:57:27.215600] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:18.830 [2024-11-17 02:57:27.215901] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:18.830 [2024-11-17 02:57:27.215935] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:18.830 [2024-11-17 02:57:27.215958] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:18.830 [2024-11-17 02:57:27.215980] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:18.830 [2024-11-17 02:57:27.229353] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:18.830 [2024-11-17 02:57:27.229784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.830 [2024-11-17 02:57:27.229828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:18.830 [2024-11-17 02:57:27.229855] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:18.830 [2024-11-17 02:57:27.230157] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:18.830 [2024-11-17 02:57:27.230444] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:18.830 [2024-11-17 02:57:27.230476] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:18.830 [2024-11-17 02:57:27.230498] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:18.830 [2024-11-17 02:57:27.230520] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:18.830 [2024-11-17 02:57:27.243786] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:18.830 [2024-11-17 02:57:27.244267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.830 [2024-11-17 02:57:27.244310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:18.830 [2024-11-17 02:57:27.244337] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:18.830 [2024-11-17 02:57:27.244621] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:18.830 [2024-11-17 02:57:27.244908] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:18.830 [2024-11-17 02:57:27.244939] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:18.830 [2024-11-17 02:57:27.244978] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:18.830 [2024-11-17 02:57:27.245006] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:18.830 [2024-11-17 02:57:27.258235] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:18.830 [2024-11-17 02:57:27.258667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.830 [2024-11-17 02:57:27.258708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:18.830 [2024-11-17 02:57:27.258735] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:18.830 [2024-11-17 02:57:27.259024] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:18.830 [2024-11-17 02:57:27.259328] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:18.830 [2024-11-17 02:57:27.259361] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:18.830 [2024-11-17 02:57:27.259384] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:18.830 [2024-11-17 02:57:27.259407] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:18.830 2601.60 IOPS, 10.16 MiB/s [2024-11-17T01:57:27.290Z] [2024-11-17 02:57:27.274438] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:18.830 [2024-11-17 02:57:27.274912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.830 [2024-11-17 02:57:27.274954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:18.830 [2024-11-17 02:57:27.274981] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:18.830 [2024-11-17 02:57:27.275279] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:18.830 [2024-11-17 02:57:27.275567] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:18.830 [2024-11-17 02:57:27.275598] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:18.830 [2024-11-17 02:57:27.275621] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:18.830 [2024-11-17 02:57:27.275643] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:19.090 [2024-11-17 02:57:27.288816] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:19.090 [2024-11-17 02:57:27.289282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.090 [2024-11-17 02:57:27.289323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:19.090 [2024-11-17 02:57:27.289349] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:19.090 [2024-11-17 02:57:27.289632] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:19.090 [2024-11-17 02:57:27.289919] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:19.090 [2024-11-17 02:57:27.289949] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:19.090 [2024-11-17 02:57:27.289972] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:19.090 [2024-11-17 02:57:27.289994] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:19.090 [2024-11-17 02:57:27.303202] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:19.090 [2024-11-17 02:57:27.303668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.090 [2024-11-17 02:57:27.303710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:19.090 [2024-11-17 02:57:27.303737] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:19.091 [2024-11-17 02:57:27.304020] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:19.091 [2024-11-17 02:57:27.304319] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:19.091 [2024-11-17 02:57:27.304351] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:19.091 [2024-11-17 02:57:27.304374] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:19.091 [2024-11-17 02:57:27.304396] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:19.091 [2024-11-17 02:57:27.317603] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:19.091 [2024-11-17 02:57:27.318058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.091 [2024-11-17 02:57:27.318108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:19.091 [2024-11-17 02:57:27.318137] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:19.091 [2024-11-17 02:57:27.318420] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:19.091 [2024-11-17 02:57:27.318707] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:19.091 [2024-11-17 02:57:27.318738] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:19.091 [2024-11-17 02:57:27.318761] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:19.091 [2024-11-17 02:57:27.318783] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:19.091 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3137515 Killed "${NVMF_APP[@]}" "$@" 00:37:19.091 02:57:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:37:19.091 02:57:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:37:19.091 02:57:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:19.091 02:57:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:19.091 02:57:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:19.091 02:57:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3138611 00:37:19.091 02:57:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:37:19.091 02:57:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3138611 00:37:19.091 02:57:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 3138611 ']' 00:37:19.091 02:57:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:19.091 02:57:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:19.091 02:57:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:19.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:37:19.091 02:57:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
00:37:19.091 [2024-11-17 02:57:27.332058] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:19.091 02:57:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:37:19.091 [2024-11-17 02:57:27.332504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.091 [2024-11-17 02:57:27.332547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:19.091 [2024-11-17 02:57:27.332573] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:19.091 [2024-11-17 02:57:27.332857] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:19.091 [2024-11-17 02:57:27.333162] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:19.091 [2024-11-17 02:57:27.333196] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:19.091 [2024-11-17 02:57:27.333220] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:19.091 [2024-11-17 02:57:27.333243] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:19.091 [2024-11-17 02:57:27.346461] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:19.091 [2024-11-17 02:57:27.346924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.091 [2024-11-17 02:57:27.346965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:19.091 [2024-11-17 02:57:27.346992] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:19.091 [2024-11-17 02:57:27.347289] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:19.091 [2024-11-17 02:57:27.347576] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:19.091 [2024-11-17 02:57:27.347607] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:19.091 [2024-11-17 02:57:27.347630] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:19.091 [2024-11-17 02:57:27.347652] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:19.091 [2024-11-17 02:57:27.360878] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:19.091 [2024-11-17 02:57:27.361353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.091 [2024-11-17 02:57:27.361403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:19.091 [2024-11-17 02:57:27.361430] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:19.091 [2024-11-17 02:57:27.361715] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:19.091 [2024-11-17 02:57:27.362001] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:19.091 [2024-11-17 02:57:27.362032] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:19.091 [2024-11-17 02:57:27.362055] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:19.091 [2024-11-17 02:57:27.362088] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:19.091 [2024-11-17 02:57:27.375353] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:19.091 [2024-11-17 02:57:27.375810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.091 [2024-11-17 02:57:27.375856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:19.091 [2024-11-17 02:57:27.375883] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:19.091 [2024-11-17 02:57:27.376179] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:19.091 [2024-11-17 02:57:27.376465] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:19.091 [2024-11-17 02:57:27.376497] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:19.091 [2024-11-17 02:57:27.376519] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:19.091 [2024-11-17 02:57:27.376541] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:19.091 [2024-11-17 02:57:27.389987] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:19.091 [2024-11-17 02:57:27.390470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.091 [2024-11-17 02:57:27.390512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:19.091 [2024-11-17 02:57:27.390538] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:19.091 [2024-11-17 02:57:27.390820] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:19.091 [2024-11-17 02:57:27.391117] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:19.091 [2024-11-17 02:57:27.391149] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:19.091 [2024-11-17 02:57:27.391172] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:19.091 [2024-11-17 02:57:27.391194] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:19.091 [2024-11-17 02:57:27.404404] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:19.091 [2024-11-17 02:57:27.404852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.091 [2024-11-17 02:57:27.404893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:19.091 [2024-11-17 02:57:27.404919] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:19.091 [2024-11-17 02:57:27.405213] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:19.091 [2024-11-17 02:57:27.405498] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:19.091 [2024-11-17 02:57:27.405529] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:19.091 [2024-11-17 02:57:27.405552] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:19.091 [2024-11-17 02:57:27.405574] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:19.091 [2024-11-17 02:57:27.418972] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:19.091 [2024-11-17 02:57:27.419413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.091 [2024-11-17 02:57:27.419453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:19.091 [2024-11-17 02:57:27.419479] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:19.091 [2024-11-17 02:57:27.419767] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:19.091 [2024-11-17 02:57:27.420055] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:19.091 [2024-11-17 02:57:27.420086] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:19.092 [2024-11-17 02:57:27.420120] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:19.092 [2024-11-17 02:57:27.420144] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:19.092 [2024-11-17 02:57:27.421888] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization...
00:37:19.092 [2024-11-17 02:57:27.422019] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:37:19.092 [2024-11-17 02:57:27.433365] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:19.092 [2024-11-17 02:57:27.433830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.092 [2024-11-17 02:57:27.433871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:19.092 [2024-11-17 02:57:27.433897] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:19.092 [2024-11-17 02:57:27.434193] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:19.092 [2024-11-17 02:57:27.434479] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:19.092 [2024-11-17 02:57:27.434511] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:19.092 [2024-11-17 02:57:27.434534] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:19.092 [2024-11-17 02:57:27.434556] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:19.092 [2024-11-17 02:57:27.447973] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:19.092 [2024-11-17 02:57:27.448434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.092 [2024-11-17 02:57:27.448476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:19.092 [2024-11-17 02:57:27.448501] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:19.092 [2024-11-17 02:57:27.448785] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:19.092 [2024-11-17 02:57:27.449072] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:19.092 [2024-11-17 02:57:27.449113] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:19.092 [2024-11-17 02:57:27.449138] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:19.092 [2024-11-17 02:57:27.449160] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:19.092 [2024-11-17 02:57:27.462397] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:19.092 [2024-11-17 02:57:27.462825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.092 [2024-11-17 02:57:27.462867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:19.092 [2024-11-17 02:57:27.462900] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:19.092 [2024-11-17 02:57:27.463197] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:19.092 [2024-11-17 02:57:27.463482] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:19.092 [2024-11-17 02:57:27.463513] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:19.092 [2024-11-17 02:57:27.463536] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:19.092 [2024-11-17 02:57:27.463557] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:19.092 [2024-11-17 02:57:27.476785] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:19.092 [2024-11-17 02:57:27.477304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.092 [2024-11-17 02:57:27.477347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:19.092 [2024-11-17 02:57:27.477373] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:19.092 [2024-11-17 02:57:27.477675] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:19.092 [2024-11-17 02:57:27.477971] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:19.092 [2024-11-17 02:57:27.478004] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:19.092 [2024-11-17 02:57:27.478043] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:19.092 [2024-11-17 02:57:27.478071] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:19.092 [2024-11-17 02:57:27.491453] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:19.092 [2024-11-17 02:57:27.491896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.092 [2024-11-17 02:57:27.491938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:19.092 [2024-11-17 02:57:27.491965] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:19.092 [2024-11-17 02:57:27.492265] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:19.092 [2024-11-17 02:57:27.492552] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:19.092 [2024-11-17 02:57:27.492583] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:19.092 [2024-11-17 02:57:27.492606] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:19.092 [2024-11-17 02:57:27.492629] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:19.092 [2024-11-17 02:57:27.506020] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:19.092 [2024-11-17 02:57:27.506493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.092 [2024-11-17 02:57:27.506536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:19.092 [2024-11-17 02:57:27.506562] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:19.092 [2024-11-17 02:57:27.506848] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:19.092 [2024-11-17 02:57:27.507156] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:19.092 [2024-11-17 02:57:27.507189] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:19.092 [2024-11-17 02:57:27.507211] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:19.092 [2024-11-17 02:57:27.507233] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:19.092 [2024-11-17 02:57:27.520556] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:19.092 [2024-11-17 02:57:27.521027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.092 [2024-11-17 02:57:27.521074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:19.092 [2024-11-17 02:57:27.521122] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:19.092 [2024-11-17 02:57:27.521415] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:19.092 [2024-11-17 02:57:27.521702] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:19.092 [2024-11-17 02:57:27.521733] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:19.092 [2024-11-17 02:57:27.521756] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:19.092 [2024-11-17 02:57:27.521778] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:19.092 [2024-11-17 02:57:27.535011] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:19.092 [2024-11-17 02:57:27.535460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.092 [2024-11-17 02:57:27.535502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:19.092 [2024-11-17 02:57:27.535529] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:19.092 [2024-11-17 02:57:27.535828] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:19.092 [2024-11-17 02:57:27.536127] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:19.092 [2024-11-17 02:57:27.536159] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:19.092 [2024-11-17 02:57:27.536182] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:19.092 [2024-11-17 02:57:27.536205] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:19.352 [2024-11-17 02:57:27.549696] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:19.352 [2024-11-17 02:57:27.550168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.352 [2024-11-17 02:57:27.550210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:19.352 [2024-11-17 02:57:27.550237] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:19.352 [2024-11-17 02:57:27.550527] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:19.352 [2024-11-17 02:57:27.550823] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:19.352 [2024-11-17 02:57:27.550855] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:19.352 [2024-11-17 02:57:27.550885] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:19.352 [2024-11-17 02:57:27.550916] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:19.352 [2024-11-17 02:57:27.564228] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:19.352 [2024-11-17 02:57:27.564676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.352 [2024-11-17 02:57:27.564718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:19.352 [2024-11-17 02:57:27.564745] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:19.352 [2024-11-17 02:57:27.565031] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:19.352 [2024-11-17 02:57:27.565330] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:19.352 [2024-11-17 02:57:27.565362] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:19.352 [2024-11-17 02:57:27.565386] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:19.352 [2024-11-17 02:57:27.565418] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:19.352 [2024-11-17 02:57:27.578836] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:19.352 [2024-11-17 02:57:27.579302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.352 [2024-11-17 02:57:27.579344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:19.352 [2024-11-17 02:57:27.579370] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:19.352 [2024-11-17 02:57:27.579676] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:19.352 [2024-11-17 02:57:27.579984] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:19.352 [2024-11-17 02:57:27.580015] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:19.352 [2024-11-17 02:57:27.580038] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:19.352 [2024-11-17 02:57:27.580059] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:19.352 [2024-11-17 02:57:27.586451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:37:19.352 [2024-11-17 02:57:27.593477] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:19.352 [2024-11-17 02:57:27.593936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.352 [2024-11-17 02:57:27.593979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:19.352 [2024-11-17 02:57:27.594006] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:19.352 [2024-11-17 02:57:27.594304] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:19.352 [2024-11-17 02:57:27.594599] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:19.352 [2024-11-17 02:57:27.594631] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:19.353 [2024-11-17 02:57:27.594654] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:19.353 [2024-11-17 02:57:27.594682] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:19.353 [2024-11-17 02:57:27.608160] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:19.353 [2024-11-17 02:57:27.608841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.353 [2024-11-17 02:57:27.608891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:19.353 [2024-11-17 02:57:27.608923] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:19.353 [2024-11-17 02:57:27.609248] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:19.353 [2024-11-17 02:57:27.609563] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:19.353 [2024-11-17 02:57:27.609607] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:19.353 [2024-11-17 02:57:27.609634] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:19.353 [2024-11-17 02:57:27.609661] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:19.353 [2024-11-17 02:57:27.622924] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:19.353 [2024-11-17 02:57:27.623385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.353 [2024-11-17 02:57:27.623431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:19.353 [2024-11-17 02:57:27.623458] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:19.353 [2024-11-17 02:57:27.623759] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:19.353 [2024-11-17 02:57:27.624054] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:19.353 [2024-11-17 02:57:27.624085] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:19.353 [2024-11-17 02:57:27.624119] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:19.353 [2024-11-17 02:57:27.624143] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:19.353 [2024-11-17 02:57:27.637635] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:19.353 [2024-11-17 02:57:27.638079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.353 [2024-11-17 02:57:27.638129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:19.353 [2024-11-17 02:57:27.638167] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:19.353 [2024-11-17 02:57:27.638459] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:19.353 [2024-11-17 02:57:27.638754] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:19.353 [2024-11-17 02:57:27.638785] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:19.353 [2024-11-17 02:57:27.638808] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:19.353 [2024-11-17 02:57:27.638830] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:19.353 [2024-11-17 02:57:27.652089] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:19.353 [2024-11-17 02:57:27.652592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.353 [2024-11-17 02:57:27.652633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:19.353 [2024-11-17 02:57:27.652659] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:19.353 [2024-11-17 02:57:27.652946] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:19.353 [2024-11-17 02:57:27.653249] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:19.353 [2024-11-17 02:57:27.653281] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:19.353 [2024-11-17 02:57:27.653304] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:19.353 [2024-11-17 02:57:27.653325] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:19.353 [2024-11-17 02:57:27.666595] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:19.353 [2024-11-17 02:57:27.667071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.353 [2024-11-17 02:57:27.667135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:19.353 [2024-11-17 02:57:27.667162] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:19.353 [2024-11-17 02:57:27.667448] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:19.353 [2024-11-17 02:57:27.667735] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:19.353 [2024-11-17 02:57:27.667766] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:19.353 [2024-11-17 02:57:27.667789] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:19.353 [2024-11-17 02:57:27.667811] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:19.353 [2024-11-17 02:57:27.681113] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:19.353 [2024-11-17 02:57:27.681548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.353 [2024-11-17 02:57:27.681589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:19.353 [2024-11-17 02:57:27.681616] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:19.353 [2024-11-17 02:57:27.681903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:19.353 [2024-11-17 02:57:27.682208] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:19.353 [2024-11-17 02:57:27.682240] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:19.353 [2024-11-17 02:57:27.682263] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:19.353 [2024-11-17 02:57:27.682285] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:19.353 [2024-11-17 02:57:27.695662] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:19.353 [2024-11-17 02:57:27.696113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.353 [2024-11-17 02:57:27.696155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:19.353 [2024-11-17 02:57:27.696193] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:19.353 [2024-11-17 02:57:27.696482] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:19.353 [2024-11-17 02:57:27.696773] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:19.353 [2024-11-17 02:57:27.696805] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:19.353 [2024-11-17 02:57:27.696827] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:19.353 [2024-11-17 02:57:27.696849] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:19.353 [2024-11-17 02:57:27.710231] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:19.353 [2024-11-17 02:57:27.710673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.353 [2024-11-17 02:57:27.710714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:19.353 [2024-11-17 02:57:27.710740] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:19.353 [2024-11-17 02:57:27.711027] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:19.353 [2024-11-17 02:57:27.711327] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:19.353 [2024-11-17 02:57:27.711359] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:19.353 [2024-11-17 02:57:27.711382] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:19.353 [2024-11-17 02:57:27.711403] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:19.353 [2024-11-17 02:57:27.724747] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:19.353 [2024-11-17 02:57:27.725224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.353 [2024-11-17 02:57:27.725265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:19.353 [2024-11-17 02:57:27.725291] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:19.353 [2024-11-17 02:57:27.725590] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:19.354 [2024-11-17 02:57:27.725880] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:19.354 [2024-11-17 02:57:27.725912] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:19.354 [2024-11-17 02:57:27.725935] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:19.354 [2024-11-17 02:57:27.725957] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:19.354 [2024-11-17 02:57:27.730487] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:19.354 [2024-11-17 02:57:27.730541] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:19.354 [2024-11-17 02:57:27.730567] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:19.354 [2024-11-17 02:57:27.730592] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:37:19.354 [2024-11-17 02:57:27.730612] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:19.354 [2024-11-17 02:57:27.733392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:19.354 [2024-11-17 02:57:27.733445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:19.354 [2024-11-17 02:57:27.733451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:19.354 [2024-11-17 02:57:27.739408] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:19.354 [2024-11-17 02:57:27.740004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.354 [2024-11-17 02:57:27.740054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:19.354 [2024-11-17 02:57:27.740085] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:19.354 [2024-11-17 02:57:27.740399] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:19.354 [2024-11-17 02:57:27.740702] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:19.354 [2024-11-17 02:57:27.740735] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:19.354 [2024-11-17 02:57:27.740774] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:19.354 [2024-11-17 02:57:27.740807] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:19.354 [2024-11-17 02:57:27.754169] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:19.354 [2024-11-17 02:57:27.754861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.354 [2024-11-17 02:57:27.754916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:19.354 [2024-11-17 02:57:27.754949] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:19.354 [2024-11-17 02:57:27.755266] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:19.354 [2024-11-17 02:57:27.755578] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:19.354 [2024-11-17 02:57:27.755611] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:19.354 [2024-11-17 02:57:27.755640] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:19.354 [2024-11-17 02:57:27.755666] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:19.354 [2024-11-17 02:57:27.768875] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:19.354 [2024-11-17 02:57:27.769334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.354 [2024-11-17 02:57:27.769376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:19.354 [2024-11-17 02:57:27.769404] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:19.354 [2024-11-17 02:57:27.769707] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:19.354 [2024-11-17 02:57:27.770001] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:19.354 [2024-11-17 02:57:27.770033] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:19.354 [2024-11-17 02:57:27.770057] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:19.354 [2024-11-17 02:57:27.770080] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:19.354 [2024-11-17 02:57:27.783414] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:19.354 [2024-11-17 02:57:27.783881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.354 [2024-11-17 02:57:27.783923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:19.354 [2024-11-17 02:57:27.783950] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:19.354 [2024-11-17 02:57:27.784256] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:19.354 [2024-11-17 02:57:27.784547] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:19.354 [2024-11-17 02:57:27.784579] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:19.354 [2024-11-17 02:57:27.784601] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:19.354 [2024-11-17 02:57:27.784624] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:19.354 [2024-11-17 02:57:27.798112] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:19.354 [2024-11-17 02:57:27.798567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.354 [2024-11-17 02:57:27.798608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:19.354 [2024-11-17 02:57:27.798635] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:19.354 [2024-11-17 02:57:27.798923] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:19.354 [2024-11-17 02:57:27.799229] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:19.354 [2024-11-17 02:57:27.799262] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:19.354 [2024-11-17 02:57:27.799285] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:19.354 [2024-11-17 02:57:27.799307] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:19.614 [2024-11-17 02:57:27.812708] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:19.614 [2024-11-17 02:57:27.813131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.614 [2024-11-17 02:57:27.813173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:19.614 [2024-11-17 02:57:27.813200] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:19.614 [2024-11-17 02:57:27.813488] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:19.614 [2024-11-17 02:57:27.813782] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:19.614 [2024-11-17 02:57:27.813814] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:19.614 [2024-11-17 02:57:27.813837] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:19.614 [2024-11-17 02:57:27.813860] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:19.614 [2024-11-17 02:57:27.827417] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:19.614 [2024-11-17 02:57:27.828150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.614 [2024-11-17 02:57:27.828210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:19.614 [2024-11-17 02:57:27.828255] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:19.614 [2024-11-17 02:57:27.828560] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:19.614 [2024-11-17 02:57:27.828863] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:19.615 [2024-11-17 02:57:27.828896] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:19.615 [2024-11-17 02:57:27.828925] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:19.615 [2024-11-17 02:57:27.828954] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:19.615 [2024-11-17 02:57:27.842071] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:19.615 [2024-11-17 02:57:27.842826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.615 [2024-11-17 02:57:27.842885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:19.615 [2024-11-17 02:57:27.842919] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:19.615 [2024-11-17 02:57:27.843236] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:19.615 [2024-11-17 02:57:27.843538] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:19.615 [2024-11-17 02:57:27.843571] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:19.615 [2024-11-17 02:57:27.843600] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:19.615 [2024-11-17 02:57:27.843629] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:19.615 [2024-11-17 02:57:27.856892] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:19.615 [2024-11-17 02:57:27.857545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.615 [2024-11-17 02:57:27.857597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:19.615 [2024-11-17 02:57:27.857628] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:19.615 [2024-11-17 02:57:27.857926] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:19.615 [2024-11-17 02:57:27.858238] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:19.615 [2024-11-17 02:57:27.858272] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:19.615 [2024-11-17 02:57:27.858298] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:19.615 [2024-11-17 02:57:27.858325] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:19.615 [2024-11-17 02:57:27.871658] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:19.615 [2024-11-17 02:57:27.872116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.615 [2024-11-17 02:57:27.872159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:19.615 [2024-11-17 02:57:27.872186] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:19.615 [2024-11-17 02:57:27.872502] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:19.615 [2024-11-17 02:57:27.872795] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:19.615 [2024-11-17 02:57:27.872826] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:19.615 [2024-11-17 02:57:27.872849] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:19.615 [2024-11-17 02:57:27.872871] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:19.615 [2024-11-17 02:57:27.886348] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:19.615 [2024-11-17 02:57:27.886797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.615 [2024-11-17 02:57:27.886839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:19.615 [2024-11-17 02:57:27.886866] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:19.615 [2024-11-17 02:57:27.887172] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:19.615 [2024-11-17 02:57:27.887466] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:19.615 [2024-11-17 02:57:27.887498] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:19.615 [2024-11-17 02:57:27.887521] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:19.615 [2024-11-17 02:57:27.887544] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:19.615 [2024-11-17 02:57:27.900833] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:19.615 [2024-11-17 02:57:27.901318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.615 [2024-11-17 02:57:27.901359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:19.615 [2024-11-17 02:57:27.901385] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:19.615 [2024-11-17 02:57:27.901674] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:19.615 [2024-11-17 02:57:27.901964] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:19.615 [2024-11-17 02:57:27.901995] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:19.615 [2024-11-17 02:57:27.902018] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:19.615 [2024-11-17 02:57:27.902040] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:19.615 [2024-11-17 02:57:27.915361] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:19.615 [2024-11-17 02:57:27.915797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.615 [2024-11-17 02:57:27.915839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:19.615 [2024-11-17 02:57:27.915865] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:19.615 [2024-11-17 02:57:27.916163] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:19.615 [2024-11-17 02:57:27.916450] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:19.615 [2024-11-17 02:57:27.916487] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:19.615 [2024-11-17 02:57:27.916511] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:19.615 [2024-11-17 02:57:27.916533] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:19.615 [2024-11-17 02:57:27.929803] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:19.615 [2024-11-17 02:57:27.930278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.615 [2024-11-17 02:57:27.930320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:19.615 [2024-11-17 02:57:27.930347] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:19.615 [2024-11-17 02:57:27.930633] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:19.615 [2024-11-17 02:57:27.930922] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:19.615 [2024-11-17 02:57:27.930953] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:19.615 [2024-11-17 02:57:27.930976] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:19.615 [2024-11-17 02:57:27.930999] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:19.615 [2024-11-17 02:57:27.944278] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:19.615 [2024-11-17 02:57:27.944724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.616 [2024-11-17 02:57:27.944766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:19.616 [2024-11-17 02:57:27.944792] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:19.616 [2024-11-17 02:57:27.945080] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:19.616 [2024-11-17 02:57:27.945380] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:19.616 [2024-11-17 02:57:27.945411] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:19.616 [2024-11-17 02:57:27.945434] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:19.616 [2024-11-17 02:57:27.945455] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:19.616 [2024-11-17 02:57:27.958695] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:19.616 [2024-11-17 02:57:27.959131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.616 [2024-11-17 02:57:27.959173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:19.616 [2024-11-17 02:57:27.959200] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:19.616 [2024-11-17 02:57:27.959487] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:19.616 [2024-11-17 02:57:27.959779] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:19.616 [2024-11-17 02:57:27.959809] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:19.616 [2024-11-17 02:57:27.959832] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:19.616 [2024-11-17 02:57:27.959860] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:19.616 [2024-11-17 02:57:27.973109] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:19.616 [2024-11-17 02:57:27.973552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.616 [2024-11-17 02:57:27.973593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:19.616 [2024-11-17 02:57:27.973619] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:19.616 [2024-11-17 02:57:27.973906] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:19.616 [2024-11-17 02:57:27.974206] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:19.616 [2024-11-17 02:57:27.974238] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:19.616 [2024-11-17 02:57:27.974262] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:19.616 [2024-11-17 02:57:27.974284] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:19.616 [2024-11-17 02:57:27.987722] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:19.616 [2024-11-17 02:57:27.988479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.616 [2024-11-17 02:57:27.988539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:19.616 [2024-11-17 02:57:27.988573] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:19.616 [2024-11-17 02:57:27.988898] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:19.616 [2024-11-17 02:57:27.989235] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:19.616 [2024-11-17 02:57:27.989275] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:19.616 [2024-11-17 02:57:27.989306] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:19.616 [2024-11-17 02:57:27.989336] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:19.616 [2024-11-17 02:57:28.002468] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:19.616 [2024-11-17 02:57:28.003159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.616 [2024-11-17 02:57:28.003220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:19.616 [2024-11-17 02:57:28.003255] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:19.616 [2024-11-17 02:57:28.003562] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:19.616 [2024-11-17 02:57:28.003868] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:19.616 [2024-11-17 02:57:28.003903] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:19.616 [2024-11-17 02:57:28.003932] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:19.616 [2024-11-17 02:57:28.003961] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:19.616 [2024-11-17 02:57:28.017207] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:19.616 [2024-11-17 02:57:28.017682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.616 [2024-11-17 02:57:28.017725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:19.616 [2024-11-17 02:57:28.017751] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:19.616 [2024-11-17 02:57:28.018044] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:19.616 [2024-11-17 02:57:28.018368] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:19.616 [2024-11-17 02:57:28.018401] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:19.616 [2024-11-17 02:57:28.018424] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:19.616 [2024-11-17 02:57:28.018447] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:19.616 [2024-11-17 02:57:28.031818] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:19.616 [2024-11-17 02:57:28.032285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.616 [2024-11-17 02:57:28.032328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:19.616 [2024-11-17 02:57:28.032355] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:19.616 [2024-11-17 02:57:28.032643] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:19.616 [2024-11-17 02:57:28.032934] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:19.616 [2024-11-17 02:57:28.032966] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:19.616 [2024-11-17 02:57:28.032988] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:19.616 [2024-11-17 02:57:28.033011] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:19.616 [2024-11-17 02:57:28.046388] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:19.616 [2024-11-17 02:57:28.046854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.616 [2024-11-17 02:57:28.046896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:19.616 [2024-11-17 02:57:28.046922] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:19.616 [2024-11-17 02:57:28.047223] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:19.616 [2024-11-17 02:57:28.047513] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:19.616 [2024-11-17 02:57:28.047545] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:19.616 [2024-11-17 02:57:28.047568] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:19.616 [2024-11-17 02:57:28.047590] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:19.616 [2024-11-17 02:57:28.060852] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:19.616 [2024-11-17 02:57:28.061314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.616 [2024-11-17 02:57:28.061356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:19.616 [2024-11-17 02:57:28.061388] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:19.616 [2024-11-17 02:57:28.061676] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:19.616 [2024-11-17 02:57:28.061967] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:19.616 [2024-11-17 02:57:28.061998] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:19.616 [2024-11-17 02:57:28.062021] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:19.616 [2024-11-17 02:57:28.062043] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:19.876 [2024-11-17 02:57:28.075392] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:19.876 [2024-11-17 02:57:28.075846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.876 [2024-11-17 02:57:28.075887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:19.876 [2024-11-17 02:57:28.075913] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:19.876 [2024-11-17 02:57:28.076211] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:19.876 [2024-11-17 02:57:28.076498] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:19.876 [2024-11-17 02:57:28.076547] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:19.876 [2024-11-17 02:57:28.076571] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:19.876 [2024-11-17 02:57:28.076594] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:19.876 [2024-11-17 02:57:28.089891] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:19.876 [2024-11-17 02:57:28.090342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.876 [2024-11-17 02:57:28.090383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:19.876 [2024-11-17 02:57:28.090410] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:19.876 [2024-11-17 02:57:28.090698] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:19.876 [2024-11-17 02:57:28.090988] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:19.876 [2024-11-17 02:57:28.091019] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:19.876 [2024-11-17 02:57:28.091042] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:19.876 [2024-11-17 02:57:28.091064] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:19.876 [2024-11-17 02:57:28.104416] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:19.876 [2024-11-17 02:57:28.104912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.876 [2024-11-17 02:57:28.104955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:19.876 [2024-11-17 02:57:28.104983] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:19.876 [2024-11-17 02:57:28.105294] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:19.876 [2024-11-17 02:57:28.105598] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:19.876 [2024-11-17 02:57:28.105630] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:19.876 [2024-11-17 02:57:28.105654] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:19.876 [2024-11-17 02:57:28.105677] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:19.876 [2024-11-17 02:57:28.118922] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:19.876 [2024-11-17 02:57:28.119399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.876 [2024-11-17 02:57:28.119442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:19.876 [2024-11-17 02:57:28.119470] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:19.876 [2024-11-17 02:57:28.119759] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:19.876 [2024-11-17 02:57:28.120053] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:19.876 [2024-11-17 02:57:28.120085] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:19.876 [2024-11-17 02:57:28.120121] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:19.876 [2024-11-17 02:57:28.120152] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:19.876 [2024-11-17 02:57:28.133684] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:19.876 [2024-11-17 02:57:28.134112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.876 [2024-11-17 02:57:28.134154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:19.876 [2024-11-17 02:57:28.134181] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:19.876 [2024-11-17 02:57:28.134471] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:19.876 [2024-11-17 02:57:28.134764] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:19.876 [2024-11-17 02:57:28.134797] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:19.876 [2024-11-17 02:57:28.134820] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:19.876 [2024-11-17 02:57:28.134842] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:19.876 [2024-11-17 02:57:28.148392] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:19.876 [2024-11-17 02:57:28.148820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.876 [2024-11-17 02:57:28.148860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:19.876 [2024-11-17 02:57:28.148886] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:19.876 [2024-11-17 02:57:28.149188] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:19.876 [2024-11-17 02:57:28.149486] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:19.876 [2024-11-17 02:57:28.149517] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:19.876 [2024-11-17 02:57:28.149546] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:19.876 [2024-11-17 02:57:28.149569] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:19.876 [2024-11-17 02:57:28.163041] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:19.876 [2024-11-17 02:57:28.163497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.876 [2024-11-17 02:57:28.163537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:19.876 [2024-11-17 02:57:28.163563] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:19.876 [2024-11-17 02:57:28.163849] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:19.876 [2024-11-17 02:57:28.164150] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:19.877 [2024-11-17 02:57:28.164182] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:19.877 [2024-11-17 02:57:28.164205] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:19.877 [2024-11-17 02:57:28.164228] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:19.877 [2024-11-17 02:57:28.177517] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:19.877 [2024-11-17 02:57:28.177966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.877 [2024-11-17 02:57:28.178008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:19.877 [2024-11-17 02:57:28.178034] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:19.877 [2024-11-17 02:57:28.178330] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:19.877 [2024-11-17 02:57:28.178617] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:19.877 [2024-11-17 02:57:28.178649] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:19.877 [2024-11-17 02:57:28.178671] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:19.877 [2024-11-17 02:57:28.178693] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:19.877 [2024-11-17 02:57:28.191978] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:19.877 [2024-11-17 02:57:28.192424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.877 [2024-11-17 02:57:28.192466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:19.877 [2024-11-17 02:57:28.192502] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:19.877 [2024-11-17 02:57:28.192785] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:19.877 [2024-11-17 02:57:28.193072] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:19.877 [2024-11-17 02:57:28.193125] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:19.877 [2024-11-17 02:57:28.193149] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:19.877 [2024-11-17 02:57:28.193171] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:19.877 [2024-11-17 02:57:28.206446] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:19.877 [2024-11-17 02:57:28.206894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.877 [2024-11-17 02:57:28.206935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:19.877 [2024-11-17 02:57:28.206962] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:19.877 [2024-11-17 02:57:28.207258] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:19.877 [2024-11-17 02:57:28.207544] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:19.877 [2024-11-17 02:57:28.207585] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:19.877 [2024-11-17 02:57:28.207608] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:19.877 [2024-11-17 02:57:28.207630] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:19.877 [2024-11-17 02:57:28.220849] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:19.877 [2024-11-17 02:57:28.221298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.877 [2024-11-17 02:57:28.221339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:19.877 [2024-11-17 02:57:28.221366] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:19.877 [2024-11-17 02:57:28.221658] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:19.877 [2024-11-17 02:57:28.221945] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:19.877 [2024-11-17 02:57:28.221976] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:19.877 [2024-11-17 02:57:28.221998] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:19.877 [2024-11-17 02:57:28.222021] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:19.877 [2024-11-17 02:57:28.235302] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:19.877 [2024-11-17 02:57:28.235740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.877 [2024-11-17 02:57:28.235782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:19.877 [2024-11-17 02:57:28.235809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:19.877 [2024-11-17 02:57:28.236110] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:19.877 [2024-11-17 02:57:28.236399] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:19.877 [2024-11-17 02:57:28.236430] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:19.877 [2024-11-17 02:57:28.236453] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:19.877 [2024-11-17 02:57:28.236475] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:19.877 [2024-11-17 02:57:28.249720] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:19.877 [2024-11-17 02:57:28.250232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.877 [2024-11-17 02:57:28.250280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:19.877 [2024-11-17 02:57:28.250308] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:19.877 [2024-11-17 02:57:28.250593] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:19.877 [2024-11-17 02:57:28.250898] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:19.877 [2024-11-17 02:57:28.250931] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:19.877 [2024-11-17 02:57:28.250954] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:19.877 [2024-11-17 02:57:28.250977] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:19.877 [2024-11-17 02:57:28.264317] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:19.877 [2024-11-17 02:57:28.264778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.877 [2024-11-17 02:57:28.264822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:19.877 [2024-11-17 02:57:28.264849] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:19.877 [2024-11-17 02:57:28.265148] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:19.877 [2024-11-17 02:57:28.265436] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:19.877 [2024-11-17 02:57:28.265469] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:19.877 [2024-11-17 02:57:28.265492] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:19.877 [2024-11-17 02:57:28.265515] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:19.877 2168.00 IOPS, 8.47 MiB/s [2024-11-17T01:57:28.337Z] [2024-11-17 02:57:28.278970] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:19.877 [2024-11-17 02:57:28.279389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.877 [2024-11-17 02:57:28.279432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:19.877 [2024-11-17 02:57:28.279459] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:19.877 [2024-11-17 02:57:28.279744] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:19.877 [2024-11-17 02:57:28.280033] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:19.877 [2024-11-17 02:57:28.280066] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:19.877 [2024-11-17 02:57:28.280089] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:19.877 [2024-11-17 02:57:28.280140] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:19.877 [2024-11-17 02:57:28.293370] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:19.877 [2024-11-17 02:57:28.293839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.877 [2024-11-17 02:57:28.293880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:19.877 [2024-11-17 02:57:28.293914] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:19.877 [2024-11-17 02:57:28.294215] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:19.877 [2024-11-17 02:57:28.294503] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:19.877 [2024-11-17 02:57:28.294534] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:19.877 [2024-11-17 02:57:28.294557] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:19.877 [2024-11-17 02:57:28.294579] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:19.877 [2024-11-17 02:57:28.307789] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:19.877 [2024-11-17 02:57:28.308226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.877 [2024-11-17 02:57:28.308267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:19.878 [2024-11-17 02:57:28.308293] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:19.878 [2024-11-17 02:57:28.308577] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:19.878 [2024-11-17 02:57:28.308865] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:19.878 [2024-11-17 02:57:28.308897] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:19.878 [2024-11-17 02:57:28.308920] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:19.878 [2024-11-17 02:57:28.308942] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:19.878 [2024-11-17 02:57:28.322413] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:19.878 [2024-11-17 02:57:28.322884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.878 [2024-11-17 02:57:28.322925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:19.878 [2024-11-17 02:57:28.322952] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:19.878 [2024-11-17 02:57:28.323251] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:19.878 [2024-11-17 02:57:28.323537] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:19.878 [2024-11-17 02:57:28.323568] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:19.878 [2024-11-17 02:57:28.323591] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:19.878 [2024-11-17 02:57:28.323612] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:20.137 [2024-11-17 02:57:28.336861] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:20.137 [2024-11-17 02:57:28.337307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.137 [2024-11-17 02:57:28.337351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:20.137 [2024-11-17 02:57:28.337378] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:20.137 [2024-11-17 02:57:28.337664] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:20.137 [2024-11-17 02:57:28.337957] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:20.137 [2024-11-17 02:57:28.337989] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:20.137 [2024-11-17 02:57:28.338012] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:20.137 [2024-11-17 02:57:28.338034] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:20.137 [2024-11-17 02:57:28.351254] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:20.137 [2024-11-17 02:57:28.351706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.137 [2024-11-17 02:57:28.351748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:20.137 [2024-11-17 02:57:28.351774] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:20.137 [2024-11-17 02:57:28.352057] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:20.137 [2024-11-17 02:57:28.352354] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:20.137 [2024-11-17 02:57:28.352386] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:20.137 [2024-11-17 02:57:28.352409] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:20.137 [2024-11-17 02:57:28.352431] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:20.137 [2024-11-17 02:57:28.365627] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:20.137 [2024-11-17 02:57:28.366112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.137 [2024-11-17 02:57:28.366155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:20.137 [2024-11-17 02:57:28.366181] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:20.137 [2024-11-17 02:57:28.366467] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:20.137 [2024-11-17 02:57:28.366756] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:20.137 [2024-11-17 02:57:28.366787] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:20.137 [2024-11-17 02:57:28.366810] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:20.137 [2024-11-17 02:57:28.366833] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:20.137 [2024-11-17 02:57:28.379758] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:20.137 [2024-11-17 02:57:28.380178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.137 [2024-11-17 02:57:28.380217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:20.137 [2024-11-17 02:57:28.380242] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:20.138 [2024-11-17 02:57:28.380501] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:20.138 [2024-11-17 02:57:28.380762] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:20.138 [2024-11-17 02:57:28.380790] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:20.138 [2024-11-17 02:57:28.380818] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:20.138 [2024-11-17 02:57:28.380839] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:20.138 02:57:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:20.138 02:57:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:37:20.138 02:57:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:20.138 02:57:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:20.138 02:57:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:20.138 [2024-11-17 02:57:28.393805] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:20.138 [2024-11-17 02:57:28.394217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.138 [2024-11-17 02:57:28.394254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:20.138 [2024-11-17 02:57:28.394278] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:20.138 [2024-11-17 02:57:28.394551] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:20.138 [2024-11-17 02:57:28.394806] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:20.138 [2024-11-17 02:57:28.394834] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:20.138 [2024-11-17 02:57:28.394853] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:20.138 [2024-11-17 02:57:28.394873] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:20.138 [2024-11-17 02:57:28.407826] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:20.138 [2024-11-17 02:57:28.408263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.138 [2024-11-17 02:57:28.408302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:20.138 [2024-11-17 02:57:28.408326] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:20.138 [2024-11-17 02:57:28.408598] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:20.138 [2024-11-17 02:57:28.408849] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:20.138 [2024-11-17 02:57:28.408877] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:20.138 [2024-11-17 02:57:28.408896] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:20.138 [2024-11-17 02:57:28.408916] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:20.138 02:57:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:20.138 02:57:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:20.138 02:57:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:20.138 02:57:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:20.138 [2024-11-17 02:57:28.416608] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:20.138 [2024-11-17 02:57:28.421868] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:20.138 [2024-11-17 02:57:28.422321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.138 [2024-11-17 02:57:28.422364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:20.138 [2024-11-17 02:57:28.422388] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:20.138 [2024-11-17 02:57:28.422660] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:20.138 [2024-11-17 02:57:28.422911] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:20.138 [2024-11-17 02:57:28.422939] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:20.138 [2024-11-17 02:57:28.422959] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:20.138 [2024-11-17 02:57:28.422978] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:20.138 02:57:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:20.138 02:57:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:20.138 02:57:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:20.138 02:57:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:20.138 [2024-11-17 02:57:28.435944] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:20.138 [2024-11-17 02:57:28.436440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.138 [2024-11-17 02:57:28.436496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:20.138 [2024-11-17 02:57:28.436521] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:20.138 [2024-11-17 02:57:28.436811] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:20.138 [2024-11-17 02:57:28.437062] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:20.138 [2024-11-17 02:57:28.437114] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:20.138 [2024-11-17 02:57:28.437138] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:20.138 [2024-11-17 02:57:28.437169] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:20.138 [2024-11-17 02:57:28.449998] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:20.138 [2024-11-17 02:57:28.450466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.138 [2024-11-17 02:57:28.450502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:20.138 [2024-11-17 02:57:28.450526] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:20.138 [2024-11-17 02:57:28.450795] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:20.138 [2024-11-17 02:57:28.451044] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:20.138 [2024-11-17 02:57:28.451070] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:20.138 [2024-11-17 02:57:28.451116] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:20.138 [2024-11-17 02:57:28.451139] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:20.138 [2024-11-17 02:57:28.464118] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:20.138 [2024-11-17 02:57:28.464874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.138 [2024-11-17 02:57:28.464930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:20.138 [2024-11-17 02:57:28.464962] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:20.138 [2024-11-17 02:57:28.465253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:20.138 [2024-11-17 02:57:28.465542] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:20.138 [2024-11-17 02:57:28.465572] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:20.138 [2024-11-17 02:57:28.465598] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:20.138 [2024-11-17 02:57:28.465625] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:20.138 [2024-11-17 02:57:28.478401] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:20.138 [2024-11-17 02:57:28.478899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.138 [2024-11-17 02:57:28.478939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:20.138 [2024-11-17 02:57:28.478964] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:20.138 [2024-11-17 02:57:28.479237] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:20.138 [2024-11-17 02:57:28.479515] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:20.138 [2024-11-17 02:57:28.479544] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:20.139 [2024-11-17 02:57:28.479566] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:20.139 [2024-11-17 02:57:28.479586] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:20.139 [2024-11-17 02:57:28.492480] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:20.139 [2024-11-17 02:57:28.492904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.139 [2024-11-17 02:57:28.492942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:20.139 [2024-11-17 02:57:28.492966] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:20.139 [2024-11-17 02:57:28.493237] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:20.139 [2024-11-17 02:57:28.493511] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:20.139 [2024-11-17 02:57:28.493540] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:20.139 [2024-11-17 02:57:28.493560] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:20.139 [2024-11-17 02:57:28.493579] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:20.139 [2024-11-17 02:57:28.506490] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:20.139 [2024-11-17 02:57:28.506964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.139 [2024-11-17 02:57:28.507003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:20.139 [2024-11-17 02:57:28.507032] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:20.139 [2024-11-17 02:57:28.507308] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:20.139 [2024-11-17 02:57:28.507592] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:20.139 [2024-11-17 02:57:28.507637] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:20.139 [2024-11-17 02:57:28.507658] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:20.139 [2024-11-17 02:57:28.507677] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:20.139 [2024-11-17 02:57:28.520560] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:20.139 [2024-11-17 02:57:28.520981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.139 [2024-11-17 02:57:28.521020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:20.139 [2024-11-17 02:57:28.521044] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:20.139 [2024-11-17 02:57:28.521320] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:20.139 [2024-11-17 02:57:28.521593] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:20.139 [2024-11-17 02:57:28.521621] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:20.139 [2024-11-17 02:57:28.521641] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:20.139 [2024-11-17 02:57:28.521660] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:20.139 Malloc0 00:37:20.139 02:57:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:20.139 02:57:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:20.139 02:57:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:20.139 02:57:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:20.139 02:57:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:20.139 02:57:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:20.139 02:57:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:20.139 02:57:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:20.139 [2024-11-17 02:57:28.534790] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:20.139 [2024-11-17 02:57:28.535219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.139 [2024-11-17 02:57:28.535258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:20.139 [2024-11-17 02:57:28.535282] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:20.139 [2024-11-17 02:57:28.535540] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:20.139 [2024-11-17 02:57:28.535801] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:20.139 [2024-11-17 02:57:28.535830] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:20.139 [2024-11-17 02:57:28.535851] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:20.139 [2024-11-17 02:57:28.535876] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:20.139 02:57:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:20.139 02:57:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:20.139 02:57:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:20.139 02:57:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:20.139 [2024-11-17 02:57:28.545036] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:20.139 [2024-11-17 02:57:28.548902] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:20.139 02:57:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:20.139 02:57:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3137945 00:37:20.397 [2024-11-17 02:57:28.617170] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
00:37:21.895 2424.43 IOPS, 9.47 MiB/s [2024-11-17T01:57:31.317Z] 2878.00 IOPS, 11.24 MiB/s [2024-11-17T01:57:32.691Z] 3228.11 IOPS, 12.61 MiB/s [2024-11-17T01:57:33.625Z] 3515.70 IOPS, 13.73 MiB/s [2024-11-17T01:57:34.559Z] 3746.55 IOPS, 14.63 MiB/s [2024-11-17T01:57:35.494Z] 3941.00 IOPS, 15.39 MiB/s [2024-11-17T01:57:36.428Z] 4107.77 IOPS, 16.05 MiB/s [2024-11-17T01:57:37.362Z] 4247.07 IOPS, 16.59 MiB/s [2024-11-17T01:57:37.362Z] 4370.07 IOPS, 17.07 MiB/s 00:37:28.902 Latency(us) 00:37:28.902 [2024-11-17T01:57:37.362Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:28.902 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:37:28.902 Verification LBA range: start 0x0 length 0x4000 00:37:28.902 Nvme1n1 : 15.01 4371.73 17.08 9125.00 0.00 9452.90 1104.40 45632.47 00:37:28.902 [2024-11-17T01:57:37.362Z] =================================================================================================================== 00:37:28.902 [2024-11-17T01:57:37.362Z] Total : 4371.73 17.08 9125.00 0.00 9452.90 1104.40 45632.47 00:37:29.834 02:57:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:37:29.834 02:57:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:29.834 02:57:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:29.834 02:57:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:29.834 02:57:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:29.834 02:57:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:37:29.834 02:57:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:37:29.834 02:57:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:29.834 02:57:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@121 -- # sync 00:37:29.834 02:57:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:29.834 02:57:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:37:29.834 02:57:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:29.834 02:57:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:29.834 rmmod nvme_tcp 00:37:29.834 rmmod nvme_fabrics 00:37:29.834 rmmod nvme_keyring 00:37:29.834 02:57:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:29.834 02:57:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:37:29.834 02:57:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:37:29.834 02:57:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 3138611 ']' 00:37:29.834 02:57:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 3138611 00:37:29.834 02:57:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 3138611 ']' 00:37:29.834 02:57:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 3138611 00:37:29.834 02:57:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:37:29.834 02:57:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:29.834 02:57:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3138611 00:37:29.834 02:57:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:29.834 02:57:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:29.834 02:57:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3138611' 00:37:29.834 killing process with pid 3138611 00:37:29.834 
02:57:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 3138611 00:37:29.834 02:57:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 3138611 00:37:31.210 02:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:31.210 02:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:31.210 02:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:31.210 02:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:37:31.210 02:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:37:31.210 02:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:31.210 02:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:37:31.210 02:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:31.210 02:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:31.210 02:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:31.210 02:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:31.210 02:57:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:33.190 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:33.190 00:37:33.190 real 0m26.526s 00:37:33.190 user 1m12.434s 00:37:33.190 sys 0m4.880s 00:37:33.191 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:33.191 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:33.191 ************************************ 00:37:33.191 END TEST nvmf_bdevperf 00:37:33.191 
************************************ 00:37:33.191 02:57:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:37:33.191 02:57:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:37:33.191 02:57:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:33.191 02:57:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:37:33.191 ************************************ 00:37:33.191 START TEST nvmf_target_disconnect 00:37:33.191 ************************************ 00:37:33.191 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:37:33.191 * Looking for test storage... 00:37:33.191 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:37:33.191 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:33.191 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:37:33.191 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:33.450 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:33.450 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:33.450 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:33.450 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:33.450 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:37:33.450 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@336 -- # read -ra ver1 00:37:33.450 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:37:33.450 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:37:33.450 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:37:33.450 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:37:33.450 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:37:33.450 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:33.450 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:37:33.450 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:37:33.450 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:33.450 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:33.450 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:37:33.450 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:37:33.450 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:33.450 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:37:33.450 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:37:33.450 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:37:33.450 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:37:33.450 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:33.450 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:37:33.450 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:37:33.450 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:33.450 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:33.450 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:37:33.450 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:33.450 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:33.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:33.450 --rc genhtml_branch_coverage=1 00:37:33.450 --rc genhtml_function_coverage=1 00:37:33.450 --rc genhtml_legend=1 00:37:33.450 --rc geninfo_all_blocks=1 00:37:33.450 --rc geninfo_unexecuted_blocks=1 
00:37:33.450 00:37:33.450 ' 00:37:33.450 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:33.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:33.450 --rc genhtml_branch_coverage=1 00:37:33.450 --rc genhtml_function_coverage=1 00:37:33.450 --rc genhtml_legend=1 00:37:33.450 --rc geninfo_all_blocks=1 00:37:33.450 --rc geninfo_unexecuted_blocks=1 00:37:33.450 00:37:33.450 ' 00:37:33.450 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:33.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:33.450 --rc genhtml_branch_coverage=1 00:37:33.450 --rc genhtml_function_coverage=1 00:37:33.450 --rc genhtml_legend=1 00:37:33.450 --rc geninfo_all_blocks=1 00:37:33.450 --rc geninfo_unexecuted_blocks=1 00:37:33.450 00:37:33.450 ' 00:37:33.450 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:33.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:33.450 --rc genhtml_branch_coverage=1 00:37:33.450 --rc genhtml_function_coverage=1 00:37:33.450 --rc genhtml_legend=1 00:37:33.450 --rc geninfo_all_blocks=1 00:37:33.450 --rc geninfo_unexecuted_blocks=1 00:37:33.450 00:37:33.450 ' 00:37:33.450 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:33.450 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:37:33.450 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:33.450 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:33.450 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:33.450 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:37:33.450 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:33.450 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:33.450 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:33.450 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:33.450 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:33.450 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:33.450 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:33.450 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:33.450 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:33.450 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:33.450 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:33.450 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:33.450 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:33.450 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:37:33.450 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:33.450 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:33.450 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:33.450 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:33.451 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:33.451 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:33.451 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:37:33.451 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:33.451 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:37:33.451 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:33.451 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:33.451 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:33.451 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:33.451 02:57:41 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:33.451 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:33.451 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:33.451 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:33.451 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:33.451 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:33.451 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:37:33.451 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:37:33.451 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:37:33.451 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:37:33.451 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:33.451 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:33.451 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:33.451 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:33.451 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:33.451 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:33.451 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:37:33.451 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:33.451 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:33.451 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:33.451 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:37:33.451 02:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:35.354 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:35.354 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:37:35.354 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:35.354 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:35.354 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:35.354 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:35.354 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:35.354 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:37:35.354 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:35.354 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:37:35.354 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:37:35.354 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:37:35.354 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:37:35.354 
02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:37:35.354 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:37:35.354 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:35.354 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:35.354 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:35.354 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:35.354 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:35.354 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:35.354 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:35.354 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:35.354 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:35.354 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:35.354 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:35.354 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:35.354 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:35.354 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:35.354 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:35.354 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:35.354 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:35.354 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:35.354 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:35.354 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:35.354 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:35.354 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:35.354 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:35.354 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:35.354 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:35.354 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:35.354 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:35.354 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:35.354 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:35.354 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:35.354 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:35.354 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:37:35.354 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:35.354 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:35.354 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:35.354 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:35.354 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:35.354 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:35.354 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:35.354 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:35.354 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:35.354 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:35.354 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:35.354 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:35.354 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:35.354 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:35.354 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:35.354 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:35.354 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:37:35.354 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:35.354 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:35.354 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:35.354 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:35.354 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:35.354 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:35.354 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:35.354 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:35.354 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:35.354 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:37:35.354 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:35.354 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:35.354 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:35.354 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:35.354 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:35.355 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:35.355 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:35.355 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:35.355 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:35.355 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:35.355 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:35.355 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:35.355 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:35.355 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:35.355 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:35.355 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:35.355 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:35.355 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:35.614 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:35.614 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:35.614 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:35.614 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:35.614 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:35.614 02:57:43 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:35.614 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:35.614 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:35.614 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:35.614 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:37:35.614 00:37:35.614 --- 10.0.0.2 ping statistics --- 00:37:35.614 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:35.614 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:37:35.614 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:35.614 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:35.614 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:37:35.614 00:37:35.614 --- 10.0.0.1 ping statistics --- 00:37:35.614 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:35.614 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:37:35.614 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:35.614 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:37:35.614 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:35.614 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:35.614 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:35.614 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:35.614 02:57:43 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:35.614 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:35.614 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:35.614 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:37:35.614 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:35.614 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:35.614 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:35.614 ************************************ 00:37:35.614 START TEST nvmf_target_disconnect_tc1 00:37:35.614 ************************************ 00:37:35.614 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:37:35.614 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:35.614 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:37:35.614 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:35.614 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:37:35.614 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:35.614 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:37:35.614 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:35.614 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:37:35.614 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:35.614 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:37:35.614 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:37:35.614 02:57:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:35.873 [2024-11-17 02:57:44.142051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.873 [2024-11-17 02:57:44.142175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2280 
with addr=10.0.0.2, port=4420 00:37:35.873 [2024-11-17 02:57:44.142268] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:37:35.873 [2024-11-17 02:57:44.142320] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:37:35.873 [2024-11-17 02:57:44.142347] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:37:35.873 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:37:35.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:37:35.873 Initializing NVMe Controllers 00:37:35.873 02:57:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:37:35.873 02:57:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:35.873 02:57:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:35.873 02:57:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:35.873 00:37:35.873 real 0m0.242s 00:37:35.873 user 0m0.107s 00:37:35.873 sys 0m0.134s 00:37:35.873 02:57:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:35.873 02:57:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:37:35.873 ************************************ 00:37:35.873 END TEST nvmf_target_disconnect_tc1 00:37:35.873 ************************************ 00:37:35.873 02:57:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:37:35.873 02:57:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:35.873 02:57:44 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:35.873 02:57:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:35.873 ************************************ 00:37:35.873 START TEST nvmf_target_disconnect_tc2 00:37:35.873 ************************************ 00:37:35.873 02:57:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:37:35.873 02:57:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:37:35.873 02:57:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:37:35.873 02:57:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:35.873 02:57:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:35.873 02:57:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:35.873 02:57:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3142032 00:37:35.873 02:57:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:37:35.873 02:57:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3142032 00:37:35.873 02:57:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3142032 ']' 00:37:35.873 02:57:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:35.873 02:57:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:35.873 02:57:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:35.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:35.873 02:57:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:35.873 02:57:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:36.132 [2024-11-17 02:57:44.333518] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:37:36.132 [2024-11-17 02:57:44.333683] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:36.132 [2024-11-17 02:57:44.484122] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:36.391 [2024-11-17 02:57:44.608659] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:36.391 [2024-11-17 02:57:44.608745] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:36.391 [2024-11-17 02:57:44.608767] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:36.391 [2024-11-17 02:57:44.608791] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:36.391 [2024-11-17 02:57:44.608808] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:37:36.391 [2024-11-17 02:57:44.611344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:37:36.391 [2024-11-17 02:57:44.611395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:37:36.391 [2024-11-17 02:57:44.611449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:37:36.391 [2024-11-17 02:57:44.611455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:37:36.958 02:57:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:36.958 02:57:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:37:36.958 02:57:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:36.958 02:57:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:36.958 02:57:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:36.958 02:57:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:36.958 02:57:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:36.958 02:57:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:36.958 02:57:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:36.958 Malloc0 00:37:36.958 02:57:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:36.958 02:57:45 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:37:36.958 02:57:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:36.958 02:57:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:36.958 [2024-11-17 02:57:45.398746] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:36.958 02:57:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:36.958 02:57:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:36.958 02:57:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:36.958 02:57:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:36.958 02:57:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:36.958 02:57:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:36.958 02:57:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:36.958 02:57:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:37.217 02:57:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:37.217 02:57:45 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:37.217 02:57:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:37.217 02:57:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:37.217 [2024-11-17 02:57:45.428574] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:37.217 02:57:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:37.217 02:57:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:37.217 02:57:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:37.217 02:57:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:37.217 02:57:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:37.217 02:57:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3142188 00:37:37.217 02:57:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:37:37.217 02:57:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:39.129 02:57:47 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3142032 00:37:39.129 02:57:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:37:39.129 Read completed with error (sct=0, sc=8) 00:37:39.129 starting I/O failed 00:37:39.129 Read completed with error (sct=0, sc=8) 00:37:39.129 starting I/O failed 00:37:39.129 Read completed with error (sct=0, sc=8) 00:37:39.129 starting I/O failed 00:37:39.129 Read completed with error (sct=0, sc=8) 00:37:39.129 starting I/O failed 00:37:39.129 Read completed with error (sct=0, sc=8) 00:37:39.129 starting I/O failed 00:37:39.129 Read completed with error (sct=0, sc=8) 00:37:39.129 starting I/O failed 00:37:39.129 Read completed with error (sct=0, sc=8) 00:37:39.129 starting I/O failed 00:37:39.129 Read completed with error (sct=0, sc=8) 00:37:39.129 starting I/O failed 00:37:39.129 Read completed with error (sct=0, sc=8) 00:37:39.129 starting I/O failed 00:37:39.129 Read completed with error (sct=0, sc=8) 00:37:39.129 starting I/O failed 00:37:39.129 Read completed with error (sct=0, sc=8) 00:37:39.129 starting I/O failed 00:37:39.129 Read completed with error (sct=0, sc=8) 00:37:39.129 starting I/O failed 00:37:39.129 Read completed with error (sct=0, sc=8) 00:37:39.129 starting I/O failed 00:37:39.129 Write completed with error (sct=0, sc=8) 00:37:39.129 starting I/O failed 00:37:39.129 Read completed with error (sct=0, sc=8) 00:37:39.129 starting I/O failed 00:37:39.129 Read completed with error (sct=0, sc=8) 00:37:39.129 starting I/O failed 00:37:39.129 Write completed with error (sct=0, sc=8) 00:37:39.129 starting I/O failed 00:37:39.129 Write completed with error (sct=0, sc=8) 00:37:39.129 starting I/O failed 00:37:39.129 Write completed with error (sct=0, sc=8) 00:37:39.129 starting I/O failed 00:37:39.129 Write completed with error (sct=0, sc=8) 00:37:39.129 starting I/O failed 00:37:39.130 
Read completed with error (sct=0, sc=8) 00:37:39.130 starting I/O failed 00:37:39.130 Read completed with error (sct=0, sc=8) 00:37:39.130 starting I/O failed 00:37:39.130 Write completed with error (sct=0, sc=8) 00:37:39.130 starting I/O failed 00:37:39.130 Read completed with error (sct=0, sc=8) 00:37:39.130 starting I/O failed 00:37:39.130 Write completed with error (sct=0, sc=8) 00:37:39.130 starting I/O failed 00:37:39.130 Read completed with error (sct=0, sc=8) 00:37:39.130 starting I/O failed 00:37:39.130 Read completed with error (sct=0, sc=8) 00:37:39.130 starting I/O failed 00:37:39.130 Read completed with error (sct=0, sc=8) 00:37:39.130 starting I/O failed 00:37:39.130 Read completed with error (sct=0, sc=8) 00:37:39.130 starting I/O failed 00:37:39.130 Read completed with error (sct=0, sc=8) 00:37:39.130 starting I/O failed 00:37:39.130 Read completed with error (sct=0, sc=8) 00:37:39.130 starting I/O failed 00:37:39.130 Write completed with error (sct=0, sc=8) 00:37:39.130 starting I/O failed 00:37:39.130 Read completed with error (sct=0, sc=8) 00:37:39.130 starting I/O failed 00:37:39.130 Read completed with error (sct=0, sc=8) 00:37:39.130 starting I/O failed 00:37:39.130 Read completed with error (sct=0, sc=8) 00:37:39.130 starting I/O failed 00:37:39.130 [2024-11-17 02:57:47.466186] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:37:39.130 Read completed with error (sct=0, sc=8) 00:37:39.130 starting I/O failed 00:37:39.130 Read completed with error (sct=0, sc=8) 00:37:39.130 starting I/O failed 00:37:39.130 Read completed with error (sct=0, sc=8) 00:37:39.130 starting I/O failed 00:37:39.130 Read completed with error (sct=0, sc=8) 00:37:39.130 starting I/O failed 00:37:39.130 Write completed with error (sct=0, sc=8) 00:37:39.130 starting I/O failed 00:37:39.130 Write completed with error (sct=0, sc=8) 00:37:39.130 starting I/O failed 
00:37:39.130 Read completed with error (sct=0, sc=8) 00:37:39.130 starting I/O failed 00:37:39.130 Read completed with error (sct=0, sc=8) 00:37:39.130 starting I/O failed 00:37:39.130 Write completed with error (sct=0, sc=8) 00:37:39.130 starting I/O failed 00:37:39.130 Write completed with error (sct=0, sc=8) 00:37:39.130 starting I/O failed 00:37:39.130 Write completed with error (sct=0, sc=8) 00:37:39.130 starting I/O failed 00:37:39.130 Read completed with error (sct=0, sc=8) 00:37:39.130 starting I/O failed 00:37:39.130 Read completed with error (sct=0, sc=8) 00:37:39.130 starting I/O failed 00:37:39.130 Read completed with error (sct=0, sc=8) 00:37:39.130 starting I/O failed 00:37:39.130 Read completed with error (sct=0, sc=8) 00:37:39.130 starting I/O failed 00:37:39.130 Read completed with error (sct=0, sc=8) 00:37:39.130 starting I/O failed 00:37:39.130 Write completed with error (sct=0, sc=8) 00:37:39.130 starting I/O failed 00:37:39.130 Write completed with error (sct=0, sc=8) 00:37:39.130 starting I/O failed 00:37:39.130 Read completed with error (sct=0, sc=8) 00:37:39.130 starting I/O failed 00:37:39.130 Read completed with error (sct=0, sc=8) 00:37:39.130 starting I/O failed 00:37:39.130 Read completed with error (sct=0, sc=8) 00:37:39.130 starting I/O failed 00:37:39.130 Write completed with error (sct=0, sc=8) 00:37:39.130 starting I/O failed 00:37:39.130 Write completed with error (sct=0, sc=8) 00:37:39.130 starting I/O failed 00:37:39.130 Read completed with error (sct=0, sc=8) 00:37:39.130 starting I/O failed 00:37:39.130 Write completed with error (sct=0, sc=8) 00:37:39.130 starting I/O failed 00:37:39.130 Read completed with error (sct=0, sc=8) 00:37:39.130 starting I/O failed 00:37:39.130 Read completed with error (sct=0, sc=8) 00:37:39.130 starting I/O failed 00:37:39.130 Read completed with error (sct=0, sc=8) 00:37:39.130 starting I/O failed 00:37:39.130 Write completed with error (sct=0, sc=8) 00:37:39.130 starting I/O failed 00:37:39.130 
[2024-11-17 02:57:47.466825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:39.130 Read completed with error (sct=0, sc=8) 00:37:39.130 starting I/O failed 00:37:39.130 Read completed with error (sct=0, sc=8) 00:37:39.130 starting I/O failed 00:37:39.130 Read completed with error (sct=0, sc=8) 00:37:39.130 starting I/O failed 00:37:39.130 Read completed with error (sct=0, sc=8) 00:37:39.130 starting I/O failed 00:37:39.130 Read completed with error (sct=0, sc=8) 00:37:39.130 starting I/O failed 00:37:39.130 Read completed with error (sct=0, sc=8) 00:37:39.130 starting I/O failed 00:37:39.130 Read completed with error (sct=0, sc=8) 00:37:39.130 starting I/O failed 00:37:39.130 Read completed with error (sct=0, sc=8) 00:37:39.130 starting I/O failed 00:37:39.130 Write completed with error (sct=0, sc=8) 00:37:39.130 starting I/O failed 00:37:39.130 Read completed with error (sct=0, sc=8) 00:37:39.130 starting I/O failed 00:37:39.130 Read completed with error (sct=0, sc=8) 00:37:39.130 starting I/O failed 00:37:39.130 Write completed with error (sct=0, sc=8) 00:37:39.130 starting I/O failed 00:37:39.130 Read completed with error (sct=0, sc=8) 00:37:39.130 starting I/O failed 00:37:39.130 Read completed with error (sct=0, sc=8) 00:37:39.130 starting I/O failed 00:37:39.130 Write completed with error (sct=0, sc=8) 00:37:39.130 starting I/O failed 00:37:39.130 Write completed with error (sct=0, sc=8) 00:37:39.130 starting I/O failed 00:37:39.130 Read completed with error (sct=0, sc=8) 00:37:39.130 starting I/O failed 00:37:39.130 Read completed with error (sct=0, sc=8) 00:37:39.130 starting I/O failed 00:37:39.130 Read completed with error (sct=0, sc=8) 00:37:39.130 starting I/O failed 00:37:39.130 Write completed with error (sct=0, sc=8) 00:37:39.130 starting I/O failed 00:37:39.130 Read completed with error (sct=0, sc=8) 00:37:39.130 starting I/O failed 
00:37:39.130 Read completed with error (sct=0, sc=8) 00:37:39.130 starting I/O failed 00:37:39.130 Write completed with error (sct=0, sc=8) 00:37:39.130 starting I/O failed 00:37:39.130 Read completed with error (sct=0, sc=8) 00:37:39.130 starting I/O failed 00:37:39.130 Read completed with error (sct=0, sc=8) 00:37:39.130 starting I/O failed 00:37:39.130 Write completed with error (sct=0, sc=8) 00:37:39.130 starting I/O failed 00:37:39.130 Read completed with error (sct=0, sc=8) 00:37:39.130 starting I/O failed 00:37:39.130 Read completed with error (sct=0, sc=8) 00:37:39.130 starting I/O failed 00:37:39.130 Write completed with error (sct=0, sc=8) 00:37:39.130 starting I/O failed 00:37:39.130 Write completed with error (sct=0, sc=8) 00:37:39.130 starting I/O failed 00:37:39.131 Write completed with error (sct=0, sc=8) 00:37:39.131 starting I/O failed 00:37:39.131 Write completed with error (sct=0, sc=8) 00:37:39.131 starting I/O failed 00:37:39.131 [2024-11-17 02:57:47.467452] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:37:39.131 Read completed with error (sct=0, sc=8) 00:37:39.131 starting I/O failed 00:37:39.131 Read completed with error (sct=0, sc=8) 00:37:39.131 starting I/O failed 00:37:39.131 Read completed with error (sct=0, sc=8) 00:37:39.131 starting I/O failed 00:37:39.131 Read completed with error (sct=0, sc=8) 00:37:39.131 starting I/O failed 00:37:39.131 Read completed with error (sct=0, sc=8) 00:37:39.131 starting I/O failed 00:37:39.131 Read completed with error (sct=0, sc=8) 00:37:39.131 starting I/O failed 00:37:39.131 Read completed with error (sct=0, sc=8) 00:37:39.131 starting I/O failed 00:37:39.131 Read completed with error (sct=0, sc=8) 00:37:39.131 starting I/O failed 00:37:39.131 Read completed with error (sct=0, sc=8) 00:37:39.131 starting I/O failed 00:37:39.131 Read completed with error (sct=0, sc=8) 00:37:39.131 
starting I/O failed 00:37:39.131 Read completed with error (sct=0, sc=8) 00:37:39.131 starting I/O failed 00:37:39.131 Read completed with error (sct=0, sc=8) 00:37:39.131 starting I/O failed 00:37:39.131 Read completed with error (sct=0, sc=8) 00:37:39.131 starting I/O failed 00:37:39.131 Read completed with error (sct=0, sc=8) 00:37:39.131 starting I/O failed 00:37:39.131 Read completed with error (sct=0, sc=8) 00:37:39.131 starting I/O failed 00:37:39.131 Read completed with error (sct=0, sc=8) 00:37:39.131 starting I/O failed 00:37:39.131 Read completed with error (sct=0, sc=8) 00:37:39.131 starting I/O failed 00:37:39.131 Read completed with error (sct=0, sc=8) 00:37:39.131 starting I/O failed 00:37:39.131 Read completed with error (sct=0, sc=8) 00:37:39.131 starting I/O failed 00:37:39.131 Read completed with error (sct=0, sc=8) 00:37:39.131 starting I/O failed 00:37:39.131 Read completed with error (sct=0, sc=8) 00:37:39.131 starting I/O failed 00:37:39.131 Write completed with error (sct=0, sc=8) 00:37:39.131 starting I/O failed 00:37:39.131 Read completed with error (sct=0, sc=8) 00:37:39.131 starting I/O failed 00:37:39.131 Read completed with error (sct=0, sc=8) 00:37:39.131 starting I/O failed 00:37:39.131 Read completed with error (sct=0, sc=8) 00:37:39.131 starting I/O failed 00:37:39.131 Read completed with error (sct=0, sc=8) 00:37:39.131 starting I/O failed 00:37:39.131 Write completed with error (sct=0, sc=8) 00:37:39.131 starting I/O failed 00:37:39.131 Read completed with error (sct=0, sc=8) 00:37:39.131 starting I/O failed 00:37:39.131 Write completed with error (sct=0, sc=8) 00:37:39.131 starting I/O failed 00:37:39.131 Write completed with error (sct=0, sc=8) 00:37:39.131 starting I/O failed 00:37:39.131 Write completed with error (sct=0, sc=8) 00:37:39.131 starting I/O failed 00:37:39.131 Write completed with error (sct=0, sc=8) 00:37:39.131 starting I/O failed 00:37:39.131 [2024-11-17 02:57:47.468068] nvme_qpair.c: 
812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:39.131 [2024-11-17 02:57:47.468278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.131 [2024-11-17 02:57:47.468333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.131 qpair failed and we were unable to recover it. 00:37:39.131 [2024-11-17 02:57:47.468544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.131 [2024-11-17 02:57:47.468585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.131 qpair failed and we were unable to recover it. 00:37:39.131 [2024-11-17 02:57:47.468767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.131 [2024-11-17 02:57:47.468827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.131 qpair failed and we were unable to recover it. 00:37:39.131 [2024-11-17 02:57:47.468932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.131 [2024-11-17 02:57:47.468971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.131 qpair failed and we were unable to recover it. 00:37:39.131 [2024-11-17 02:57:47.469124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.131 [2024-11-17 02:57:47.469177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.131 qpair failed and we were unable to recover it. 
00:37:39.131 [2024-11-17 02:57:47.469342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.131 [2024-11-17 02:57:47.469386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.131 qpair failed and we were unable to recover it. 00:37:39.131 [2024-11-17 02:57:47.469526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.131 [2024-11-17 02:57:47.469579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.131 qpair failed and we were unable to recover it. 00:37:39.131 [2024-11-17 02:57:47.469786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.131 [2024-11-17 02:57:47.469822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.131 qpair failed and we were unable to recover it. 00:37:39.131 [2024-11-17 02:57:47.469969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.131 [2024-11-17 02:57:47.470003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.131 qpair failed and we were unable to recover it. 00:37:39.131 [2024-11-17 02:57:47.470154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.131 [2024-11-17 02:57:47.470189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.131 qpair failed and we were unable to recover it. 
00:37:39.131 [2024-11-17 02:57:47.470330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.131 [2024-11-17 02:57:47.470365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.131 qpair failed and we were unable to recover it. 00:37:39.131 [2024-11-17 02:57:47.470484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.131 [2024-11-17 02:57:47.470519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.131 qpair failed and we were unable to recover it. 00:37:39.131 [2024-11-17 02:57:47.470669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.131 [2024-11-17 02:57:47.470718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.131 qpair failed and we were unable to recover it. 00:37:39.131 [2024-11-17 02:57:47.470858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.131 [2024-11-17 02:57:47.470891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.131 qpair failed and we were unable to recover it. 00:37:39.131 [2024-11-17 02:57:47.471034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.131 [2024-11-17 02:57:47.471068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.131 qpair failed and we were unable to recover it. 
00:37:39.131 [2024-11-17 02:57:47.471206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.132 [2024-11-17 02:57:47.471242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.132 qpair failed and we were unable to recover it. 00:37:39.132 [2024-11-17 02:57:47.471359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.132 [2024-11-17 02:57:47.471393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.132 qpair failed and we were unable to recover it. 00:37:39.132 [2024-11-17 02:57:47.471521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.132 [2024-11-17 02:57:47.471556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.132 qpair failed and we were unable to recover it. 00:37:39.132 [2024-11-17 02:57:47.471799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.132 [2024-11-17 02:57:47.471833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.132 qpair failed and we were unable to recover it. 00:37:39.132 [2024-11-17 02:57:47.471985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.132 [2024-11-17 02:57:47.472019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.132 qpair failed and we were unable to recover it. 
00:37:39.132 [2024-11-17 02:57:47.472180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.132 [2024-11-17 02:57:47.472215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.132 qpair failed and we were unable to recover it. 00:37:39.132 [2024-11-17 02:57:47.472331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.132 [2024-11-17 02:57:47.472366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.132 qpair failed and we were unable to recover it. 00:37:39.132 [2024-11-17 02:57:47.472560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.132 [2024-11-17 02:57:47.472595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.132 qpair failed and we were unable to recover it. 00:37:39.132 [2024-11-17 02:57:47.472736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.132 [2024-11-17 02:57:47.472770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.132 qpair failed and we were unable to recover it. 00:37:39.132 [2024-11-17 02:57:47.472887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.132 [2024-11-17 02:57:47.472920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.132 qpair failed and we were unable to recover it. 
00:37:39.132 [2024-11-17 02:57:47.473042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.132 [2024-11-17 02:57:47.473093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.132 qpair failed and we were unable to recover it. 00:37:39.132 [2024-11-17 02:57:47.473289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.132 [2024-11-17 02:57:47.473338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.132 qpair failed and we were unable to recover it. 00:37:39.132 [2024-11-17 02:57:47.473463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.132 [2024-11-17 02:57:47.473500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.132 qpair failed and we were unable to recover it. 00:37:39.132 [2024-11-17 02:57:47.473677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.132 [2024-11-17 02:57:47.473713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.132 qpair failed and we were unable to recover it. 00:37:39.132 [2024-11-17 02:57:47.473872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.132 [2024-11-17 02:57:47.473907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.132 qpair failed and we were unable to recover it. 
00:37:39.132 [2024-11-17 02:57:47.474074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.132 [2024-11-17 02:57:47.474126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.132 qpair failed and we were unable to recover it. 00:37:39.132 [2024-11-17 02:57:47.474313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.132 [2024-11-17 02:57:47.474350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.132 qpair failed and we were unable to recover it. 00:37:39.132 [2024-11-17 02:57:47.474503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.132 [2024-11-17 02:57:47.474554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.132 qpair failed and we were unable to recover it. 00:37:39.132 [2024-11-17 02:57:47.474778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.132 [2024-11-17 02:57:47.474818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.132 qpair failed and we were unable to recover it. 00:37:39.132 [2024-11-17 02:57:47.474956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.132 [2024-11-17 02:57:47.474990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.132 qpair failed and we were unable to recover it. 
00:37:39.132 [2024-11-17 02:57:47.475116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.132 [2024-11-17 02:57:47.475165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.132 qpair failed and we were unable to recover it. 00:37:39.132 [2024-11-17 02:57:47.475311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.132 [2024-11-17 02:57:47.475360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.132 qpair failed and we were unable to recover it. 00:37:39.132 [2024-11-17 02:57:47.475520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.132 [2024-11-17 02:57:47.475569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.132 qpair failed and we were unable to recover it. 00:37:39.132 [2024-11-17 02:57:47.475743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.132 [2024-11-17 02:57:47.475779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.132 qpair failed and we were unable to recover it. 00:37:39.132 [2024-11-17 02:57:47.475915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.132 [2024-11-17 02:57:47.475950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.132 qpair failed and we were unable to recover it. 
00:37:39.132 [2024-11-17 02:57:47.476092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.132 [2024-11-17 02:57:47.476133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.132 qpair failed and we were unable to recover it. 00:37:39.132 [2024-11-17 02:57:47.476294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.132 [2024-11-17 02:57:47.476327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.132 qpair failed and we were unable to recover it. 00:37:39.132 [2024-11-17 02:57:47.476462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.132 [2024-11-17 02:57:47.476497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.133 qpair failed and we were unable to recover it. 00:37:39.133 [2024-11-17 02:57:47.476636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.133 [2024-11-17 02:57:47.476684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.133 qpair failed and we were unable to recover it. 00:37:39.133 [2024-11-17 02:57:47.476861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.133 [2024-11-17 02:57:47.476897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.133 qpair failed and we were unable to recover it. 
00:37:39.133 [2024-11-17 02:57:47.477040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.133 [2024-11-17 02:57:47.477074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.133 qpair failed and we were unable to recover it.
00:37:39.133 [2024-11-17 02:57:47.477214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.133 [2024-11-17 02:57:47.477248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.133 qpair failed and we were unable to recover it.
00:37:39.133 [2024-11-17 02:57:47.477360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.133 [2024-11-17 02:57:47.477394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.133 qpair failed and we were unable to recover it.
00:37:39.133 [2024-11-17 02:57:47.477601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.133 [2024-11-17 02:57:47.477635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.133 qpair failed and we were unable to recover it.
00:37:39.133 [2024-11-17 02:57:47.477741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.133 [2024-11-17 02:57:47.477775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.133 qpair failed and we were unable to recover it.
00:37:39.133 [2024-11-17 02:57:47.477937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.133 [2024-11-17 02:57:47.477972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.133 qpair failed and we were unable to recover it.
00:37:39.133 [2024-11-17 02:57:47.478161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.133 [2024-11-17 02:57:47.478213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.133 qpair failed and we were unable to recover it.
00:37:39.133 [2024-11-17 02:57:47.479189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.133 [2024-11-17 02:57:47.479242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.133 qpair failed and we were unable to recover it.
00:37:39.133 [2024-11-17 02:57:47.479416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.133 [2024-11-17 02:57:47.479454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.133 qpair failed and we were unable to recover it.
00:37:39.133 [2024-11-17 02:57:47.479593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.133 [2024-11-17 02:57:47.479648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.133 qpair failed and we were unable to recover it.
00:37:39.133 [2024-11-17 02:57:47.479793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.133 [2024-11-17 02:57:47.479831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.133 qpair failed and we were unable to recover it.
00:37:39.133 [2024-11-17 02:57:47.479971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.133 [2024-11-17 02:57:47.480026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.133 qpair failed and we were unable to recover it.
00:37:39.133 [2024-11-17 02:57:47.480202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.133 [2024-11-17 02:57:47.480239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.133 qpair failed and we were unable to recover it.
00:37:39.133 [2024-11-17 02:57:47.480375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.133 [2024-11-17 02:57:47.480410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.133 qpair failed and we were unable to recover it.
00:37:39.133 [2024-11-17 02:57:47.480547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.133 [2024-11-17 02:57:47.480582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.133 qpair failed and we were unable to recover it.
00:37:39.133 [2024-11-17 02:57:47.480751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.133 [2024-11-17 02:57:47.480785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.133 qpair failed and we were unable to recover it.
00:37:39.133 [2024-11-17 02:57:47.480898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.133 [2024-11-17 02:57:47.480933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.133 qpair failed and we were unable to recover it.
00:37:39.133 [2024-11-17 02:57:47.481076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.133 [2024-11-17 02:57:47.481117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.133 qpair failed and we were unable to recover it.
00:37:39.133 [2024-11-17 02:57:47.481273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.133 [2024-11-17 02:57:47.481323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.133 qpair failed and we were unable to recover it.
00:37:39.133 [2024-11-17 02:57:47.481491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.133 [2024-11-17 02:57:47.481528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.133 qpair failed and we were unable to recover it.
00:37:39.133 [2024-11-17 02:57:47.481649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.133 [2024-11-17 02:57:47.481684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.133 qpair failed and we were unable to recover it.
00:37:39.133 [2024-11-17 02:57:47.481825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.133 [2024-11-17 02:57:47.481860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.133 qpair failed and we were unable to recover it.
00:37:39.133 [2024-11-17 02:57:47.482001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.133 [2024-11-17 02:57:47.482036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.133 qpair failed and we were unable to recover it.
00:37:39.133 [2024-11-17 02:57:47.482185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.133 [2024-11-17 02:57:47.482219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.133 qpair failed and we were unable to recover it.
00:37:39.133 [2024-11-17 02:57:47.482365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.133 [2024-11-17 02:57:47.482409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.133 qpair failed and we were unable to recover it.
00:37:39.133 [2024-11-17 02:57:47.482659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.133 [2024-11-17 02:57:47.482715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.133 qpair failed and we were unable to recover it.
00:37:39.133 [2024-11-17 02:57:47.482954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.133 [2024-11-17 02:57:47.483013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.133 qpair failed and we were unable to recover it.
00:37:39.133 [2024-11-17 02:57:47.483179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.133 [2024-11-17 02:57:47.483215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.133 qpair failed and we were unable to recover it.
00:37:39.133 [2024-11-17 02:57:47.483340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.133 [2024-11-17 02:57:47.483380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.133 qpair failed and we were unable to recover it.
00:37:39.133 [2024-11-17 02:57:47.483555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.133 [2024-11-17 02:57:47.483589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.133 qpair failed and we were unable to recover it.
00:37:39.133 [2024-11-17 02:57:47.483734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.133 [2024-11-17 02:57:47.483769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.133 qpair failed and we were unable to recover it.
00:37:39.133 [2024-11-17 02:57:47.483903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.133 [2024-11-17 02:57:47.483939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.133 qpair failed and we were unable to recover it.
00:37:39.133 [2024-11-17 02:57:47.484067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.133 [2024-11-17 02:57:47.484110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.133 qpair failed and we were unable to recover it.
00:37:39.133 [2024-11-17 02:57:47.484218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.134 [2024-11-17 02:57:47.484253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.134 qpair failed and we were unable to recover it.
00:37:39.134 [2024-11-17 02:57:47.484375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.134 [2024-11-17 02:57:47.484418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.134 qpair failed and we were unable to recover it.
00:37:39.134 [2024-11-17 02:57:47.484604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.134 [2024-11-17 02:57:47.484658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.134 qpair failed and we were unable to recover it.
00:37:39.134 [2024-11-17 02:57:47.484829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.134 [2024-11-17 02:57:47.484903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.134 qpair failed and we were unable to recover it.
00:37:39.134 [2024-11-17 02:57:47.485068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.134 [2024-11-17 02:57:47.485121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.134 qpair failed and we were unable to recover it.
00:37:39.134 [2024-11-17 02:57:47.485262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.134 [2024-11-17 02:57:47.485296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.134 qpair failed and we were unable to recover it.
00:37:39.134 [2024-11-17 02:57:47.485487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.134 [2024-11-17 02:57:47.485535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.134 qpair failed and we were unable to recover it.
00:37:39.134 [2024-11-17 02:57:47.485699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.134 [2024-11-17 02:57:47.485737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.134 qpair failed and we were unable to recover it.
00:37:39.134 [2024-11-17 02:57:47.485852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.134 [2024-11-17 02:57:47.485888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.134 qpair failed and we were unable to recover it.
00:37:39.134 [2024-11-17 02:57:47.486035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.134 [2024-11-17 02:57:47.486070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.134 qpair failed and we were unable to recover it.
00:37:39.134 [2024-11-17 02:57:47.486245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.134 [2024-11-17 02:57:47.486279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.134 qpair failed and we were unable to recover it.
00:37:39.134 [2024-11-17 02:57:47.486404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.134 [2024-11-17 02:57:47.486438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.134 qpair failed and we were unable to recover it.
00:37:39.134 [2024-11-17 02:57:47.486573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.134 [2024-11-17 02:57:47.486607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.134 qpair failed and we were unable to recover it.
00:37:39.134 [2024-11-17 02:57:47.486746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.134 [2024-11-17 02:57:47.486780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.134 qpair failed and we were unable to recover it.
00:37:39.134 [2024-11-17 02:57:47.486939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.134 [2024-11-17 02:57:47.486973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.134 qpair failed and we were unable to recover it.
00:37:39.134 [2024-11-17 02:57:47.487158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.134 [2024-11-17 02:57:47.487208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.134 qpair failed and we were unable to recover it.
00:37:39.134 [2024-11-17 02:57:47.487363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.134 [2024-11-17 02:57:47.487420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.134 qpair failed and we were unable to recover it.
00:37:39.134 [2024-11-17 02:57:47.487575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.134 [2024-11-17 02:57:47.487613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.134 qpair failed and we were unable to recover it.
00:37:39.134 [2024-11-17 02:57:47.487753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.134 [2024-11-17 02:57:47.487788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.134 qpair failed and we were unable to recover it.
00:37:39.134 [2024-11-17 02:57:47.487954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.134 [2024-11-17 02:57:47.487988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.134 qpair failed and we were unable to recover it.
00:37:39.134 [2024-11-17 02:57:47.488138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.134 [2024-11-17 02:57:47.488187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.134 qpair failed and we were unable to recover it.
00:37:39.134 [2024-11-17 02:57:47.488335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.134 [2024-11-17 02:57:47.488372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.134 qpair failed and we were unable to recover it.
00:37:39.134 [2024-11-17 02:57:47.488562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.134 [2024-11-17 02:57:47.488616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.134 qpair failed and we were unable to recover it.
00:37:39.134 [2024-11-17 02:57:47.488831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.134 [2024-11-17 02:57:47.488892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.134 qpair failed and we were unable to recover it.
00:37:39.134 [2024-11-17 02:57:47.489016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.134 [2024-11-17 02:57:47.489056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.134 qpair failed and we were unable to recover it.
00:37:39.134 [2024-11-17 02:57:47.489241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.134 [2024-11-17 02:57:47.489277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.134 qpair failed and we were unable to recover it.
00:37:39.134 [2024-11-17 02:57:47.489445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.134 [2024-11-17 02:57:47.489480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.134 qpair failed and we were unable to recover it.
00:37:39.134 [2024-11-17 02:57:47.489600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.134 [2024-11-17 02:57:47.489638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.134 qpair failed and we were unable to recover it.
00:37:39.134 [2024-11-17 02:57:47.489878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.134 [2024-11-17 02:57:47.489913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.134 qpair failed and we were unable to recover it.
00:37:39.134 [2024-11-17 02:57:47.490056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.134 [2024-11-17 02:57:47.490110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.134 qpair failed and we were unable to recover it.
00:37:39.134 [2024-11-17 02:57:47.490242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.134 [2024-11-17 02:57:47.490277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.134 qpair failed and we were unable to recover it.
00:37:39.134 [2024-11-17 02:57:47.490447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.134 [2024-11-17 02:57:47.490512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.134 qpair failed and we were unable to recover it.
00:37:39.134 [2024-11-17 02:57:47.490694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.134 [2024-11-17 02:57:47.490731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.134 qpair failed and we were unable to recover it.
00:37:39.135 [2024-11-17 02:57:47.490860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.135 [2024-11-17 02:57:47.490895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.135 qpair failed and we were unable to recover it.
00:37:39.135 [2024-11-17 02:57:47.491051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.135 [2024-11-17 02:57:47.491107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.135 qpair failed and we were unable to recover it.
00:37:39.135 [2024-11-17 02:57:47.491229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.135 [2024-11-17 02:57:47.491284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.135 qpair failed and we were unable to recover it.
00:37:39.135 [2024-11-17 02:57:47.491482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.135 [2024-11-17 02:57:47.491536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.135 qpair failed and we were unable to recover it.
00:37:39.135 [2024-11-17 02:57:47.491667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.135 [2024-11-17 02:57:47.491708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.135 qpair failed and we were unable to recover it.
00:37:39.135 [2024-11-17 02:57:47.491947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.135 [2024-11-17 02:57:47.491986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.135 qpair failed and we were unable to recover it.
00:37:39.135 [2024-11-17 02:57:47.492150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.135 [2024-11-17 02:57:47.492185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.135 qpair failed and we were unable to recover it.
00:37:39.135 [2024-11-17 02:57:47.492321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.135 [2024-11-17 02:57:47.492356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.135 qpair failed and we were unable to recover it.
00:37:39.135 [2024-11-17 02:57:47.492566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.135 [2024-11-17 02:57:47.492630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.135 qpair failed and we were unable to recover it.
00:37:39.135 [2024-11-17 02:57:47.492830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.135 [2024-11-17 02:57:47.492889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.135 qpair failed and we were unable to recover it.
00:37:39.135 [2024-11-17 02:57:47.493042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.135 [2024-11-17 02:57:47.493089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.135 qpair failed and we were unable to recover it.
00:37:39.135 [2024-11-17 02:57:47.493265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.135 [2024-11-17 02:57:47.493299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.135 qpair failed and we were unable to recover it.
00:37:39.135 [2024-11-17 02:57:47.493464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.135 [2024-11-17 02:57:47.493514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.135 qpair failed and we were unable to recover it.
00:37:39.135 [2024-11-17 02:57:47.493657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.135 [2024-11-17 02:57:47.493694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.135 qpair failed and we were unable to recover it.
00:37:39.135 [2024-11-17 02:57:47.493842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.135 [2024-11-17 02:57:47.493879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.135 qpair failed and we were unable to recover it.
00:37:39.135 [2024-11-17 02:57:47.493992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.135 [2024-11-17 02:57:47.494027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.135 qpair failed and we were unable to recover it.
00:37:39.135 [2024-11-17 02:57:47.494219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.135 [2024-11-17 02:57:47.494269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.135 qpair failed and we were unable to recover it.
00:37:39.135 [2024-11-17 02:57:47.494467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.135 [2024-11-17 02:57:47.494509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.135 qpair failed and we were unable to recover it.
00:37:39.135 [2024-11-17 02:57:47.494626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.135 [2024-11-17 02:57:47.494665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.135 qpair failed and we were unable to recover it.
00:37:39.135 [2024-11-17 02:57:47.494863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.135 [2024-11-17 02:57:47.494930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.135 qpair failed and we were unable to recover it.
00:37:39.135 [2024-11-17 02:57:47.495105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.135 [2024-11-17 02:57:47.495169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.135 qpair failed and we were unable to recover it.
00:37:39.135 [2024-11-17 02:57:47.495341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.135 [2024-11-17 02:57:47.495389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.135 qpair failed and we were unable to recover it.
00:37:39.135 [2024-11-17 02:57:47.495572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.135 [2024-11-17 02:57:47.495626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.135 qpair failed and we were unable to recover it.
00:37:39.135 [2024-11-17 02:57:47.495788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.135 [2024-11-17 02:57:47.495839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.135 qpair failed and we were unable to recover it.
00:37:39.135 [2024-11-17 02:57:47.495969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.135 [2024-11-17 02:57:47.496018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.135 qpair failed and we were unable to recover it.
00:37:39.135 [2024-11-17 02:57:47.496187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.135 [2024-11-17 02:57:47.496237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.135 qpair failed and we were unable to recover it.
00:37:39.135 [2024-11-17 02:57:47.496391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.135 [2024-11-17 02:57:47.496441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.135 qpair failed and we were unable to recover it.
00:37:39.135 [2024-11-17 02:57:47.496586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.135 [2024-11-17 02:57:47.496625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.135 qpair failed and we were unable to recover it.
00:37:39.135 [2024-11-17 02:57:47.496890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.135 [2024-11-17 02:57:47.496926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.135 qpair failed and we were unable to recover it.
00:37:39.135 [2024-11-17 02:57:47.497068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.135 [2024-11-17 02:57:47.497118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.135 qpair failed and we were unable to recover it. 00:37:39.135 [2024-11-17 02:57:47.497238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.135 [2024-11-17 02:57:47.497272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.135 qpair failed and we were unable to recover it. 00:37:39.135 [2024-11-17 02:57:47.497446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.135 [2024-11-17 02:57:47.497481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.135 qpair failed and we were unable to recover it. 00:37:39.135 [2024-11-17 02:57:47.497665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.135 [2024-11-17 02:57:47.497704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.135 qpair failed and we were unable to recover it. 00:37:39.135 [2024-11-17 02:57:47.497927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.135 [2024-11-17 02:57:47.497962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.135 qpair failed and we were unable to recover it. 
00:37:39.135 [2024-11-17 02:57:47.498076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.135 [2024-11-17 02:57:47.498139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.135 qpair failed and we were unable to recover it. 00:37:39.135 [2024-11-17 02:57:47.498254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.135 [2024-11-17 02:57:47.498292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.135 qpair failed and we were unable to recover it. 00:37:39.135 [2024-11-17 02:57:47.498463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.135 [2024-11-17 02:57:47.498499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.135 qpair failed and we were unable to recover it. 00:37:39.135 [2024-11-17 02:57:47.498631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.135 [2024-11-17 02:57:47.498666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.135 qpair failed and we were unable to recover it. 00:37:39.135 [2024-11-17 02:57:47.498842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.135 [2024-11-17 02:57:47.498908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.135 qpair failed and we were unable to recover it. 
00:37:39.135 [2024-11-17 02:57:47.499054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.135 [2024-11-17 02:57:47.499115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.135 qpair failed and we were unable to recover it. 00:37:39.135 [2024-11-17 02:57:47.499263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.135 [2024-11-17 02:57:47.499299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.135 qpair failed and we were unable to recover it. 00:37:39.135 [2024-11-17 02:57:47.499437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.135 [2024-11-17 02:57:47.499472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.135 qpair failed and we were unable to recover it. 00:37:39.135 [2024-11-17 02:57:47.499635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.136 [2024-11-17 02:57:47.499674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.136 qpair failed and we were unable to recover it. 00:37:39.136 [2024-11-17 02:57:47.499811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.136 [2024-11-17 02:57:47.499845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.136 qpair failed and we were unable to recover it. 
00:37:39.136 [2024-11-17 02:57:47.500003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.136 [2024-11-17 02:57:47.500043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.136 qpair failed and we were unable to recover it. 00:37:39.136 [2024-11-17 02:57:47.500255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.136 [2024-11-17 02:57:47.500305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.136 qpair failed and we were unable to recover it. 00:37:39.136 [2024-11-17 02:57:47.500439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.136 [2024-11-17 02:57:47.500498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.136 qpair failed and we were unable to recover it. 00:37:39.136 [2024-11-17 02:57:47.500642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.136 [2024-11-17 02:57:47.500679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.136 qpair failed and we were unable to recover it. 00:37:39.136 [2024-11-17 02:57:47.500840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.136 [2024-11-17 02:57:47.500875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.136 qpair failed and we were unable to recover it. 
00:37:39.136 [2024-11-17 02:57:47.501016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.136 [2024-11-17 02:57:47.501050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.136 qpair failed and we were unable to recover it. 00:37:39.136 [2024-11-17 02:57:47.501269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.136 [2024-11-17 02:57:47.501304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.136 qpair failed and we were unable to recover it. 00:37:39.136 [2024-11-17 02:57:47.501421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.136 [2024-11-17 02:57:47.501465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.136 qpair failed and we were unable to recover it. 00:37:39.136 [2024-11-17 02:57:47.501655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.136 [2024-11-17 02:57:47.501711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.136 qpair failed and we were unable to recover it. 00:37:39.136 [2024-11-17 02:57:47.501863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.136 [2024-11-17 02:57:47.501900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.136 qpair failed and we were unable to recover it. 
00:37:39.136 [2024-11-17 02:57:47.502047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.136 [2024-11-17 02:57:47.502107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.136 qpair failed and we were unable to recover it. 00:37:39.136 [2024-11-17 02:57:47.502237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.136 [2024-11-17 02:57:47.502271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.136 qpair failed and we were unable to recover it. 00:37:39.136 [2024-11-17 02:57:47.502430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.136 [2024-11-17 02:57:47.502468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.136 qpair failed and we were unable to recover it. 00:37:39.136 [2024-11-17 02:57:47.502610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.136 [2024-11-17 02:57:47.502648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.136 qpair failed and we were unable to recover it. 00:37:39.136 [2024-11-17 02:57:47.502757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.136 [2024-11-17 02:57:47.502794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.136 qpair failed and we were unable to recover it. 
00:37:39.136 [2024-11-17 02:57:47.502984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.136 [2024-11-17 02:57:47.503038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.136 qpair failed and we were unable to recover it. 00:37:39.136 [2024-11-17 02:57:47.503231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.136 [2024-11-17 02:57:47.503280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.136 qpair failed and we were unable to recover it. 00:37:39.136 [2024-11-17 02:57:47.503455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.136 [2024-11-17 02:57:47.503511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.136 qpair failed and we were unable to recover it. 00:37:39.136 [2024-11-17 02:57:47.503684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.136 [2024-11-17 02:57:47.503723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.136 qpair failed and we were unable to recover it. 00:37:39.136 [2024-11-17 02:57:47.503888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.136 [2024-11-17 02:57:47.503922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.136 qpair failed and we were unable to recover it. 
00:37:39.136 [2024-11-17 02:57:47.504063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.136 [2024-11-17 02:57:47.504109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.136 qpair failed and we were unable to recover it. 00:37:39.136 [2024-11-17 02:57:47.504209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.136 [2024-11-17 02:57:47.504243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.136 qpair failed and we were unable to recover it. 00:37:39.136 [2024-11-17 02:57:47.504354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.136 [2024-11-17 02:57:47.504398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.136 qpair failed and we were unable to recover it. 00:37:39.136 [2024-11-17 02:57:47.504547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.136 [2024-11-17 02:57:47.504584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.136 qpair failed and we were unable to recover it. 00:37:39.136 [2024-11-17 02:57:47.504706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.136 [2024-11-17 02:57:47.504744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.136 qpair failed and we were unable to recover it. 
00:37:39.136 [2024-11-17 02:57:47.504896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.136 [2024-11-17 02:57:47.504950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.136 qpair failed and we were unable to recover it. 00:37:39.136 [2024-11-17 02:57:47.505104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.136 [2024-11-17 02:57:47.505143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.136 qpair failed and we were unable to recover it. 00:37:39.136 [2024-11-17 02:57:47.505247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.136 [2024-11-17 02:57:47.505283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.136 qpair failed and we were unable to recover it. 00:37:39.136 [2024-11-17 02:57:47.505420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.136 [2024-11-17 02:57:47.505467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.136 qpair failed and we were unable to recover it. 00:37:39.136 [2024-11-17 02:57:47.505653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.136 [2024-11-17 02:57:47.505695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.136 qpair failed and we were unable to recover it. 
00:37:39.136 [2024-11-17 02:57:47.505869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.136 [2024-11-17 02:57:47.505908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.136 qpair failed and we were unable to recover it. 00:37:39.136 [2024-11-17 02:57:47.506030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.136 [2024-11-17 02:57:47.506069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.136 qpair failed and we were unable to recover it. 00:37:39.136 [2024-11-17 02:57:47.506240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.136 [2024-11-17 02:57:47.506289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.136 qpair failed and we were unable to recover it. 00:37:39.136 [2024-11-17 02:57:47.506521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.136 [2024-11-17 02:57:47.506577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.136 qpair failed and we were unable to recover it. 00:37:39.136 [2024-11-17 02:57:47.506752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.136 [2024-11-17 02:57:47.506793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.136 qpair failed and we were unable to recover it. 
00:37:39.136 [2024-11-17 02:57:47.506950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.136 [2024-11-17 02:57:47.506986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.136 qpair failed and we were unable to recover it. 00:37:39.136 [2024-11-17 02:57:47.507117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.136 [2024-11-17 02:57:47.507153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.136 qpair failed and we were unable to recover it. 00:37:39.136 [2024-11-17 02:57:47.507286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.136 [2024-11-17 02:57:47.507321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.136 qpair failed and we were unable to recover it. 00:37:39.136 [2024-11-17 02:57:47.507466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.136 [2024-11-17 02:57:47.507506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.136 qpair failed and we were unable to recover it. 00:37:39.136 [2024-11-17 02:57:47.507655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.136 [2024-11-17 02:57:47.507695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.136 qpair failed and we were unable to recover it. 
00:37:39.136 [2024-11-17 02:57:47.507839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.136 [2024-11-17 02:57:47.507875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.136 qpair failed and we were unable to recover it. 00:37:39.136 [2024-11-17 02:57:47.507979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.136 [2024-11-17 02:57:47.508015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.136 qpair failed and we were unable to recover it. 00:37:39.136 [2024-11-17 02:57:47.508133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.136 [2024-11-17 02:57:47.508168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.136 qpair failed and we were unable to recover it. 00:37:39.136 [2024-11-17 02:57:47.508321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.136 [2024-11-17 02:57:47.508354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.136 qpair failed and we were unable to recover it. 00:37:39.136 [2024-11-17 02:57:47.508462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.136 [2024-11-17 02:57:47.508495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.136 qpair failed and we were unable to recover it. 
00:37:39.136 [2024-11-17 02:57:47.508727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.136 [2024-11-17 02:57:47.508761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.136 qpair failed and we were unable to recover it. 00:37:39.136 [2024-11-17 02:57:47.508861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.136 [2024-11-17 02:57:47.508895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.136 qpair failed and we were unable to recover it. 00:37:39.136 [2024-11-17 02:57:47.509041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.136 [2024-11-17 02:57:47.509085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.136 qpair failed and we were unable to recover it. 00:37:39.136 [2024-11-17 02:57:47.509234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.136 [2024-11-17 02:57:47.509271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.136 qpair failed and we were unable to recover it. 00:37:39.136 [2024-11-17 02:57:47.509453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.136 [2024-11-17 02:57:47.509488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.136 qpair failed and we were unable to recover it. 
00:37:39.136 [2024-11-17 02:57:47.509681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.136 [2024-11-17 02:57:47.509743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.137 qpair failed and we were unable to recover it. 00:37:39.137 [2024-11-17 02:57:47.510009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.137 [2024-11-17 02:57:47.510070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.137 qpair failed and we were unable to recover it. 00:37:39.137 [2024-11-17 02:57:47.510254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.137 [2024-11-17 02:57:47.510289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.137 qpair failed and we were unable to recover it. 00:37:39.137 [2024-11-17 02:57:47.510402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.137 [2024-11-17 02:57:47.510437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.137 qpair failed and we were unable to recover it. 00:37:39.137 [2024-11-17 02:57:47.510637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.137 [2024-11-17 02:57:47.510711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.137 qpair failed and we were unable to recover it. 
00:37:39.137 [2024-11-17 02:57:47.510816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.137 [2024-11-17 02:57:47.510851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.137 qpair failed and we were unable to recover it. 00:37:39.137 [2024-11-17 02:57:47.511013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.137 [2024-11-17 02:57:47.511049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.137 qpair failed and we were unable to recover it. 00:37:39.137 [2024-11-17 02:57:47.511181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.137 [2024-11-17 02:57:47.511217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.137 qpair failed and we were unable to recover it. 00:37:39.137 [2024-11-17 02:57:47.511391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.137 [2024-11-17 02:57:47.511445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.137 qpair failed and we were unable to recover it. 00:37:39.137 [2024-11-17 02:57:47.511604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.137 [2024-11-17 02:57:47.511646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.137 qpair failed and we were unable to recover it. 
00:37:39.137 [2024-11-17 02:57:47.511865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.137 [2024-11-17 02:57:47.511930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.137 qpair failed and we were unable to recover it. 00:37:39.137 [2024-11-17 02:57:47.512063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.137 [2024-11-17 02:57:47.512105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.137 qpair failed and we were unable to recover it. 00:37:39.137 [2024-11-17 02:57:47.512212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.137 [2024-11-17 02:57:47.512249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.137 qpair failed and we were unable to recover it. 00:37:39.137 [2024-11-17 02:57:47.512411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.137 [2024-11-17 02:57:47.512447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.137 qpair failed and we were unable to recover it. 00:37:39.137 [2024-11-17 02:57:47.512706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.137 [2024-11-17 02:57:47.512786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.137 qpair failed and we were unable to recover it. 
00:37:39.137 [2024-11-17 02:57:47.512894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.137 [2024-11-17 02:57:47.512933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.137 qpair failed and we were unable to recover it. 00:37:39.137 [2024-11-17 02:57:47.513089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.137 [2024-11-17 02:57:47.513131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.137 qpair failed and we were unable to recover it. 00:37:39.137 [2024-11-17 02:57:47.513273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.137 [2024-11-17 02:57:47.513308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.137 qpair failed and we were unable to recover it. 00:37:39.137 [2024-11-17 02:57:47.513476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.137 [2024-11-17 02:57:47.513511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.137 qpair failed and we were unable to recover it. 00:37:39.137 [2024-11-17 02:57:47.513614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.137 [2024-11-17 02:57:47.513666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.137 qpair failed and we were unable to recover it. 
00:37:39.137 [2024-11-17 02:57:47.513892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.137 [2024-11-17 02:57:47.513927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.137 qpair failed and we were unable to recover it.
00:37:39.137 [2024-11-17 02:57:47.514039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.137 [2024-11-17 02:57:47.514073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.137 qpair failed and we were unable to recover it.
00:37:39.137 [2024-11-17 02:57:47.514217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.137 [2024-11-17 02:57:47.514253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.137 qpair failed and we were unable to recover it.
00:37:39.137 [2024-11-17 02:57:47.514371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.137 [2024-11-17 02:57:47.514407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.137 qpair failed and we were unable to recover it.
00:37:39.137 [2024-11-17 02:57:47.514566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.137 [2024-11-17 02:57:47.514623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.137 qpair failed and we were unable to recover it.
00:37:39.137 [2024-11-17 02:57:47.514864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.137 [2024-11-17 02:57:47.514926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.137 qpair failed and we were unable to recover it.
00:37:39.137 [2024-11-17 02:57:47.515057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.137 [2024-11-17 02:57:47.515092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.137 qpair failed and we were unable to recover it.
00:37:39.137 [2024-11-17 02:57:47.515256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.137 [2024-11-17 02:57:47.515304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.137 qpair failed and we were unable to recover it.
00:37:39.137 [2024-11-17 02:57:47.515487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.137 [2024-11-17 02:57:47.515541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.137 qpair failed and we were unable to recover it.
00:37:39.137 [2024-11-17 02:57:47.515773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.137 [2024-11-17 02:57:47.515832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.137 qpair failed and we were unable to recover it.
00:37:39.137 [2024-11-17 02:57:47.515984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.137 [2024-11-17 02:57:47.516020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.137 qpair failed and we were unable to recover it.
00:37:39.137 [2024-11-17 02:57:47.516135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.137 [2024-11-17 02:57:47.516171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.137 qpair failed and we were unable to recover it.
00:37:39.137 [2024-11-17 02:57:47.516390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.137 [2024-11-17 02:57:47.516444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.137 qpair failed and we were unable to recover it.
00:37:39.137 [2024-11-17 02:57:47.516710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.137 [2024-11-17 02:57:47.516777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.137 qpair failed and we were unable to recover it.
00:37:39.137 [2024-11-17 02:57:47.516915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.137 [2024-11-17 02:57:47.516950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.137 qpair failed and we were unable to recover it.
00:37:39.137 [2024-11-17 02:57:47.517087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.137 [2024-11-17 02:57:47.517134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.137 qpair failed and we were unable to recover it.
00:37:39.137 [2024-11-17 02:57:47.517265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.137 [2024-11-17 02:57:47.517324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.137 qpair failed and we were unable to recover it.
00:37:39.137 [2024-11-17 02:57:47.517481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.137 [2024-11-17 02:57:47.517543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.137 qpair failed and we were unable to recover it.
00:37:39.137 [2024-11-17 02:57:47.517700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.137 [2024-11-17 02:57:47.517755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.137 qpair failed and we were unable to recover it.
00:37:39.137 [2024-11-17 02:57:47.517944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.137 [2024-11-17 02:57:47.517993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.137 qpair failed and we were unable to recover it.
00:37:39.137 [2024-11-17 02:57:47.518220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.137 [2024-11-17 02:57:47.518271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.137 qpair failed and we were unable to recover it.
00:37:39.137 [2024-11-17 02:57:47.518404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.137 [2024-11-17 02:57:47.518453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.137 qpair failed and we were unable to recover it.
00:37:39.137 [2024-11-17 02:57:47.518694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.137 [2024-11-17 02:57:47.518757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.137 qpair failed and we were unable to recover it.
00:37:39.137 [2024-11-17 02:57:47.518931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.137 [2024-11-17 02:57:47.518965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.137 qpair failed and we were unable to recover it.
00:37:39.137 [2024-11-17 02:57:47.519109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.137 [2024-11-17 02:57:47.519145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.137 qpair failed and we were unable to recover it.
00:37:39.137 [2024-11-17 02:57:47.519302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.137 [2024-11-17 02:57:47.519355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.137 qpair failed and we were unable to recover it.
00:37:39.137 [2024-11-17 02:57:47.519475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.137 [2024-11-17 02:57:47.519509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.137 qpair failed and we were unable to recover it.
00:37:39.137 [2024-11-17 02:57:47.519715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.137 [2024-11-17 02:57:47.519753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.137 qpair failed and we were unable to recover it.
00:37:39.137 [2024-11-17 02:57:47.519897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.137 [2024-11-17 02:57:47.519932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.137 qpair failed and we were unable to recover it.
00:37:39.137 [2024-11-17 02:57:47.520094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.137 [2024-11-17 02:57:47.520136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.137 qpair failed and we were unable to recover it.
00:37:39.137 [2024-11-17 02:57:47.520264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.137 [2024-11-17 02:57:47.520298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.137 qpair failed and we were unable to recover it.
00:37:39.137 [2024-11-17 02:57:47.520433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.137 [2024-11-17 02:57:47.520482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.137 qpair failed and we were unable to recover it.
00:37:39.137 [2024-11-17 02:57:47.520619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.137 [2024-11-17 02:57:47.520656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.137 qpair failed and we were unable to recover it.
00:37:39.138 [2024-11-17 02:57:47.520801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.138 [2024-11-17 02:57:47.520850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.138 qpair failed and we were unable to recover it.
00:37:39.138 [2024-11-17 02:57:47.520965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.138 [2024-11-17 02:57:47.521000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.138 qpair failed and we were unable to recover it.
00:37:39.138 [2024-11-17 02:57:47.521144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.138 [2024-11-17 02:57:47.521204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.138 qpair failed and we were unable to recover it.
00:37:39.138 [2024-11-17 02:57:47.521347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.138 [2024-11-17 02:57:47.521386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.138 qpair failed and we were unable to recover it.
00:37:39.138 [2024-11-17 02:57:47.521525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.138 [2024-11-17 02:57:47.521560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.138 qpair failed and we were unable to recover it.
00:37:39.138 [2024-11-17 02:57:47.521696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.138 [2024-11-17 02:57:47.521731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.138 qpair failed and we were unable to recover it.
00:37:39.138 [2024-11-17 02:57:47.521861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.138 [2024-11-17 02:57:47.521895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.138 qpair failed and we were unable to recover it.
00:37:39.138 [2024-11-17 02:57:47.522024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.138 [2024-11-17 02:57:47.522074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.138 qpair failed and we were unable to recover it.
00:37:39.138 [2024-11-17 02:57:47.522301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.138 [2024-11-17 02:57:47.522355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.138 qpair failed and we were unable to recover it.
00:37:39.138 [2024-11-17 02:57:47.522538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.138 [2024-11-17 02:57:47.522586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.138 qpair failed and we were unable to recover it.
00:37:39.138 [2024-11-17 02:57:47.522730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.138 [2024-11-17 02:57:47.522766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.138 qpair failed and we were unable to recover it.
00:37:39.138 [2024-11-17 02:57:47.522930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.138 [2024-11-17 02:57:47.522965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.138 qpair failed and we were unable to recover it.
00:37:39.138 [2024-11-17 02:57:47.523067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.138 [2024-11-17 02:57:47.523116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.138 qpair failed and we were unable to recover it.
00:37:39.138 [2024-11-17 02:57:47.523341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.138 [2024-11-17 02:57:47.523405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.138 qpair failed and we were unable to recover it.
00:37:39.138 [2024-11-17 02:57:47.523590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.138 [2024-11-17 02:57:47.523642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.138 qpair failed and we were unable to recover it.
00:37:39.138 [2024-11-17 02:57:47.523829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.138 [2024-11-17 02:57:47.523891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.138 qpair failed and we were unable to recover it.
00:37:39.138 [2024-11-17 02:57:47.524060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.138 [2024-11-17 02:57:47.524110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.138 qpair failed and we were unable to recover it.
00:37:39.138 [2024-11-17 02:57:47.524262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.138 [2024-11-17 02:57:47.524318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.138 qpair failed and we were unable to recover it.
00:37:39.138 [2024-11-17 02:57:47.524498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.138 [2024-11-17 02:57:47.524542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.138 qpair failed and we were unable to recover it.
00:37:39.138 [2024-11-17 02:57:47.524693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.138 [2024-11-17 02:57:47.524781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.138 qpair failed and we were unable to recover it.
00:37:39.138 [2024-11-17 02:57:47.524946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.138 [2024-11-17 02:57:47.524981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.138 qpair failed and we were unable to recover it.
00:37:39.138 [2024-11-17 02:57:47.525119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.138 [2024-11-17 02:57:47.525155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.138 qpair failed and we were unable to recover it.
00:37:39.138 [2024-11-17 02:57:47.525337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.138 [2024-11-17 02:57:47.525401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.138 qpair failed and we were unable to recover it.
00:37:39.138 [2024-11-17 02:57:47.525543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.138 [2024-11-17 02:57:47.525598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.138 qpair failed and we were unable to recover it.
00:37:39.138 [2024-11-17 02:57:47.525701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.138 [2024-11-17 02:57:47.525740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.138 qpair failed and we were unable to recover it.
00:37:39.138 [2024-11-17 02:57:47.525925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.138 [2024-11-17 02:57:47.525959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.138 qpair failed and we were unable to recover it.
00:37:39.138 [2024-11-17 02:57:47.526086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.138 [2024-11-17 02:57:47.526127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.138 qpair failed and we were unable to recover it.
00:37:39.138 [2024-11-17 02:57:47.526277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.138 [2024-11-17 02:57:47.526311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.138 qpair failed and we were unable to recover it.
00:37:39.138 [2024-11-17 02:57:47.526498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.138 [2024-11-17 02:57:47.526538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.138 qpair failed and we were unable to recover it.
00:37:39.138 [2024-11-17 02:57:47.526723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.138 [2024-11-17 02:57:47.526762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.138 qpair failed and we were unable to recover it.
00:37:39.138 [2024-11-17 02:57:47.526880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.138 [2024-11-17 02:57:47.526914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.138 qpair failed and we were unable to recover it.
00:37:39.138 [2024-11-17 02:57:47.527027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.138 [2024-11-17 02:57:47.527063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.138 qpair failed and we were unable to recover it.
00:37:39.138 [2024-11-17 02:57:47.527225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.138 [2024-11-17 02:57:47.527259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.138 qpair failed and we were unable to recover it.
00:37:39.138 [2024-11-17 02:57:47.527404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.138 [2024-11-17 02:57:47.527442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.138 qpair failed and we were unable to recover it.
00:37:39.138 [2024-11-17 02:57:47.527587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.138 [2024-11-17 02:57:47.527627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.138 qpair failed and we were unable to recover it.
00:37:39.138 [2024-11-17 02:57:47.527786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.138 [2024-11-17 02:57:47.527824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.138 qpair failed and we were unable to recover it.
00:37:39.138 [2024-11-17 02:57:47.527950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.138 [2024-11-17 02:57:47.527988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.138 qpair failed and we were unable to recover it.
00:37:39.138 [2024-11-17 02:57:47.528145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.138 [2024-11-17 02:57:47.528180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.138 qpair failed and we were unable to recover it.
00:37:39.138 [2024-11-17 02:57:47.528323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.138 [2024-11-17 02:57:47.528359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.138 qpair failed and we were unable to recover it.
00:37:39.139 [2024-11-17 02:57:47.528519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.139 [2024-11-17 02:57:47.528557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.139 qpair failed and we were unable to recover it.
00:37:39.139 [2024-11-17 02:57:47.528685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.139 [2024-11-17 02:57:47.528726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.139 qpair failed and we were unable to recover it.
00:37:39.139 [2024-11-17 02:57:47.528893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.139 [2024-11-17 02:57:47.528931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.139 qpair failed and we were unable to recover it.
00:37:39.139 [2024-11-17 02:57:47.529109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.139 [2024-11-17 02:57:47.529168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.139 qpair failed and we were unable to recover it.
00:37:39.139 [2024-11-17 02:57:47.529299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.139 [2024-11-17 02:57:47.529334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.139 qpair failed and we were unable to recover it.
00:37:39.139 [2024-11-17 02:57:47.529521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.139 [2024-11-17 02:57:47.529556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.139 qpair failed and we were unable to recover it.
00:37:39.139 [2024-11-17 02:57:47.529752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.139 [2024-11-17 02:57:47.529790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.139 qpair failed and we were unable to recover it.
00:37:39.139 [2024-11-17 02:57:47.529990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.139 [2024-11-17 02:57:47.530026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.139 qpair failed and we were unable to recover it.
00:37:39.139 [2024-11-17 02:57:47.530177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.139 [2024-11-17 02:57:47.530212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.139 qpair failed and we were unable to recover it.
00:37:39.139 [2024-11-17 02:57:47.530315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.139 [2024-11-17 02:57:47.530349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.139 qpair failed and we were unable to recover it.
00:37:39.139 [2024-11-17 02:57:47.530489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.139 [2024-11-17 02:57:47.530528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.139 qpair failed and we were unable to recover it.
00:37:39.139 [2024-11-17 02:57:47.530744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.139 [2024-11-17 02:57:47.530782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.139 qpair failed and we were unable to recover it.
00:37:39.139 [2024-11-17 02:57:47.530964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.139 [2024-11-17 02:57:47.531002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.139 qpair failed and we were unable to recover it.
00:37:39.139 [2024-11-17 02:57:47.531129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.139 [2024-11-17 02:57:47.531180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.139 qpair failed and we were unable to recover it.
00:37:39.139 [2024-11-17 02:57:47.531338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.139 [2024-11-17 02:57:47.531372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.139 qpair failed and we were unable to recover it.
00:37:39.139 [2024-11-17 02:57:47.531518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.139 [2024-11-17 02:57:47.531562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.139 qpair failed and we were unable to recover it.
00:37:39.139 [2024-11-17 02:57:47.531706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.139 [2024-11-17 02:57:47.531747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.139 qpair failed and we were unable to recover it.
00:37:39.139 [2024-11-17 02:57:47.531887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.139 [2024-11-17 02:57:47.531920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.139 qpair failed and we were unable to recover it.
00:37:39.139 [2024-11-17 02:57:47.532042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.139 [2024-11-17 02:57:47.532076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.139 qpair failed and we were unable to recover it.
00:37:39.139 [2024-11-17 02:57:47.532190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.139 [2024-11-17 02:57:47.532225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.139 qpair failed and we were unable to recover it.
00:37:39.139 [2024-11-17 02:57:47.532331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.139 [2024-11-17 02:57:47.532364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.139 qpair failed and we were unable to recover it.
00:37:39.139 [2024-11-17 02:57:47.532481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.139 [2024-11-17 02:57:47.532534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.139 qpair failed and we were unable to recover it.
00:37:39.139 [2024-11-17 02:57:47.532716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.139 [2024-11-17 02:57:47.532754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.139 qpair failed and we were unable to recover it.
00:37:39.139 [2024-11-17 02:57:47.532880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.139 [2024-11-17 02:57:47.532931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.139 qpair failed and we were unable to recover it.
00:37:39.139 [2024-11-17 02:57:47.533111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.139 [2024-11-17 02:57:47.533180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.139 qpair failed and we were unable to recover it.
00:37:39.139 [2024-11-17 02:57:47.533381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.139 [2024-11-17 02:57:47.533462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.139 qpair failed and we were unable to recover it.
00:37:39.139 [2024-11-17 02:57:47.533633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.139 [2024-11-17 02:57:47.533688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.139 qpair failed and we were unable to recover it.
00:37:39.139 [2024-11-17 02:57:47.533823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.139 [2024-11-17 02:57:47.533858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.139 qpair failed and we were unable to recover it.
00:37:39.139 [2024-11-17 02:57:47.534000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.139 [2024-11-17 02:57:47.534035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.139 qpair failed and we were unable to recover it.
00:37:39.139 [2024-11-17 02:57:47.534192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.139 [2024-11-17 02:57:47.534227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.139 qpair failed and we were unable to recover it.
00:37:39.139 [2024-11-17 02:57:47.534329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.139 [2024-11-17 02:57:47.534364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.139 qpair failed and we were unable to recover it.
00:37:39.139 [2024-11-17 02:57:47.534536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.139 [2024-11-17 02:57:47.534570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.139 qpair failed and we were unable to recover it.
00:37:39.139 [2024-11-17 02:57:47.534698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.139 [2024-11-17 02:57:47.534732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.139 qpair failed and we were unable to recover it.
00:37:39.139 [2024-11-17 02:57:47.534844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.139 [2024-11-17 02:57:47.534880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.139 qpair failed and we were unable to recover it.
00:37:39.139 [2024-11-17 02:57:47.535017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.139 [2024-11-17 02:57:47.535050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.139 qpair failed and we were unable to recover it.
00:37:39.139 [2024-11-17 02:57:47.535191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.139 [2024-11-17 02:57:47.535230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.139 qpair failed and we were unable to recover it.
00:37:39.139 [2024-11-17 02:57:47.535363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.139 [2024-11-17 02:57:47.535412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.139 qpair failed and we were unable to recover it.
00:37:39.139 [2024-11-17 02:57:47.535563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.139 [2024-11-17 02:57:47.535601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.139 qpair failed and we were unable to recover it.
00:37:39.139 [2024-11-17 02:57:47.535753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.139 [2024-11-17 02:57:47.535791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.139 qpair failed and we were unable to recover it.
00:37:39.139 [2024-11-17 02:57:47.535962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.139 [2024-11-17 02:57:47.535999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.139 qpair failed and we were unable to recover it.
00:37:39.139 [2024-11-17 02:57:47.536132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.139 [2024-11-17 02:57:47.536167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.139 qpair failed and we were unable to recover it.
00:37:39.139 [2024-11-17 02:57:47.536321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.139 [2024-11-17 02:57:47.536372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.139 qpair failed and we were unable to recover it.
00:37:39.139 [2024-11-17 02:57:47.536564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.139 [2024-11-17 02:57:47.536614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.139 qpair failed and we were unable to recover it.
00:37:39.139 [2024-11-17 02:57:47.536824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.139 [2024-11-17 02:57:47.536882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.139 qpair failed and we were unable to recover it. 00:37:39.139 [2024-11-17 02:57:47.537044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.139 [2024-11-17 02:57:47.537108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.139 qpair failed and we were unable to recover it. 00:37:39.139 [2024-11-17 02:57:47.537269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.139 [2024-11-17 02:57:47.537308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.139 qpair failed and we were unable to recover it. 00:37:39.139 [2024-11-17 02:57:47.537463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.139 [2024-11-17 02:57:47.537501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.139 qpair failed and we were unable to recover it. 00:37:39.139 [2024-11-17 02:57:47.537665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.139 [2024-11-17 02:57:47.537700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.139 qpair failed and we were unable to recover it. 
00:37:39.139 [2024-11-17 02:57:47.537836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.139 [2024-11-17 02:57:47.537871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.139 qpair failed and we were unable to recover it. 00:37:39.139 [2024-11-17 02:57:47.538073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.139 [2024-11-17 02:57:47.538121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.139 qpair failed and we were unable to recover it. 00:37:39.139 [2024-11-17 02:57:47.538239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.139 [2024-11-17 02:57:47.538292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.139 qpair failed and we were unable to recover it. 00:37:39.139 [2024-11-17 02:57:47.538460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.139 [2024-11-17 02:57:47.538498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.139 qpair failed and we were unable to recover it. 00:37:39.139 [2024-11-17 02:57:47.538635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.139 [2024-11-17 02:57:47.538672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.139 qpair failed and we were unable to recover it. 
00:37:39.139 [2024-11-17 02:57:47.538775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.139 [2024-11-17 02:57:47.538812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.139 qpair failed and we were unable to recover it. 00:37:39.139 [2024-11-17 02:57:47.539000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.139 [2024-11-17 02:57:47.539037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.139 qpair failed and we were unable to recover it. 00:37:39.139 [2024-11-17 02:57:47.539203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.139 [2024-11-17 02:57:47.539252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.139 qpair failed and we were unable to recover it. 00:37:39.140 [2024-11-17 02:57:47.539457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.140 [2024-11-17 02:57:47.539499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.140 qpair failed and we were unable to recover it. 00:37:39.140 [2024-11-17 02:57:47.539775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.140 [2024-11-17 02:57:47.539834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.140 qpair failed and we were unable to recover it. 
00:37:39.140 [2024-11-17 02:57:47.540038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.140 [2024-11-17 02:57:47.540073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.140 qpair failed and we were unable to recover it. 00:37:39.140 [2024-11-17 02:57:47.540223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.140 [2024-11-17 02:57:47.540258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.140 qpair failed and we were unable to recover it. 00:37:39.140 [2024-11-17 02:57:47.540395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.140 [2024-11-17 02:57:47.540435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.140 qpair failed and we were unable to recover it. 00:37:39.140 [2024-11-17 02:57:47.540639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.140 [2024-11-17 02:57:47.540677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.140 qpair failed and we were unable to recover it. 00:37:39.140 [2024-11-17 02:57:47.540815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.140 [2024-11-17 02:57:47.540853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.140 qpair failed and we were unable to recover it. 
00:37:39.140 [2024-11-17 02:57:47.541027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.140 [2024-11-17 02:57:47.541064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.140 qpair failed and we were unable to recover it. 00:37:39.140 [2024-11-17 02:57:47.541223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.140 [2024-11-17 02:57:47.541258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.140 qpair failed and we were unable to recover it. 00:37:39.140 [2024-11-17 02:57:47.541393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.140 [2024-11-17 02:57:47.541456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.140 qpair failed and we were unable to recover it. 00:37:39.140 [2024-11-17 02:57:47.541679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.140 [2024-11-17 02:57:47.541717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.140 qpair failed and we were unable to recover it. 00:37:39.140 [2024-11-17 02:57:47.542012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.140 [2024-11-17 02:57:47.542074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.140 qpair failed and we were unable to recover it. 
00:37:39.140 [2024-11-17 02:57:47.542228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.140 [2024-11-17 02:57:47.542261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.140 qpair failed and we were unable to recover it. 00:37:39.140 [2024-11-17 02:57:47.542394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.140 [2024-11-17 02:57:47.542429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.140 qpair failed and we were unable to recover it. 00:37:39.140 [2024-11-17 02:57:47.542663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.140 [2024-11-17 02:57:47.542701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.140 qpair failed and we were unable to recover it. 00:37:39.140 [2024-11-17 02:57:47.542848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.140 [2024-11-17 02:57:47.542886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.140 qpair failed and we were unable to recover it. 00:37:39.140 [2024-11-17 02:57:47.543026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.140 [2024-11-17 02:57:47.543064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.140 qpair failed and we were unable to recover it. 
00:37:39.140 [2024-11-17 02:57:47.543230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.140 [2024-11-17 02:57:47.543266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.140 qpair failed and we were unable to recover it. 00:37:39.140 [2024-11-17 02:57:47.543453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.140 [2024-11-17 02:57:47.543491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.140 qpair failed and we were unable to recover it. 00:37:39.140 [2024-11-17 02:57:47.543654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.140 [2024-11-17 02:57:47.543751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.140 qpair failed and we were unable to recover it. 00:37:39.140 [2024-11-17 02:57:47.543934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.140 [2024-11-17 02:57:47.543972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.140 qpair failed and we were unable to recover it. 00:37:39.140 [2024-11-17 02:57:47.544170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.140 [2024-11-17 02:57:47.544205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.140 qpair failed and we were unable to recover it. 
00:37:39.140 [2024-11-17 02:57:47.544324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.140 [2024-11-17 02:57:47.544373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.140 qpair failed and we were unable to recover it. 00:37:39.140 [2024-11-17 02:57:47.544597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.140 [2024-11-17 02:57:47.544640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.140 qpair failed and we were unable to recover it. 00:37:39.140 [2024-11-17 02:57:47.544794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.140 [2024-11-17 02:57:47.544868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.140 qpair failed and we were unable to recover it. 00:37:39.140 [2024-11-17 02:57:47.545023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.140 [2024-11-17 02:57:47.545058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.140 qpair failed and we were unable to recover it. 00:37:39.140 [2024-11-17 02:57:47.545253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.140 [2024-11-17 02:57:47.545303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.140 qpair failed and we were unable to recover it. 
00:37:39.140 [2024-11-17 02:57:47.545475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.140 [2024-11-17 02:57:47.545535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.140 qpair failed and we were unable to recover it. 00:37:39.140 [2024-11-17 02:57:47.545805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.140 [2024-11-17 02:57:47.545860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.140 qpair failed and we were unable to recover it. 00:37:39.140 [2024-11-17 02:57:47.546022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.140 [2024-11-17 02:57:47.546057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.140 qpair failed and we were unable to recover it. 00:37:39.140 [2024-11-17 02:57:47.546232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.140 [2024-11-17 02:57:47.546287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.140 qpair failed and we were unable to recover it. 00:37:39.140 [2024-11-17 02:57:47.546525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.140 [2024-11-17 02:57:47.546599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.140 qpair failed and we were unable to recover it. 
00:37:39.140 [2024-11-17 02:57:47.546832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.140 [2024-11-17 02:57:47.546873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.141 qpair failed and we were unable to recover it. 00:37:39.141 [2024-11-17 02:57:47.547028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.141 [2024-11-17 02:57:47.547062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.141 qpair failed and we were unable to recover it. 00:37:39.141 [2024-11-17 02:57:47.547191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.141 [2024-11-17 02:57:47.547226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.141 qpair failed and we were unable to recover it. 00:37:39.141 [2024-11-17 02:57:47.547361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.141 [2024-11-17 02:57:47.547414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.141 qpair failed and we were unable to recover it. 00:37:39.141 [2024-11-17 02:57:47.547543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.141 [2024-11-17 02:57:47.547599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.141 qpair failed and we were unable to recover it. 
00:37:39.141 [2024-11-17 02:57:47.547812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.141 [2024-11-17 02:57:47.547846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.141 qpair failed and we were unable to recover it. 00:37:39.141 [2024-11-17 02:57:47.548038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.141 [2024-11-17 02:57:47.548076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.141 qpair failed and we were unable to recover it. 00:37:39.141 [2024-11-17 02:57:47.548255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.141 [2024-11-17 02:57:47.548305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.141 qpair failed and we were unable to recover it. 00:37:39.141 [2024-11-17 02:57:47.548471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.141 [2024-11-17 02:57:47.548527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.141 qpair failed and we were unable to recover it. 00:37:39.141 [2024-11-17 02:57:47.548818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.141 [2024-11-17 02:57:47.548879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.141 qpair failed and we were unable to recover it. 
00:37:39.141 [2024-11-17 02:57:47.549005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.141 [2024-11-17 02:57:47.549045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.141 qpair failed and we were unable to recover it. 00:37:39.141 [2024-11-17 02:57:47.549216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.141 [2024-11-17 02:57:47.549251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.141 qpair failed and we were unable to recover it. 00:37:39.141 [2024-11-17 02:57:47.549415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.141 [2024-11-17 02:57:47.549449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.141 qpair failed and we were unable to recover it. 00:37:39.141 [2024-11-17 02:57:47.549736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.141 [2024-11-17 02:57:47.549792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.141 qpair failed and we were unable to recover it. 00:37:39.141 [2024-11-17 02:57:47.549963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.141 [2024-11-17 02:57:47.550000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.141 qpair failed and we were unable to recover it. 
00:37:39.141 [2024-11-17 02:57:47.550195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.141 [2024-11-17 02:57:47.550231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.141 qpair failed and we were unable to recover it. 00:37:39.141 [2024-11-17 02:57:47.550392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.141 [2024-11-17 02:57:47.550442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.141 qpair failed and we were unable to recover it. 00:37:39.141 [2024-11-17 02:57:47.550678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.141 [2024-11-17 02:57:47.550727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.141 qpair failed and we were unable to recover it. 00:37:39.141 [2024-11-17 02:57:47.550973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.141 [2024-11-17 02:57:47.551010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.141 qpair failed and we were unable to recover it. 00:37:39.141 [2024-11-17 02:57:47.551128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.141 [2024-11-17 02:57:47.551163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.141 qpair failed and we were unable to recover it. 
00:37:39.141 [2024-11-17 02:57:47.551337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.141 [2024-11-17 02:57:47.551387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.141 qpair failed and we were unable to recover it. 00:37:39.141 [2024-11-17 02:57:47.551547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.141 [2024-11-17 02:57:47.551598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.141 qpair failed and we were unable to recover it. 00:37:39.141 [2024-11-17 02:57:47.551876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.141 [2024-11-17 02:57:47.551932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.141 qpair failed and we were unable to recover it. 00:37:39.141 [2024-11-17 02:57:47.552074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.141 [2024-11-17 02:57:47.552143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.141 qpair failed and we were unable to recover it. 00:37:39.141 [2024-11-17 02:57:47.552305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.141 [2024-11-17 02:57:47.552339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.141 qpair failed and we were unable to recover it. 
00:37:39.141 [2024-11-17 02:57:47.552652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.141 [2024-11-17 02:57:47.552722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.142 qpair failed and we were unable to recover it. 00:37:39.142 [2024-11-17 02:57:47.552874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.142 [2024-11-17 02:57:47.552924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.142 qpair failed and we were unable to recover it. 00:37:39.142 [2024-11-17 02:57:47.553041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.142 [2024-11-17 02:57:47.553090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.142 qpair failed and we were unable to recover it. 00:37:39.142 [2024-11-17 02:57:47.553283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.142 [2024-11-17 02:57:47.553316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.142 qpair failed and we were unable to recover it. 00:37:39.142 [2024-11-17 02:57:47.553512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.142 [2024-11-17 02:57:47.553549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.142 qpair failed and we were unable to recover it. 
00:37:39.142 [2024-11-17 02:57:47.553766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.142 [2024-11-17 02:57:47.553825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.142 qpair failed and we were unable to recover it.
00:37:39.142 [2024-11-17 02:57:47.553964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.142 [2024-11-17 02:57:47.554001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.142 qpair failed and we were unable to recover it.
00:37:39.142 [2024-11-17 02:57:47.554164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.142 [2024-11-17 02:57:47.554199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.142 qpair failed and we were unable to recover it.
00:37:39.142 [2024-11-17 02:57:47.554354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.142 [2024-11-17 02:57:47.554409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.142 qpair failed and we were unable to recover it.
00:37:39.142 [2024-11-17 02:57:47.554565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.142 [2024-11-17 02:57:47.554602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.142 qpair failed and we were unable to recover it.
00:37:39.142 [2024-11-17 02:57:47.554739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.142 [2024-11-17 02:57:47.554781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.142 qpair failed and we were unable to recover it.
00:37:39.142 [2024-11-17 02:57:47.554915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.142 [2024-11-17 02:57:47.554952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.142 qpair failed and we were unable to recover it.
00:37:39.142 [2024-11-17 02:57:47.555121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.142 [2024-11-17 02:57:47.555155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.142 qpair failed and we were unable to recover it.
00:37:39.142 [2024-11-17 02:57:47.555287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.142 [2024-11-17 02:57:47.555321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.142 qpair failed and we were unable to recover it.
00:37:39.142 [2024-11-17 02:57:47.555504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.142 [2024-11-17 02:57:47.555541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.142 qpair failed and we were unable to recover it.
00:37:39.142 [2024-11-17 02:57:47.555725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.142 [2024-11-17 02:57:47.555759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.142 qpair failed and we were unable to recover it.
00:37:39.142 [2024-11-17 02:57:47.555870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.142 [2024-11-17 02:57:47.555905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.142 qpair failed and we were unable to recover it.
00:37:39.142 [2024-11-17 02:57:47.556040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.142 [2024-11-17 02:57:47.556083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.142 qpair failed and we were unable to recover it.
00:37:39.142 [2024-11-17 02:57:47.556229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.142 [2024-11-17 02:57:47.556264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.142 qpair failed and we were unable to recover it.
00:37:39.142 [2024-11-17 02:57:47.556447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.142 [2024-11-17 02:57:47.556484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.142 qpair failed and we were unable to recover it.
00:37:39.142 [2024-11-17 02:57:47.556629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.142 [2024-11-17 02:57:47.556677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.142 qpair failed and we were unable to recover it.
00:37:39.142 [2024-11-17 02:57:47.556815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.142 [2024-11-17 02:57:47.556849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.142 qpair failed and we were unable to recover it.
00:37:39.142 [2024-11-17 02:57:47.557046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.142 [2024-11-17 02:57:47.557088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.142 qpair failed and we were unable to recover it.
00:37:39.142 [2024-11-17 02:57:47.557255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.142 [2024-11-17 02:57:47.557290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.142 qpair failed and we were unable to recover it.
00:37:39.142 [2024-11-17 02:57:47.557438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.142 [2024-11-17 02:57:47.557472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.142 qpair failed and we were unable to recover it.
00:37:39.142 [2024-11-17 02:57:47.557649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.142 [2024-11-17 02:57:47.557682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.142 qpair failed and we were unable to recover it.
00:37:39.142 [2024-11-17 02:57:47.557855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.142 [2024-11-17 02:57:47.557906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.142 qpair failed and we were unable to recover it.
00:37:39.142 [2024-11-17 02:57:47.558059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.142 [2024-11-17 02:57:47.558108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.142 qpair failed and we were unable to recover it.
00:37:39.142 [2024-11-17 02:57:47.558278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.142 [2024-11-17 02:57:47.558312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.142 qpair failed and we were unable to recover it.
00:37:39.142 [2024-11-17 02:57:47.558460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.142 [2024-11-17 02:57:47.558497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.142 qpair failed and we were unable to recover it.
00:37:39.142 [2024-11-17 02:57:47.558649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.142 [2024-11-17 02:57:47.558682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.142 qpair failed and we were unable to recover it.
00:37:39.142 [2024-11-17 02:57:47.558796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.142 [2024-11-17 02:57:47.558830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.142 qpair failed and we were unable to recover it.
00:37:39.142 [2024-11-17 02:57:47.559003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.142 [2024-11-17 02:57:47.559056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.142 qpair failed and we were unable to recover it.
00:37:39.142 [2024-11-17 02:57:47.559222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.142 [2024-11-17 02:57:47.559256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.142 qpair failed and we were unable to recover it.
00:37:39.142 [2024-11-17 02:57:47.559388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.142 [2024-11-17 02:57:47.559440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.142 qpair failed and we were unable to recover it.
00:37:39.142 [2024-11-17 02:57:47.559584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.142 [2024-11-17 02:57:47.559623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.142 qpair failed and we were unable to recover it.
00:37:39.142 [2024-11-17 02:57:47.559844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.142 [2024-11-17 02:57:47.559882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.142 qpair failed and we were unable to recover it.
00:37:39.142 [2024-11-17 02:57:47.560038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.142 [2024-11-17 02:57:47.560087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.142 qpair failed and we were unable to recover it.
00:37:39.142 [2024-11-17 02:57:47.560246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.142 [2024-11-17 02:57:47.560279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.142 qpair failed and we were unable to recover it.
00:37:39.142 [2024-11-17 02:57:47.560418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.143 [2024-11-17 02:57:47.560452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.143 qpair failed and we were unable to recover it.
00:37:39.143 [2024-11-17 02:57:47.560632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.143 [2024-11-17 02:57:47.560670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.143 qpair failed and we were unable to recover it.
00:37:39.143 [2024-11-17 02:57:47.560782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.143 [2024-11-17 02:57:47.560820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.143 qpair failed and we were unable to recover it.
00:37:39.143 [2024-11-17 02:57:47.560994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.143 [2024-11-17 02:57:47.561032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.143 qpair failed and we were unable to recover it.
00:37:39.143 [2024-11-17 02:57:47.561201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.143 [2024-11-17 02:57:47.561235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.143 qpair failed and we were unable to recover it.
00:37:39.143 [2024-11-17 02:57:47.561340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.143 [2024-11-17 02:57:47.561401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.143 qpair failed and we were unable to recover it.
00:37:39.143 [2024-11-17 02:57:47.561600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.143 [2024-11-17 02:57:47.561653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.143 qpair failed and we were unable to recover it.
00:37:39.143 [2024-11-17 02:57:47.561760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.143 [2024-11-17 02:57:47.561793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.143 qpair failed and we were unable to recover it.
00:37:39.143 [2024-11-17 02:57:47.561896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.143 [2024-11-17 02:57:47.561929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.143 qpair failed and we were unable to recover it.
00:37:39.143 [2024-11-17 02:57:47.562113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.143 [2024-11-17 02:57:47.562175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.143 qpair failed and we were unable to recover it.
00:37:39.143 [2024-11-17 02:57:47.562337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.143 [2024-11-17 02:57:47.562386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.143 qpair failed and we were unable to recover it.
00:37:39.143 [2024-11-17 02:57:47.562583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.143 [2024-11-17 02:57:47.562633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.143 qpair failed and we were unable to recover it.
00:37:39.143 [2024-11-17 02:57:47.562807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.143 [2024-11-17 02:57:47.562842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.143 qpair failed and we were unable to recover it.
00:37:39.143 [2024-11-17 02:57:47.562982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.143 [2024-11-17 02:57:47.563017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.143 qpair failed and we were unable to recover it.
00:37:39.143 [2024-11-17 02:57:47.563153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.143 [2024-11-17 02:57:47.563187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.143 qpair failed and we were unable to recover it.
00:37:39.143 [2024-11-17 02:57:47.563339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.143 [2024-11-17 02:57:47.563385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.143 qpair failed and we were unable to recover it.
00:37:39.143 [2024-11-17 02:57:47.563523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.143 [2024-11-17 02:57:47.563559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.143 qpair failed and we were unable to recover it.
00:37:39.143 [2024-11-17 02:57:47.563711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.143 [2024-11-17 02:57:47.563748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.143 qpair failed and we were unable to recover it.
00:37:39.143 [2024-11-17 02:57:47.563870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.143 [2024-11-17 02:57:47.563908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.143 qpair failed and we were unable to recover it.
00:37:39.143 [2024-11-17 02:57:47.564112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.143 [2024-11-17 02:57:47.564162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.143 qpair failed and we were unable to recover it.
00:37:39.143 [2024-11-17 02:57:47.564323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.143 [2024-11-17 02:57:47.564378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.143 qpair failed and we were unable to recover it.
00:37:39.143 [2024-11-17 02:57:47.564483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.143 [2024-11-17 02:57:47.564517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.143 qpair failed and we were unable to recover it.
00:37:39.143 [2024-11-17 02:57:47.564669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.143 [2024-11-17 02:57:47.564722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.143 qpair failed and we were unable to recover it.
00:37:39.143 [2024-11-17 02:57:47.564855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.143 [2024-11-17 02:57:47.564888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.143 qpair failed and we were unable to recover it.
00:37:39.143 [2024-11-17 02:57:47.565074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.143 [2024-11-17 02:57:47.565133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.143 qpair failed and we were unable to recover it.
00:37:39.143 [2024-11-17 02:57:47.565284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.143 [2024-11-17 02:57:47.565319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.143 qpair failed and we were unable to recover it.
00:37:39.143 [2024-11-17 02:57:47.565519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.143 [2024-11-17 02:57:47.565554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.143 qpair failed and we were unable to recover it.
00:37:39.143 [2024-11-17 02:57:47.565684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.143 [2024-11-17 02:57:47.565734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.143 qpair failed and we were unable to recover it.
00:37:39.143 [2024-11-17 02:57:47.565841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.143 [2024-11-17 02:57:47.565874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.143 qpair failed and we were unable to recover it.
00:37:39.143 [2024-11-17 02:57:47.565979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.143 [2024-11-17 02:57:47.566013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.143 qpair failed and we were unable to recover it.
00:37:39.143 [2024-11-17 02:57:47.566186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.143 [2024-11-17 02:57:47.566220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.143 qpair failed and we were unable to recover it.
00:37:39.143 [2024-11-17 02:57:47.566349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.143 [2024-11-17 02:57:47.566405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.143 qpair failed and we were unable to recover it.
00:37:39.143 [2024-11-17 02:57:47.566523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.144 [2024-11-17 02:57:47.566561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.144 qpair failed and we were unable to recover it.
00:37:39.144 [2024-11-17 02:57:47.566775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.144 [2024-11-17 02:57:47.566816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.144 qpair failed and we were unable to recover it.
00:37:39.144 [2024-11-17 02:57:47.566972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.144 [2024-11-17 02:57:47.567053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.144 qpair failed and we were unable to recover it.
00:37:39.144 [2024-11-17 02:57:47.567269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.144 [2024-11-17 02:57:47.567307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.144 qpair failed and we were unable to recover it.
00:37:39.144 [2024-11-17 02:57:47.567421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.144 [2024-11-17 02:57:47.567457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.144 qpair failed and we were unable to recover it.
00:37:39.144 [2024-11-17 02:57:47.567626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.144 [2024-11-17 02:57:47.567660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.144 qpair failed and we were unable to recover it.
00:37:39.144 [2024-11-17 02:57:47.567803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.144 [2024-11-17 02:57:47.567838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.144 qpair failed and we were unable to recover it.
00:37:39.144 [2024-11-17 02:57:47.567954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.144 [2024-11-17 02:57:47.567993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.144 qpair failed and we were unable to recover it.
00:37:39.144 [2024-11-17 02:57:47.568187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.144 [2024-11-17 02:57:47.568223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.144 qpair failed and we were unable to recover it.
00:37:39.144 [2024-11-17 02:57:47.568389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.144 [2024-11-17 02:57:47.568423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.144 qpair failed and we were unable to recover it.
00:37:39.144 [2024-11-17 02:57:47.568601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.144 [2024-11-17 02:57:47.568667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.144 qpair failed and we were unable to recover it.
00:37:39.144 [2024-11-17 02:57:47.568841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.144 [2024-11-17 02:57:47.568881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.144 qpair failed and we were unable to recover it.
00:37:39.144 [2024-11-17 02:57:47.569044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.144 [2024-11-17 02:57:47.569088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.144 qpair failed and we were unable to recover it.
00:37:39.144 [2024-11-17 02:57:47.569251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.144 [2024-11-17 02:57:47.569286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.144 qpair failed and we were unable to recover it.
00:37:39.144 [2024-11-17 02:57:47.569388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.144 [2024-11-17 02:57:47.569421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.144 qpair failed and we were unable to recover it.
00:37:39.144 [2024-11-17 02:57:47.569527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.144 [2024-11-17 02:57:47.569580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.144 qpair failed and we were unable to recover it.
00:37:39.144 [2024-11-17 02:57:47.569776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.144 [2024-11-17 02:57:47.569811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.144 qpair failed and we were unable to recover it.
00:37:39.144 [2024-11-17 02:57:47.569912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.144 [2024-11-17 02:57:47.569946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.144 qpair failed and we were unable to recover it.
00:37:39.144 [2024-11-17 02:57:47.570089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.144 [2024-11-17 02:57:47.570130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.144 qpair failed and we were unable to recover it.
00:37:39.144 [2024-11-17 02:57:47.570286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.144 [2024-11-17 02:57:47.570341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.144 qpair failed and we were unable to recover it.
00:37:39.144 [2024-11-17 02:57:47.570540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.144 [2024-11-17 02:57:47.570580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.144 qpair failed and we were unable to recover it.
00:37:39.144 [2024-11-17 02:57:47.570713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.144 [2024-11-17 02:57:47.570757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.144 qpair failed and we were unable to recover it.
00:37:39.144 [2024-11-17 02:57:47.570925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.144 [2024-11-17 02:57:47.570964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.144 qpair failed and we were unable to recover it.
00:37:39.144 [2024-11-17 02:57:47.571116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.144 [2024-11-17 02:57:47.571153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.144 qpair failed and we were unable to recover it.
00:37:39.144 [2024-11-17 02:57:47.571340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.144 [2024-11-17 02:57:47.571403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.144 qpair failed and we were unable to recover it.
00:37:39.144 [2024-11-17 02:57:47.571549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.144 [2024-11-17 02:57:47.571585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.144 qpair failed and we were unable to recover it.
00:37:39.144 [2024-11-17 02:57:47.571753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.144 [2024-11-17 02:57:47.571807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.144 qpair failed and we were unable to recover it.
00:37:39.144 [2024-11-17 02:57:47.571957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.144 [2024-11-17 02:57:47.571992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.144 qpair failed and we were unable to recover it.
00:37:39.144 [2024-11-17 02:57:47.572133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.144 [2024-11-17 02:57:47.572168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.144 qpair failed and we were unable to recover it.
00:37:39.144 [2024-11-17 02:57:47.572319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.144 [2024-11-17 02:57:47.572368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.144 qpair failed and we were unable to recover it.
00:37:39.144 [2024-11-17 02:57:47.572516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.144 [2024-11-17 02:57:47.572551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.144 qpair failed and we were unable to recover it.
00:37:39.144 [2024-11-17 02:57:47.572686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.144 [2024-11-17 02:57:47.572721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.144 qpair failed and we were unable to recover it.
00:37:39.144 [2024-11-17 02:57:47.572852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.144 [2024-11-17 02:57:47.572892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.144 qpair failed and we were unable to recover it.
00:37:39.144 [2024-11-17 02:57:47.573019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.144 [2024-11-17 02:57:47.573052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.144 qpair failed and we were unable to recover it.
00:37:39.144 [2024-11-17 02:57:47.573247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.144 [2024-11-17 02:57:47.573301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.144 qpair failed and we were unable to recover it.
00:37:39.144 [2024-11-17 02:57:47.573455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.144 [2024-11-17 02:57:47.573508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.144 qpair failed and we were unable to recover it.
00:37:39.144 [2024-11-17 02:57:47.573661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.144 [2024-11-17 02:57:47.573701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.144 qpair failed and we were unable to recover it.
00:37:39.144 [2024-11-17 02:57:47.573822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.144 [2024-11-17 02:57:47.573861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.144 qpair failed and we were unable to recover it.
00:37:39.144 [2024-11-17 02:57:47.574037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.144 [2024-11-17 02:57:47.574077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.144 qpair failed and we were unable to recover it.
00:37:39.144 [2024-11-17 02:57:47.574247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.144 [2024-11-17 02:57:47.574283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.144 qpair failed and we were unable to recover it.
00:37:39.144 [2024-11-17 02:57:47.574456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.144 [2024-11-17 02:57:47.574494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.144 qpair failed and we were unable to recover it.
00:37:39.144 [2024-11-17 02:57:47.574651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.144 [2024-11-17 02:57:47.574688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.144 qpair failed and we were unable to recover it.
00:37:39.144 [2024-11-17 02:57:47.574840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.144 [2024-11-17 02:57:47.574892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.144 qpair failed and we were unable to recover it.
00:37:39.144 [2024-11-17 02:57:47.575019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.144 [2024-11-17 02:57:47.575058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.144 qpair failed and we were unable to recover it.
00:37:39.144 [2024-11-17 02:57:47.575201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.144 [2024-11-17 02:57:47.575252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.144 qpair failed and we were unable to recover it.
00:37:39.144 [2024-11-17 02:57:47.575371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.145 [2024-11-17 02:57:47.575408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.145 qpair failed and we were unable to recover it.
00:37:39.145 [2024-11-17 02:57:47.575565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.145 [2024-11-17 02:57:47.575604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.145 qpair failed and we were unable to recover it.
00:37:39.145 [2024-11-17 02:57:47.575799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.145 [2024-11-17 02:57:47.575835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.145 qpair failed and we were unable to recover it.
00:37:39.145 [2024-11-17 02:57:47.575977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.145 [2024-11-17 02:57:47.576016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.145 qpair failed and we were unable to recover it.
00:37:39.145 [2024-11-17 02:57:47.576201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.145 [2024-11-17 02:57:47.576234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.145 qpair failed and we were unable to recover it. 00:37:39.145 [2024-11-17 02:57:47.576428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.145 [2024-11-17 02:57:47.576495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.145 qpair failed and we were unable to recover it. 00:37:39.145 [2024-11-17 02:57:47.576622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.145 [2024-11-17 02:57:47.576665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.145 qpair failed and we were unable to recover it. 00:37:39.145 [2024-11-17 02:57:47.576837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.145 [2024-11-17 02:57:47.576880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.145 qpair failed and we were unable to recover it. 00:37:39.145 [2024-11-17 02:57:47.577045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.145 [2024-11-17 02:57:47.577094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.145 qpair failed and we were unable to recover it. 
00:37:39.145 [2024-11-17 02:57:47.577245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.145 [2024-11-17 02:57:47.577293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.145 qpair failed and we were unable to recover it. 00:37:39.145 [2024-11-17 02:57:47.577465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.145 [2024-11-17 02:57:47.577511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.145 qpair failed and we were unable to recover it. 00:37:39.145 [2024-11-17 02:57:47.577670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.145 [2024-11-17 02:57:47.577706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.145 qpair failed and we were unable to recover it. 00:37:39.145 [2024-11-17 02:57:47.577866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.145 [2024-11-17 02:57:47.577901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.145 qpair failed and we were unable to recover it. 00:37:39.145 [2024-11-17 02:57:47.578015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.145 [2024-11-17 02:57:47.578050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.145 qpair failed and we were unable to recover it. 
00:37:39.145 [2024-11-17 02:57:47.578230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.145 [2024-11-17 02:57:47.578277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.145 qpair failed and we were unable to recover it. 00:37:39.145 [2024-11-17 02:57:47.578387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.145 [2024-11-17 02:57:47.578424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.145 qpair failed and we were unable to recover it. 00:37:39.145 [2024-11-17 02:57:47.578715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.145 [2024-11-17 02:57:47.578755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.145 qpair failed and we were unable to recover it. 00:37:39.145 [2024-11-17 02:57:47.578863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.145 [2024-11-17 02:57:47.578896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.145 qpair failed and we were unable to recover it. 00:37:39.145 [2024-11-17 02:57:47.578998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.145 [2024-11-17 02:57:47.579032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.145 qpair failed and we were unable to recover it. 
00:37:39.145 [2024-11-17 02:57:47.579155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.145 [2024-11-17 02:57:47.579193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.145 qpair failed and we were unable to recover it. 00:37:39.145 [2024-11-17 02:57:47.579316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.145 [2024-11-17 02:57:47.579352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.145 qpair failed and we were unable to recover it. 00:37:39.145 [2024-11-17 02:57:47.579493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.145 [2024-11-17 02:57:47.579548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.145 qpair failed and we were unable to recover it. 00:37:39.145 [2024-11-17 02:57:47.579738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.145 [2024-11-17 02:57:47.579773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.145 qpair failed and we were unable to recover it. 00:37:39.145 [2024-11-17 02:57:47.579876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.145 [2024-11-17 02:57:47.579910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.145 qpair failed and we were unable to recover it. 
00:37:39.427 [2024-11-17 02:57:47.580014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.427 [2024-11-17 02:57:47.580048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.427 qpair failed and we were unable to recover it. 00:37:39.427 [2024-11-17 02:57:47.580170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.427 [2024-11-17 02:57:47.580203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.427 qpair failed and we were unable to recover it. 00:37:39.427 [2024-11-17 02:57:47.580352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.427 [2024-11-17 02:57:47.580399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.427 qpair failed and we were unable to recover it. 00:37:39.427 [2024-11-17 02:57:47.580509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.427 [2024-11-17 02:57:47.580545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.427 qpair failed and we were unable to recover it. 00:37:39.427 [2024-11-17 02:57:47.580736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.427 [2024-11-17 02:57:47.580775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.427 qpair failed and we were unable to recover it. 
00:37:39.427 [2024-11-17 02:57:47.580888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.427 [2024-11-17 02:57:47.580925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.427 qpair failed and we were unable to recover it. 00:37:39.427 [2024-11-17 02:57:47.581070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.427 [2024-11-17 02:57:47.581114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.427 qpair failed and we were unable to recover it. 00:37:39.427 [2024-11-17 02:57:47.581333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.428 [2024-11-17 02:57:47.581367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.428 qpair failed and we were unable to recover it. 00:37:39.428 [2024-11-17 02:57:47.581509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.428 [2024-11-17 02:57:47.581543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.428 qpair failed and we were unable to recover it. 00:37:39.428 [2024-11-17 02:57:47.581728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.428 [2024-11-17 02:57:47.581765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.428 qpair failed and we were unable to recover it. 
00:37:39.428 [2024-11-17 02:57:47.581884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.428 [2024-11-17 02:57:47.581936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.428 qpair failed and we were unable to recover it. 00:37:39.428 [2024-11-17 02:57:47.582091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.428 [2024-11-17 02:57:47.582148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.428 qpair failed and we were unable to recover it. 00:37:39.428 [2024-11-17 02:57:47.582290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.428 [2024-11-17 02:57:47.582324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.428 qpair failed and we were unable to recover it. 00:37:39.428 [2024-11-17 02:57:47.582525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.428 [2024-11-17 02:57:47.582583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.428 qpair failed and we were unable to recover it. 00:37:39.428 [2024-11-17 02:57:47.582732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.428 [2024-11-17 02:57:47.582792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.428 qpair failed and we were unable to recover it. 
00:37:39.428 [2024-11-17 02:57:47.582957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.428 [2024-11-17 02:57:47.582992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.428 qpair failed and we were unable to recover it. 00:37:39.428 [2024-11-17 02:57:47.583141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.428 [2024-11-17 02:57:47.583176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.428 qpair failed and we were unable to recover it. 00:37:39.428 [2024-11-17 02:57:47.583365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.428 [2024-11-17 02:57:47.583433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.428 qpair failed and we were unable to recover it. 00:37:39.428 [2024-11-17 02:57:47.583596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.428 [2024-11-17 02:57:47.583649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.428 qpair failed and we were unable to recover it. 00:37:39.428 [2024-11-17 02:57:47.583768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.428 [2024-11-17 02:57:47.583804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.428 qpair failed and we were unable to recover it. 
00:37:39.428 [2024-11-17 02:57:47.583936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.428 [2024-11-17 02:57:47.583969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.428 qpair failed and we were unable to recover it. 00:37:39.428 [2024-11-17 02:57:47.584109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.428 [2024-11-17 02:57:47.584143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.428 qpair failed and we were unable to recover it. 00:37:39.428 [2024-11-17 02:57:47.584254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.428 [2024-11-17 02:57:47.584287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.428 qpair failed and we were unable to recover it. 00:37:39.428 [2024-11-17 02:57:47.584413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.428 [2024-11-17 02:57:47.584453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.428 qpair failed and we were unable to recover it. 00:37:39.428 [2024-11-17 02:57:47.584601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.428 [2024-11-17 02:57:47.584638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.428 qpair failed and we were unable to recover it. 
00:37:39.428 [2024-11-17 02:57:47.584754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.428 [2024-11-17 02:57:47.584791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.428 qpair failed and we were unable to recover it. 00:37:39.428 [2024-11-17 02:57:47.585038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.428 [2024-11-17 02:57:47.585072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.428 qpair failed and we were unable to recover it. 00:37:39.428 [2024-11-17 02:57:47.585195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.428 [2024-11-17 02:57:47.585228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.428 qpair failed and we were unable to recover it. 00:37:39.428 [2024-11-17 02:57:47.585360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.428 [2024-11-17 02:57:47.585403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.428 qpair failed and we were unable to recover it. 00:37:39.428 [2024-11-17 02:57:47.585532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.428 [2024-11-17 02:57:47.585583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.428 qpair failed and we were unable to recover it. 
00:37:39.428 [2024-11-17 02:57:47.585737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.428 [2024-11-17 02:57:47.585780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.428 qpair failed and we were unable to recover it. 00:37:39.428 [2024-11-17 02:57:47.585940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.428 [2024-11-17 02:57:47.585993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.428 qpair failed and we were unable to recover it. 00:37:39.428 [2024-11-17 02:57:47.586161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.428 [2024-11-17 02:57:47.586196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.428 qpair failed and we were unable to recover it. 00:37:39.428 [2024-11-17 02:57:47.586297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.428 [2024-11-17 02:57:47.586332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.428 qpair failed and we were unable to recover it. 00:37:39.428 [2024-11-17 02:57:47.586482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.428 [2024-11-17 02:57:47.586518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.428 qpair failed and we were unable to recover it. 
00:37:39.428 [2024-11-17 02:57:47.586658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.428 [2024-11-17 02:57:47.586695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.428 qpair failed and we were unable to recover it. 00:37:39.428 [2024-11-17 02:57:47.586808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.428 [2024-11-17 02:57:47.586846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.428 qpair failed and we were unable to recover it. 00:37:39.428 [2024-11-17 02:57:47.587029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.428 [2024-11-17 02:57:47.587062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.428 qpair failed and we were unable to recover it. 00:37:39.428 [2024-11-17 02:57:47.587220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.428 [2024-11-17 02:57:47.587254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.428 qpair failed and we were unable to recover it. 00:37:39.428 [2024-11-17 02:57:47.587355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.428 [2024-11-17 02:57:47.587407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.428 qpair failed and we were unable to recover it. 
00:37:39.428 [2024-11-17 02:57:47.587590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.428 [2024-11-17 02:57:47.587623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.428 qpair failed and we were unable to recover it. 00:37:39.428 [2024-11-17 02:57:47.587726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.428 [2024-11-17 02:57:47.587759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.428 qpair failed and we were unable to recover it. 00:37:39.428 [2024-11-17 02:57:47.587894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.428 [2024-11-17 02:57:47.587932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.428 qpair failed and we were unable to recover it. 00:37:39.428 [2024-11-17 02:57:47.588085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.428 [2024-11-17 02:57:47.588129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.428 qpair failed and we were unable to recover it. 00:37:39.428 [2024-11-17 02:57:47.588283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.428 [2024-11-17 02:57:47.588318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.429 qpair failed and we were unable to recover it. 
00:37:39.429 [2024-11-17 02:57:47.588496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.429 [2024-11-17 02:57:47.588533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.429 qpair failed and we were unable to recover it. 00:37:39.429 [2024-11-17 02:57:47.588655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.429 [2024-11-17 02:57:47.588692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.429 qpair failed and we were unable to recover it. 00:37:39.429 [2024-11-17 02:57:47.588861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.429 [2024-11-17 02:57:47.588899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.429 qpair failed and we were unable to recover it. 00:37:39.429 [2024-11-17 02:57:47.589037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.429 [2024-11-17 02:57:47.589070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.429 qpair failed and we were unable to recover it. 00:37:39.429 [2024-11-17 02:57:47.589213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.429 [2024-11-17 02:57:47.589247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.429 qpair failed and we were unable to recover it. 
00:37:39.429 [2024-11-17 02:57:47.589411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.429 [2024-11-17 02:57:47.589461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.429 qpair failed and we were unable to recover it. 00:37:39.429 [2024-11-17 02:57:47.589656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.429 [2024-11-17 02:57:47.589710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.429 qpair failed and we were unable to recover it. 00:37:39.429 [2024-11-17 02:57:47.589853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.429 [2024-11-17 02:57:47.589908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.429 qpair failed and we were unable to recover it. 00:37:39.429 [2024-11-17 02:57:47.590028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.429 [2024-11-17 02:57:47.590063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.429 qpair failed and we were unable to recover it. 00:37:39.429 [2024-11-17 02:57:47.590208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.429 [2024-11-17 02:57:47.590242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.429 qpair failed and we were unable to recover it. 
00:37:39.429 [2024-11-17 02:57:47.590340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.429 [2024-11-17 02:57:47.590373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.429 qpair failed and we were unable to recover it.
00:37:39.429 [2024-11-17 02:57:47.590528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.429 [2024-11-17 02:57:47.590565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.429 qpair failed and we were unable to recover it.
00:37:39.429 [2024-11-17 02:57:47.590726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.429 [2024-11-17 02:57:47.590763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.429 qpair failed and we were unable to recover it.
00:37:39.429 [2024-11-17 02:57:47.590912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.429 [2024-11-17 02:57:47.590949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.429 qpair failed and we were unable to recover it.
00:37:39.429 [2024-11-17 02:57:47.591194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.429 [2024-11-17 02:57:47.591231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.429 qpair failed and we were unable to recover it.
00:37:39.429 [2024-11-17 02:57:47.591339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.429 [2024-11-17 02:57:47.591381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.429 qpair failed and we were unable to recover it.
00:37:39.429 [2024-11-17 02:57:47.591536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.429 [2024-11-17 02:57:47.591589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.429 qpair failed and we were unable to recover it.
00:37:39.429 [2024-11-17 02:57:47.591712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.429 [2024-11-17 02:57:47.591751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.429 qpair failed and we were unable to recover it.
00:37:39.429 [2024-11-17 02:57:47.591989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.429 [2024-11-17 02:57:47.592023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.429 qpair failed and we were unable to recover it.
00:37:39.429 [2024-11-17 02:57:47.592184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.429 [2024-11-17 02:57:47.592236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.429 qpair failed and we were unable to recover it.
00:37:39.429 [2024-11-17 02:57:47.592419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.429 [2024-11-17 02:57:47.592469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.429 qpair failed and we were unable to recover it.
00:37:39.429 [2024-11-17 02:57:47.592615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.429 [2024-11-17 02:57:47.592667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.429 qpair failed and we were unable to recover it.
00:37:39.429 [2024-11-17 02:57:47.592833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.429 [2024-11-17 02:57:47.592867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.429 qpair failed and we were unable to recover it.
00:37:39.429 [2024-11-17 02:57:47.592969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.429 [2024-11-17 02:57:47.593004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.429 qpair failed and we were unable to recover it.
00:37:39.429 [2024-11-17 02:57:47.593149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.429 [2024-11-17 02:57:47.593183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.429 qpair failed and we were unable to recover it.
00:37:39.429 [2024-11-17 02:57:47.593291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.429 [2024-11-17 02:57:47.593348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.429 qpair failed and we were unable to recover it.
00:37:39.429 [2024-11-17 02:57:47.593586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.429 [2024-11-17 02:57:47.593644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.429 qpair failed and we were unable to recover it.
00:37:39.429 [2024-11-17 02:57:47.593751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.429 [2024-11-17 02:57:47.593788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.429 qpair failed and we were unable to recover it.
00:37:39.429 [2024-11-17 02:57:47.593973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.429 [2024-11-17 02:57:47.594011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.429 qpair failed and we were unable to recover it.
00:37:39.429 [2024-11-17 02:57:47.594154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.429 [2024-11-17 02:57:47.594191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.429 qpair failed and we were unable to recover it.
00:37:39.429 [2024-11-17 02:57:47.594391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.429 [2024-11-17 02:57:47.594426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.429 qpair failed and we were unable to recover it.
00:37:39.429 [2024-11-17 02:57:47.594564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.429 [2024-11-17 02:57:47.594598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.429 qpair failed and we were unable to recover it.
00:37:39.429 [2024-11-17 02:57:47.594779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.429 [2024-11-17 02:57:47.594832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.429 qpair failed and we were unable to recover it.
00:37:39.429 [2024-11-17 02:57:47.594992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.429 [2024-11-17 02:57:47.595026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.429 qpair failed and we were unable to recover it.
00:37:39.429 [2024-11-17 02:57:47.595257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.429 [2024-11-17 02:57:47.595313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.429 qpair failed and we were unable to recover it.
00:37:39.429 [2024-11-17 02:57:47.595471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.429 [2024-11-17 02:57:47.595509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.429 qpair failed and we were unable to recover it.
00:37:39.429 [2024-11-17 02:57:47.595656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.429 [2024-11-17 02:57:47.595693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.429 qpair failed and we were unable to recover it.
00:37:39.429 [2024-11-17 02:57:47.595886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.430 [2024-11-17 02:57:47.595923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.430 qpair failed and we were unable to recover it.
00:37:39.430 [2024-11-17 02:57:47.596108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.430 [2024-11-17 02:57:47.596142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.430 qpair failed and we were unable to recover it.
00:37:39.430 [2024-11-17 02:57:47.596264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.430 [2024-11-17 02:57:47.596300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.430 qpair failed and we were unable to recover it.
00:37:39.430 [2024-11-17 02:57:47.596438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.430 [2024-11-17 02:57:47.596486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.430 qpair failed and we were unable to recover it.
00:37:39.430 [2024-11-17 02:57:47.596754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.430 [2024-11-17 02:57:47.596810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.430 qpair failed and we were unable to recover it.
00:37:39.430 [2024-11-17 02:57:47.596958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.430 [2024-11-17 02:57:47.596995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.430 qpair failed and we were unable to recover it.
00:37:39.430 [2024-11-17 02:57:47.597193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.430 [2024-11-17 02:57:47.597229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.430 qpair failed and we were unable to recover it.
00:37:39.430 [2024-11-17 02:57:47.597390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.430 [2024-11-17 02:57:47.597443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.430 qpair failed and we were unable to recover it.
00:37:39.430 [2024-11-17 02:57:47.597615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.430 [2024-11-17 02:57:47.597649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.430 qpair failed and we were unable to recover it.
00:37:39.430 [2024-11-17 02:57:47.597803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.430 [2024-11-17 02:57:47.597866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.430 qpair failed and we were unable to recover it.
00:37:39.430 [2024-11-17 02:57:47.597998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.430 [2024-11-17 02:57:47.598033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.430 qpair failed and we were unable to recover it.
00:37:39.430 [2024-11-17 02:57:47.598202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.430 [2024-11-17 02:57:47.598237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.430 qpair failed and we were unable to recover it.
00:37:39.430 [2024-11-17 02:57:47.598374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.430 [2024-11-17 02:57:47.598418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.430 qpair failed and we were unable to recover it.
00:37:39.430 [2024-11-17 02:57:47.598650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.430 [2024-11-17 02:57:47.598712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.430 qpair failed and we were unable to recover it.
00:37:39.430 [2024-11-17 02:57:47.598858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.430 [2024-11-17 02:57:47.598908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.430 qpair failed and we were unable to recover it.
00:37:39.430 [2024-11-17 02:57:47.599030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.430 [2024-11-17 02:57:47.599067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.430 qpair failed and we were unable to recover it.
00:37:39.430 [2024-11-17 02:57:47.599208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.430 [2024-11-17 02:57:47.599258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.430 qpair failed and we were unable to recover it.
00:37:39.430 [2024-11-17 02:57:47.599453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.430 [2024-11-17 02:57:47.599495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.430 qpair failed and we were unable to recover it.
00:37:39.430 [2024-11-17 02:57:47.599690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.430 [2024-11-17 02:57:47.599726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.430 qpair failed and we were unable to recover it.
00:37:39.430 [2024-11-17 02:57:47.599907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.430 [2024-11-17 02:57:47.599943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.430 qpair failed and we were unable to recover it.
00:37:39.430 [2024-11-17 02:57:47.600113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.430 [2024-11-17 02:57:47.600148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.430 qpair failed and we were unable to recover it.
00:37:39.430 [2024-11-17 02:57:47.600285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.430 [2024-11-17 02:57:47.600318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.430 qpair failed and we were unable to recover it.
00:37:39.430 [2024-11-17 02:57:47.600448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.430 [2024-11-17 02:57:47.600517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.430 qpair failed and we were unable to recover it.
00:37:39.430 [2024-11-17 02:57:47.600676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.430 [2024-11-17 02:57:47.600730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.430 qpair failed and we were unable to recover it.
00:37:39.430 [2024-11-17 02:57:47.600871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.430 [2024-11-17 02:57:47.600906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.430 qpair failed and we were unable to recover it.
00:37:39.430 [2024-11-17 02:57:47.601040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.430 [2024-11-17 02:57:47.601075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.430 qpair failed and we were unable to recover it.
00:37:39.430 [2024-11-17 02:57:47.601221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.430 [2024-11-17 02:57:47.601256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.430 qpair failed and we were unable to recover it.
00:37:39.430 [2024-11-17 02:57:47.601400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.430 [2024-11-17 02:57:47.601453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.430 qpair failed and we were unable to recover it.
00:37:39.430 [2024-11-17 02:57:47.601587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.430 [2024-11-17 02:57:47.601627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.430 qpair failed and we were unable to recover it.
00:37:39.430 [2024-11-17 02:57:47.601807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.430 [2024-11-17 02:57:47.601856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.430 qpair failed and we were unable to recover it.
00:37:39.430 [2024-11-17 02:57:47.602003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.430 [2024-11-17 02:57:47.602040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.430 qpair failed and we were unable to recover it.
00:37:39.430 [2024-11-17 02:57:47.602204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.430 [2024-11-17 02:57:47.602254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.430 qpair failed and we were unable to recover it.
00:37:39.430 [2024-11-17 02:57:47.602463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.430 [2024-11-17 02:57:47.602538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.430 qpair failed and we were unable to recover it.
00:37:39.430 [2024-11-17 02:57:47.602785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.430 [2024-11-17 02:57:47.602821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.430 qpair failed and we were unable to recover it.
00:37:39.430 [2024-11-17 02:57:47.602962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.430 [2024-11-17 02:57:47.602996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.430 qpair failed and we were unable to recover it.
00:37:39.430 [2024-11-17 02:57:47.603215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.430 [2024-11-17 02:57:47.603252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.430 qpair failed and we were unable to recover it.
00:37:39.430 [2024-11-17 02:57:47.603410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.430 [2024-11-17 02:57:47.603478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.430 qpair failed and we were unable to recover it.
00:37:39.430 [2024-11-17 02:57:47.603700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.430 [2024-11-17 02:57:47.603760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.430 qpair failed and we were unable to recover it.
00:37:39.431 [2024-11-17 02:57:47.603976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.431 [2024-11-17 02:57:47.604034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.431 qpair failed and we were unable to recover it.
00:37:39.431 [2024-11-17 02:57:47.604234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.431 [2024-11-17 02:57:47.604269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.431 qpair failed and we were unable to recover it.
00:37:39.431 [2024-11-17 02:57:47.604442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.431 [2024-11-17 02:57:47.604480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.431 qpair failed and we were unable to recover it.
00:37:39.431 [2024-11-17 02:57:47.604719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.431 [2024-11-17 02:57:47.604757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.431 qpair failed and we were unable to recover it.
00:37:39.431 [2024-11-17 02:57:47.604993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.431 [2024-11-17 02:57:47.605031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.431 qpair failed and we were unable to recover it.
00:37:39.431 [2024-11-17 02:57:47.605225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.431 [2024-11-17 02:57:47.605260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.431 qpair failed and we were unable to recover it.
00:37:39.431 [2024-11-17 02:57:47.605408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.431 [2024-11-17 02:57:47.605446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.431 qpair failed and we were unable to recover it.
00:37:39.431 [2024-11-17 02:57:47.605673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.431 [2024-11-17 02:57:47.605750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.431 qpair failed and we were unable to recover it.
00:37:39.431 [2024-11-17 02:57:47.605892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.431 [2024-11-17 02:57:47.605925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.431 qpair failed and we were unable to recover it.
00:37:39.431 [2024-11-17 02:57:47.606113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.431 [2024-11-17 02:57:47.606162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.431 qpair failed and we were unable to recover it.
00:37:39.431 [2024-11-17 02:57:47.606289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.431 [2024-11-17 02:57:47.606322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.431 qpair failed and we were unable to recover it.
00:37:39.431 [2024-11-17 02:57:47.606485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.431 [2024-11-17 02:57:47.606520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.431 qpair failed and we were unable to recover it.
00:37:39.431 [2024-11-17 02:57:47.606702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.431 [2024-11-17 02:57:47.606771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.431 qpair failed and we were unable to recover it.
00:37:39.431 [2024-11-17 02:57:47.606928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.431 [2024-11-17 02:57:47.606964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.431 qpair failed and we were unable to recover it.
00:37:39.431 [2024-11-17 02:57:47.607087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.431 [2024-11-17 02:57:47.607147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.431 qpair failed and we were unable to recover it.
00:37:39.431 [2024-11-17 02:57:47.607255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.431 [2024-11-17 02:57:47.607289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.431 qpair failed and we were unable to recover it.
00:37:39.431 [2024-11-17 02:57:47.607453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.431 [2024-11-17 02:57:47.607489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.431 qpair failed and we were unable to recover it.
00:37:39.431 [2024-11-17 02:57:47.607634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.431 [2024-11-17 02:57:47.607676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.431 qpair failed and we were unable to recover it.
00:37:39.431 [2024-11-17 02:57:47.607823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.431 [2024-11-17 02:57:47.607860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.431 qpair failed and we were unable to recover it.
00:37:39.431 [2024-11-17 02:57:47.608036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.431 [2024-11-17 02:57:47.608074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.431 qpair failed and we were unable to recover it.
00:37:39.431 [2024-11-17 02:57:47.608212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.431 [2024-11-17 02:57:47.608246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.431 qpair failed and we were unable to recover it.
00:37:39.431 [2024-11-17 02:57:47.608374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.431 [2024-11-17 02:57:47.608416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.431 qpair failed and we were unable to recover it.
00:37:39.431 [2024-11-17 02:57:47.608560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.431 [2024-11-17 02:57:47.608599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.431 qpair failed and we were unable to recover it.
00:37:39.431 [2024-11-17 02:57:47.608711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.431 [2024-11-17 02:57:47.608748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.431 qpair failed and we were unable to recover it.
00:37:39.431 [2024-11-17 02:57:47.608916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.431 [2024-11-17 02:57:47.608971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.431 qpair failed and we were unable to recover it.
00:37:39.431 [2024-11-17 02:57:47.609161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.431 [2024-11-17 02:57:47.609209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.431 qpair failed and we were unable to recover it.
00:37:39.431 [2024-11-17 02:57:47.609371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.431 [2024-11-17 02:57:47.609429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.431 qpair failed and we were unable to recover it.
00:37:39.431 [2024-11-17 02:57:47.609601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.431 [2024-11-17 02:57:47.609638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.431 qpair failed and we were unable to recover it.
00:37:39.431 [2024-11-17 02:57:47.609849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.431 [2024-11-17 02:57:47.609910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.431 qpair failed and we were unable to recover it.
00:37:39.431 [2024-11-17 02:57:47.610110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.431 [2024-11-17 02:57:47.610147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.431 qpair failed and we were unable to recover it.
00:37:39.431 [2024-11-17 02:57:47.610284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.431 [2024-11-17 02:57:47.610319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.431 qpair failed and we were unable to recover it.
00:37:39.431 [2024-11-17 02:57:47.610519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.431 [2024-11-17 02:57:47.610591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.431 qpair failed and we were unable to recover it.
00:37:39.431 [2024-11-17 02:57:47.610736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.431 [2024-11-17 02:57:47.610774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.431 qpair failed and we were unable to recover it.
00:37:39.431 [2024-11-17 02:57:47.610939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.431 [2024-11-17 02:57:47.610990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.431 qpair failed and we were unable to recover it.
00:37:39.431 [2024-11-17 02:57:47.611154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.431 [2024-11-17 02:57:47.611189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.431 qpair failed and we were unable to recover it.
00:37:39.431 [2024-11-17 02:57:47.611319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.431 [2024-11-17 02:57:47.611353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.431 qpair failed and we were unable to recover it.
00:37:39.431 [2024-11-17 02:57:47.611493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.431 [2024-11-17 02:57:47.611527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.431 qpair failed and we were unable to recover it.
00:37:39.432 [2024-11-17 02:57:47.611720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.432 [2024-11-17 02:57:47.611756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.432 qpair failed and we were unable to recover it.
00:37:39.432 [2024-11-17 02:57:47.611954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.432 [2024-11-17 02:57:47.611990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.432 qpair failed and we were unable to recover it.
00:37:39.432 [2024-11-17 02:57:47.612119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.432 [2024-11-17 02:57:47.612173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.432 qpair failed and we were unable to recover it.
00:37:39.432 [2024-11-17 02:57:47.612307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.432 [2024-11-17 02:57:47.612341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.432 qpair failed and we were unable to recover it.
00:37:39.432 [2024-11-17 02:57:47.612502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.432 [2024-11-17 02:57:47.612535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.432 qpair failed and we were unable to recover it.
00:37:39.432 [2024-11-17 02:57:47.612720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.432 [2024-11-17 02:57:47.612756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.432 qpair failed and we were unable to recover it.
00:37:39.432 [2024-11-17 02:57:47.612890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.432 [2024-11-17 02:57:47.612927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.432 qpair failed and we were unable to recover it.
00:37:39.432 [2024-11-17 02:57:47.613054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.432 [2024-11-17 02:57:47.613109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.432 qpair failed and we were unable to recover it.
00:37:39.432 [2024-11-17 02:57:47.613276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.432 [2024-11-17 02:57:47.613309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.432 qpair failed and we were unable to recover it.
00:37:39.432 [2024-11-17 02:57:47.613426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.432 [2024-11-17 02:57:47.613460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.432 qpair failed and we were unable to recover it.
00:37:39.432 [2024-11-17 02:57:47.613648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.432 [2024-11-17 02:57:47.613685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.432 qpair failed and we were unable to recover it.
00:37:39.432 [2024-11-17 02:57:47.613857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.432 [2024-11-17 02:57:47.613893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.432 qpair failed and we were unable to recover it.
00:37:39.432 [2024-11-17 02:57:47.614010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.432 [2024-11-17 02:57:47.614047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.432 qpair failed and we were unable to recover it.
00:37:39.432 [2024-11-17 02:57:47.614180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.432 [2024-11-17 02:57:47.614214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.432 qpair failed and we were unable to recover it. 00:37:39.432 [2024-11-17 02:57:47.614369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.432 [2024-11-17 02:57:47.614409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.432 qpair failed and we were unable to recover it. 00:37:39.432 [2024-11-17 02:57:47.614564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.432 [2024-11-17 02:57:47.614598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.432 qpair failed and we were unable to recover it. 00:37:39.432 [2024-11-17 02:57:47.614783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.432 [2024-11-17 02:57:47.614821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.432 qpair failed and we were unable to recover it. 00:37:39.432 [2024-11-17 02:57:47.614981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.432 [2024-11-17 02:57:47.615015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.432 qpair failed and we were unable to recover it. 
00:37:39.432 [2024-11-17 02:57:47.615184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.432 [2024-11-17 02:57:47.615234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.432 qpair failed and we were unable to recover it. 00:37:39.432 [2024-11-17 02:57:47.615481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.432 [2024-11-17 02:57:47.615530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.432 qpair failed and we were unable to recover it. 00:37:39.432 [2024-11-17 02:57:47.615689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.432 [2024-11-17 02:57:47.615751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.432 qpair failed and we were unable to recover it. 00:37:39.432 [2024-11-17 02:57:47.615944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.432 [2024-11-17 02:57:47.615997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.432 qpair failed and we were unable to recover it. 00:37:39.432 [2024-11-17 02:57:47.616134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.432 [2024-11-17 02:57:47.616170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.432 qpair failed and we were unable to recover it. 
00:37:39.432 [2024-11-17 02:57:47.616272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.432 [2024-11-17 02:57:47.616308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.432 qpair failed and we were unable to recover it. 00:37:39.432 [2024-11-17 02:57:47.616468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.432 [2024-11-17 02:57:47.616507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.432 qpair failed and we were unable to recover it. 00:37:39.432 [2024-11-17 02:57:47.616722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.432 [2024-11-17 02:57:47.616761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.432 qpair failed and we were unable to recover it. 00:37:39.432 [2024-11-17 02:57:47.616927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.432 [2024-11-17 02:57:47.616981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.432 qpair failed and we were unable to recover it. 00:37:39.432 [2024-11-17 02:57:47.617164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.432 [2024-11-17 02:57:47.617201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.432 qpair failed and we were unable to recover it. 
00:37:39.432 [2024-11-17 02:57:47.617344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.432 [2024-11-17 02:57:47.617385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.432 qpair failed and we were unable to recover it. 00:37:39.432 [2024-11-17 02:57:47.617544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.432 [2024-11-17 02:57:47.617580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.432 qpair failed and we were unable to recover it. 00:37:39.432 [2024-11-17 02:57:47.617793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.432 [2024-11-17 02:57:47.617827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.433 qpair failed and we were unable to recover it. 00:37:39.433 [2024-11-17 02:57:47.617970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.433 [2024-11-17 02:57:47.618020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.433 qpair failed and we were unable to recover it. 00:37:39.433 [2024-11-17 02:57:47.618221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.433 [2024-11-17 02:57:47.618270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.433 qpair failed and we were unable to recover it. 
00:37:39.433 [2024-11-17 02:57:47.618417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.433 [2024-11-17 02:57:47.618456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.433 qpair failed and we were unable to recover it. 00:37:39.433 [2024-11-17 02:57:47.618764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.433 [2024-11-17 02:57:47.618832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.433 qpair failed and we were unable to recover it. 00:37:39.433 [2024-11-17 02:57:47.618986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.433 [2024-11-17 02:57:47.619025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.433 qpair failed and we were unable to recover it. 00:37:39.433 [2024-11-17 02:57:47.619198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.433 [2024-11-17 02:57:47.619234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.433 qpair failed and we were unable to recover it. 00:37:39.433 [2024-11-17 02:57:47.619375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.433 [2024-11-17 02:57:47.619420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.433 qpair failed and we were unable to recover it. 
00:37:39.433 [2024-11-17 02:57:47.619542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.433 [2024-11-17 02:57:47.619580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.433 qpair failed and we were unable to recover it. 00:37:39.433 [2024-11-17 02:57:47.619847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.433 [2024-11-17 02:57:47.619916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.433 qpair failed and we were unable to recover it. 00:37:39.433 [2024-11-17 02:57:47.620074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.433 [2024-11-17 02:57:47.620125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.433 qpair failed and we were unable to recover it. 00:37:39.433 [2024-11-17 02:57:47.620247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.433 [2024-11-17 02:57:47.620282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.433 qpair failed and we were unable to recover it. 00:37:39.433 [2024-11-17 02:57:47.620462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.433 [2024-11-17 02:57:47.620511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.433 qpair failed and we were unable to recover it. 
00:37:39.433 [2024-11-17 02:57:47.620704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.433 [2024-11-17 02:57:47.620769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.433 qpair failed and we were unable to recover it. 00:37:39.433 [2024-11-17 02:57:47.621023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.433 [2024-11-17 02:57:47.621083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.433 qpair failed and we were unable to recover it. 00:37:39.433 [2024-11-17 02:57:47.621261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.433 [2024-11-17 02:57:47.621296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.433 qpair failed and we were unable to recover it. 00:37:39.433 [2024-11-17 02:57:47.621441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.433 [2024-11-17 02:57:47.621476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.433 qpair failed and we were unable to recover it. 00:37:39.433 [2024-11-17 02:57:47.621647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.433 [2024-11-17 02:57:47.621682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.433 qpair failed and we were unable to recover it. 
00:37:39.433 [2024-11-17 02:57:47.621808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.433 [2024-11-17 02:57:47.621843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.433 qpair failed and we were unable to recover it. 00:37:39.433 [2024-11-17 02:57:47.622012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.433 [2024-11-17 02:57:47.622046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.433 qpair failed and we were unable to recover it. 00:37:39.433 [2024-11-17 02:57:47.622203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.433 [2024-11-17 02:57:47.622239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.433 qpair failed and we were unable to recover it. 00:37:39.433 [2024-11-17 02:57:47.622346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.433 [2024-11-17 02:57:47.622385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.433 qpair failed and we were unable to recover it. 00:37:39.433 [2024-11-17 02:57:47.622620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.433 [2024-11-17 02:57:47.622660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.433 qpair failed and we were unable to recover it. 
00:37:39.433 [2024-11-17 02:57:47.622869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.433 [2024-11-17 02:57:47.622927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.433 qpair failed and we were unable to recover it. 00:37:39.433 [2024-11-17 02:57:47.623045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.433 [2024-11-17 02:57:47.623087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.433 qpair failed and we were unable to recover it. 00:37:39.433 [2024-11-17 02:57:47.623260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.433 [2024-11-17 02:57:47.623310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.433 qpair failed and we were unable to recover it. 00:37:39.433 [2024-11-17 02:57:47.623433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.433 [2024-11-17 02:57:47.623483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.433 qpair failed and we were unable to recover it. 00:37:39.433 [2024-11-17 02:57:47.623719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.433 [2024-11-17 02:57:47.623760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.433 qpair failed and we were unable to recover it. 
00:37:39.433 [2024-11-17 02:57:47.624014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.433 [2024-11-17 02:57:47.624073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.433 qpair failed and we were unable to recover it. 00:37:39.433 [2024-11-17 02:57:47.624226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.433 [2024-11-17 02:57:47.624261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.433 qpair failed and we were unable to recover it. 00:37:39.433 [2024-11-17 02:57:47.624394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.433 [2024-11-17 02:57:47.624433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.433 qpair failed and we were unable to recover it. 00:37:39.433 [2024-11-17 02:57:47.624565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.433 [2024-11-17 02:57:47.624617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.433 qpair failed and we were unable to recover it. 00:37:39.433 [2024-11-17 02:57:47.624859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.433 [2024-11-17 02:57:47.624897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.433 qpair failed and we were unable to recover it. 
00:37:39.433 [2024-11-17 02:57:47.625047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.433 [2024-11-17 02:57:47.625086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.433 qpair failed and we were unable to recover it. 00:37:39.433 [2024-11-17 02:57:47.625276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.433 [2024-11-17 02:57:47.625311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.433 qpair failed and we were unable to recover it. 00:37:39.433 [2024-11-17 02:57:47.625456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.433 [2024-11-17 02:57:47.625494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.433 qpair failed and we were unable to recover it. 00:37:39.433 [2024-11-17 02:57:47.625657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.433 [2024-11-17 02:57:47.625692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.433 qpair failed and we were unable to recover it. 00:37:39.433 [2024-11-17 02:57:47.625872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.433 [2024-11-17 02:57:47.625926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.433 qpair failed and we were unable to recover it. 
00:37:39.433 [2024-11-17 02:57:47.626076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.434 [2024-11-17 02:57:47.626137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.434 qpair failed and we were unable to recover it. 00:37:39.434 [2024-11-17 02:57:47.626285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.434 [2024-11-17 02:57:47.626323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.434 qpair failed and we were unable to recover it. 00:37:39.434 [2024-11-17 02:57:47.626479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.434 [2024-11-17 02:57:47.626516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.434 qpair failed and we were unable to recover it. 00:37:39.434 [2024-11-17 02:57:47.626725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.434 [2024-11-17 02:57:47.626779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.434 qpair failed and we were unable to recover it. 00:37:39.434 [2024-11-17 02:57:47.626945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.434 [2024-11-17 02:57:47.626980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.434 qpair failed and we were unable to recover it. 
00:37:39.434 [2024-11-17 02:57:47.627107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.434 [2024-11-17 02:57:47.627162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.434 qpair failed and we were unable to recover it. 00:37:39.434 [2024-11-17 02:57:47.627345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.434 [2024-11-17 02:57:47.627400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.434 qpair failed and we were unable to recover it. 00:37:39.434 [2024-11-17 02:57:47.627638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.434 [2024-11-17 02:57:47.627701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.434 qpair failed and we were unable to recover it. 00:37:39.434 [2024-11-17 02:57:47.627901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.434 [2024-11-17 02:57:47.627939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.434 qpair failed and we were unable to recover it. 00:37:39.434 [2024-11-17 02:57:47.628057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.434 [2024-11-17 02:57:47.628122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.434 qpair failed and we were unable to recover it. 
00:37:39.434 [2024-11-17 02:57:47.628273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.434 [2024-11-17 02:57:47.628326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.434 qpair failed and we were unable to recover it. 00:37:39.434 [2024-11-17 02:57:47.628476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.434 [2024-11-17 02:57:47.628515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.434 qpair failed and we were unable to recover it. 00:37:39.434 [2024-11-17 02:57:47.628778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.434 [2024-11-17 02:57:47.628835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.434 qpair failed and we were unable to recover it. 00:37:39.434 [2024-11-17 02:57:47.628983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.434 [2024-11-17 02:57:47.629021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.434 qpair failed and we were unable to recover it. 00:37:39.434 [2024-11-17 02:57:47.629212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.434 [2024-11-17 02:57:47.629270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.434 qpair failed and we were unable to recover it. 
00:37:39.434 [2024-11-17 02:57:47.629399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.434 [2024-11-17 02:57:47.629434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.434 qpair failed and we were unable to recover it. 00:37:39.434 [2024-11-17 02:57:47.629571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.434 [2024-11-17 02:57:47.629606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.434 qpair failed and we were unable to recover it. 00:37:39.434 [2024-11-17 02:57:47.629859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.434 [2024-11-17 02:57:47.629916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.434 qpair failed and we were unable to recover it. 00:37:39.434 [2024-11-17 02:57:47.630129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.434 [2024-11-17 02:57:47.630165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.434 qpair failed and we were unable to recover it. 00:37:39.434 [2024-11-17 02:57:47.630381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.434 [2024-11-17 02:57:47.630415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.434 qpair failed and we were unable to recover it. 
00:37:39.434 [2024-11-17 02:57:47.630521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.434 [2024-11-17 02:57:47.630555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.434 qpair failed and we were unable to recover it. 00:37:39.434 [2024-11-17 02:57:47.630695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.434 [2024-11-17 02:57:47.630728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.434 qpair failed and we were unable to recover it. 00:37:39.434 [2024-11-17 02:57:47.630863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.434 [2024-11-17 02:57:47.630897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.434 qpair failed and we were unable to recover it. 00:37:39.434 [2024-11-17 02:57:47.631093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.434 [2024-11-17 02:57:47.631166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.434 qpair failed and we were unable to recover it. 00:37:39.434 [2024-11-17 02:57:47.631306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.434 [2024-11-17 02:57:47.631358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.434 qpair failed and we were unable to recover it. 
00:37:39.434 [2024-11-17 02:57:47.631515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.434 [2024-11-17 02:57:47.631565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.434 qpair failed and we were unable to recover it. 00:37:39.434 [2024-11-17 02:57:47.631721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.434 [2024-11-17 02:57:47.631758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.434 qpair failed and we were unable to recover it. 00:37:39.434 [2024-11-17 02:57:47.632032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.434 [2024-11-17 02:57:47.632115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.434 qpair failed and we were unable to recover it. 00:37:39.434 [2024-11-17 02:57:47.632320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.434 [2024-11-17 02:57:47.632384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.434 qpair failed and we were unable to recover it. 00:37:39.434 [2024-11-17 02:57:47.632650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.434 [2024-11-17 02:57:47.632704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.434 qpair failed and we were unable to recover it. 
00:37:39.434 [2024-11-17 02:57:47.632965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.434 [2024-11-17 02:57:47.633006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.434 qpair failed and we were unable to recover it.
00:37:39.434 [2024-11-17 02:57:47.633166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.434 [2024-11-17 02:57:47.633203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.434 qpair failed and we were unable to recover it.
00:37:39.434 [2024-11-17 02:57:47.633333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.434 [2024-11-17 02:57:47.633373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.434 qpair failed and we were unable to recover it.
00:37:39.434 [2024-11-17 02:57:47.633506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.434 [2024-11-17 02:57:47.633559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.434 qpair failed and we were unable to recover it.
00:37:39.434 [2024-11-17 02:57:47.633782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.434 [2024-11-17 02:57:47.633840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.434 qpair failed and we were unable to recover it.
00:37:39.434 [2024-11-17 02:57:47.633984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.434 [2024-11-17 02:57:47.634038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.434 qpair failed and we were unable to recover it.
00:37:39.434 [2024-11-17 02:57:47.634185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.434 [2024-11-17 02:57:47.634221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.434 qpair failed and we were unable to recover it.
00:37:39.434 [2024-11-17 02:57:47.634356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.434 [2024-11-17 02:57:47.634407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.434 qpair failed and we were unable to recover it.
00:37:39.435 [2024-11-17 02:57:47.634581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.435 [2024-11-17 02:57:47.634618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.435 qpair failed and we were unable to recover it.
00:37:39.435 [2024-11-17 02:57:47.634799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.435 [2024-11-17 02:57:47.634836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.435 qpair failed and we were unable to recover it.
00:37:39.435 [2024-11-17 02:57:47.635011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.435 [2024-11-17 02:57:47.635057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.435 qpair failed and we were unable to recover it.
00:37:39.435 [2024-11-17 02:57:47.635233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.435 [2024-11-17 02:57:47.635268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.435 qpair failed and we were unable to recover it.
00:37:39.435 [2024-11-17 02:57:47.635402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.435 [2024-11-17 02:57:47.635440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.435 qpair failed and we were unable to recover it.
00:37:39.435 [2024-11-17 02:57:47.635577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.435 [2024-11-17 02:57:47.635614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.435 qpair failed and we were unable to recover it.
00:37:39.435 [2024-11-17 02:57:47.635816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.435 [2024-11-17 02:57:47.635855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.435 qpair failed and we were unable to recover it.
00:37:39.435 [2024-11-17 02:57:47.636036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.435 [2024-11-17 02:57:47.636089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.435 qpair failed and we were unable to recover it.
00:37:39.435 [2024-11-17 02:57:47.636221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.435 [2024-11-17 02:57:47.636255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.435 qpair failed and we were unable to recover it.
00:37:39.435 [2024-11-17 02:57:47.636382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.435 [2024-11-17 02:57:47.636444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.435 qpair failed and we were unable to recover it.
00:37:39.435 [2024-11-17 02:57:47.636619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.435 [2024-11-17 02:57:47.636656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.435 qpair failed and we were unable to recover it.
00:37:39.435 [2024-11-17 02:57:47.636787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.435 [2024-11-17 02:57:47.636826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.435 qpair failed and we were unable to recover it.
00:37:39.435 [2024-11-17 02:57:47.637002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.435 [2024-11-17 02:57:47.637058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.435 qpair failed and we were unable to recover it.
00:37:39.435 [2024-11-17 02:57:47.637191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.435 [2024-11-17 02:57:47.637229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.435 qpair failed and we were unable to recover it.
00:37:39.435 [2024-11-17 02:57:47.637400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.435 [2024-11-17 02:57:47.637458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.435 qpair failed and we were unable to recover it.
00:37:39.435 [2024-11-17 02:57:47.637646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.435 [2024-11-17 02:57:47.637699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.435 qpair failed and we were unable to recover it.
00:37:39.435 [2024-11-17 02:57:47.637894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.435 [2024-11-17 02:57:47.637932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.435 qpair failed and we were unable to recover it.
00:37:39.435 [2024-11-17 02:57:47.638062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.435 [2024-11-17 02:57:47.638112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.435 qpair failed and we were unable to recover it.
00:37:39.435 [2024-11-17 02:57:47.638280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.435 [2024-11-17 02:57:47.638320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.435 qpair failed and we were unable to recover it.
00:37:39.435 [2024-11-17 02:57:47.638493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.435 [2024-11-17 02:57:47.638532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.435 qpair failed and we were unable to recover it.
00:37:39.435 [2024-11-17 02:57:47.638670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.435 [2024-11-17 02:57:47.638703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.435 qpair failed and we were unable to recover it.
00:37:39.435 [2024-11-17 02:57:47.638825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.435 [2024-11-17 02:57:47.638865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.435 qpair failed and we were unable to recover it.
00:37:39.435 [2024-11-17 02:57:47.638980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.435 [2024-11-17 02:57:47.639014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.435 qpair failed and we were unable to recover it.
00:37:39.435 [2024-11-17 02:57:47.639154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.435 [2024-11-17 02:57:47.639188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.435 qpair failed and we were unable to recover it.
00:37:39.435 [2024-11-17 02:57:47.639332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.435 [2024-11-17 02:57:47.639370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.435 qpair failed and we were unable to recover it.
00:37:39.435 [2024-11-17 02:57:47.639516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.435 [2024-11-17 02:57:47.639554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.435 qpair failed and we were unable to recover it.
00:37:39.435 [2024-11-17 02:57:47.639667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.435 [2024-11-17 02:57:47.639704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.435 qpair failed and we were unable to recover it.
00:37:39.435 [2024-11-17 02:57:47.639846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.435 [2024-11-17 02:57:47.639907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.435 qpair failed and we were unable to recover it.
00:37:39.435 [2024-11-17 02:57:47.640111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.435 [2024-11-17 02:57:47.640148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.435 qpair failed and we were unable to recover it.
00:37:39.435 [2024-11-17 02:57:47.640335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.435 [2024-11-17 02:57:47.640387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.435 qpair failed and we were unable to recover it.
00:37:39.435 [2024-11-17 02:57:47.640544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.435 [2024-11-17 02:57:47.640598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.435 qpair failed and we were unable to recover it.
00:37:39.435 [2024-11-17 02:57:47.640747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.435 [2024-11-17 02:57:47.640802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.435 qpair failed and we were unable to recover it.
00:37:39.435 [2024-11-17 02:57:47.641012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.435 [2024-11-17 02:57:47.641062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.435 qpair failed and we were unable to recover it.
00:37:39.435 [2024-11-17 02:57:47.641231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.435 [2024-11-17 02:57:47.641268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.435 qpair failed and we were unable to recover it.
00:37:39.435 [2024-11-17 02:57:47.641444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.435 [2024-11-17 02:57:47.641484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.435 qpair failed and we were unable to recover it.
00:37:39.435 [2024-11-17 02:57:47.641653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.435 [2024-11-17 02:57:47.641707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.435 qpair failed and we were unable to recover it.
00:37:39.435 [2024-11-17 02:57:47.641826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.435 [2024-11-17 02:57:47.641871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.435 qpair failed and we were unable to recover it.
00:37:39.435 [2024-11-17 02:57:47.642069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.435 [2024-11-17 02:57:47.642119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.435 qpair failed and we were unable to recover it.
00:37:39.436 [2024-11-17 02:57:47.642340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.436 [2024-11-17 02:57:47.642375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.436 qpair failed and we were unable to recover it.
00:37:39.436 [2024-11-17 02:57:47.642506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.436 [2024-11-17 02:57:47.642540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.436 qpair failed and we were unable to recover it.
00:37:39.436 [2024-11-17 02:57:47.642674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.436 [2024-11-17 02:57:47.642709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.436 qpair failed and we were unable to recover it.
00:37:39.436 [2024-11-17 02:57:47.642814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.436 [2024-11-17 02:57:47.642849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.436 qpair failed and we were unable to recover it.
00:37:39.436 [2024-11-17 02:57:47.642969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.436 [2024-11-17 02:57:47.643007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.436 qpair failed and we were unable to recover it.
00:37:39.436 [2024-11-17 02:57:47.643171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.436 [2024-11-17 02:57:47.643213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.436 qpair failed and we were unable to recover it.
00:37:39.436 [2024-11-17 02:57:47.643351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.436 [2024-11-17 02:57:47.643385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.436 qpair failed and we were unable to recover it.
00:37:39.436 [2024-11-17 02:57:47.643519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.436 [2024-11-17 02:57:47.643552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.436 qpair failed and we were unable to recover it.
00:37:39.436 [2024-11-17 02:57:47.643691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.436 [2024-11-17 02:57:47.643725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.436 qpair failed and we were unable to recover it.
00:37:39.436 [2024-11-17 02:57:47.643857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.436 [2024-11-17 02:57:47.643892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.436 qpair failed and we were unable to recover it.
00:37:39.436 [2024-11-17 02:57:47.644005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.436 [2024-11-17 02:57:47.644046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.436 qpair failed and we were unable to recover it.
00:37:39.436 [2024-11-17 02:57:47.644206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.436 [2024-11-17 02:57:47.644241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.436 qpair failed and we were unable to recover it.
00:37:39.436 [2024-11-17 02:57:47.644341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.436 [2024-11-17 02:57:47.644375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.436 qpair failed and we were unable to recover it.
00:37:39.436 [2024-11-17 02:57:47.644503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.436 [2024-11-17 02:57:47.644538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.436 qpair failed and we were unable to recover it.
00:37:39.436 [2024-11-17 02:57:47.644653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.436 [2024-11-17 02:57:47.644688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.436 qpair failed and we were unable to recover it.
00:37:39.436 [2024-11-17 02:57:47.644840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.436 [2024-11-17 02:57:47.644891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.436 qpair failed and we were unable to recover it.
00:37:39.436 [2024-11-17 02:57:47.645028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.436 [2024-11-17 02:57:47.645063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.436 qpair failed and we were unable to recover it.
00:37:39.436 [2024-11-17 02:57:47.645238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.436 [2024-11-17 02:57:47.645272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.436 qpair failed and we were unable to recover it.
00:37:39.436 [2024-11-17 02:57:47.645430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.436 [2024-11-17 02:57:47.645468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.436 qpair failed and we were unable to recover it.
00:37:39.436 [2024-11-17 02:57:47.645594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.436 [2024-11-17 02:57:47.645631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.436 qpair failed and we were unable to recover it.
00:37:39.436 [2024-11-17 02:57:47.645767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.436 [2024-11-17 02:57:47.645813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.436 qpair failed and we were unable to recover it.
00:37:39.436 [2024-11-17 02:57:47.645955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.436 [2024-11-17 02:57:47.645989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.436 qpair failed and we were unable to recover it.
00:37:39.436 [2024-11-17 02:57:47.646124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.436 [2024-11-17 02:57:47.646159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.436 qpair failed and we were unable to recover it.
00:37:39.436 [2024-11-17 02:57:47.646304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.436 [2024-11-17 02:57:47.646339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.436 qpair failed and we were unable to recover it.
00:37:39.436 [2024-11-17 02:57:47.646476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.436 [2024-11-17 02:57:47.646530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.436 qpair failed and we were unable to recover it.
00:37:39.436 [2024-11-17 02:57:47.646705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.436 [2024-11-17 02:57:47.646746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.436 qpair failed and we were unable to recover it.
00:37:39.436 [2024-11-17 02:57:47.646870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.436 [2024-11-17 02:57:47.646909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.436 qpair failed and we were unable to recover it.
00:37:39.436 [2024-11-17 02:57:47.647022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.436 [2024-11-17 02:57:47.647072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.436 qpair failed and we were unable to recover it.
00:37:39.436 [2024-11-17 02:57:47.647183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.436 [2024-11-17 02:57:47.647217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.436 qpair failed and we were unable to recover it.
00:37:39.436 [2024-11-17 02:57:47.647391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.436 [2024-11-17 02:57:47.647457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.436 qpair failed and we were unable to recover it.
00:37:39.436 [2024-11-17 02:57:47.647618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.436 [2024-11-17 02:57:47.647673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.436 qpair failed and we were unable to recover it.
00:37:39.436 [2024-11-17 02:57:47.647810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.436 [2024-11-17 02:57:47.647871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.436 qpair failed and we were unable to recover it.
00:37:39.436 [2024-11-17 02:57:47.648006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.436 [2024-11-17 02:57:47.648042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.436 qpair failed and we were unable to recover it.
00:37:39.436 [2024-11-17 02:57:47.648169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.436 [2024-11-17 02:57:47.648212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.436 qpair failed and we were unable to recover it.
00:37:39.436 [2024-11-17 02:57:47.648351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.436 [2024-11-17 02:57:47.648387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.436 qpair failed and we were unable to recover it.
00:37:39.436 [2024-11-17 02:57:47.648535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.436 [2024-11-17 02:57:47.648570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.436 qpair failed and we were unable to recover it.
00:37:39.436 [2024-11-17 02:57:47.648712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.436 [2024-11-17 02:57:47.648753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.437 qpair failed and we were unable to recover it.
00:37:39.437 [2024-11-17 02:57:47.648904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.437 [2024-11-17 02:57:47.648938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.437 qpair failed and we were unable to recover it.
00:37:39.437 [2024-11-17 02:57:47.649103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.437 [2024-11-17 02:57:47.649146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.437 qpair failed and we were unable to recover it.
00:37:39.437 [2024-11-17 02:57:47.649301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.437 [2024-11-17 02:57:47.649338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.437 qpair failed and we were unable to recover it.
00:37:39.437 [2024-11-17 02:57:47.649479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.437 [2024-11-17 02:57:47.649516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.437 qpair failed and we were unable to recover it.
00:37:39.437 [2024-11-17 02:57:47.649651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.437 [2024-11-17 02:57:47.649688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.437 qpair failed and we were unable to recover it.
00:37:39.437 [2024-11-17 02:57:47.649858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.437 [2024-11-17 02:57:47.649913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.437 qpair failed and we were unable to recover it.
00:37:39.437 [2024-11-17 02:57:47.650056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.437 [2024-11-17 02:57:47.650091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.437 qpair failed and we were unable to recover it.
00:37:39.437 [2024-11-17 02:57:47.650226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.437 [2024-11-17 02:57:47.650266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.437 qpair failed and we were unable to recover it.
00:37:39.437 [2024-11-17 02:57:47.650477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.437 [2024-11-17 02:57:47.650530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.437 qpair failed and we were unable to recover it.
00:37:39.437 [2024-11-17 02:57:47.650687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.437 [2024-11-17 02:57:47.650746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.437 qpair failed and we were unable to recover it.
00:37:39.437 [2024-11-17 02:57:47.650929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.437 [2024-11-17 02:57:47.650965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.437 qpair failed and we were unable to recover it.
00:37:39.437 [2024-11-17 02:57:47.651115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.437 [2024-11-17 02:57:47.651154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.437 qpair failed and we were unable to recover it.
00:37:39.437 [2024-11-17 02:57:47.651283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.437 [2024-11-17 02:57:47.651316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.437 qpair failed and we were unable to recover it.
00:37:39.437 [2024-11-17 02:57:47.651474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.437 [2024-11-17 02:57:47.651513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.437 qpair failed and we were unable to recover it.
00:37:39.437 [2024-11-17 02:57:47.651656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.437 [2024-11-17 02:57:47.651694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.437 qpair failed and we were unable to recover it.
00:37:39.437 [2024-11-17 02:57:47.651837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.437 [2024-11-17 02:57:47.651888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.437 qpair failed and we were unable to recover it.
00:37:39.437 [2024-11-17 02:57:47.652006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.437 [2024-11-17 02:57:47.652042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.437 qpair failed and we were unable to recover it.
00:37:39.437 [2024-11-17 02:57:47.652202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.437 [2024-11-17 02:57:47.652240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.437 qpair failed and we were unable to recover it.
00:37:39.437 [2024-11-17 02:57:47.652356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.437 [2024-11-17 02:57:47.652403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.437 qpair failed and we were unable to recover it.
00:37:39.437 [2024-11-17 02:57:47.652621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.437 [2024-11-17 02:57:47.652666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.437 qpair failed and we were unable to recover it.
00:37:39.437 [2024-11-17 02:57:47.652779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.437 [2024-11-17 02:57:47.652814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.437 qpair failed and we were unable to recover it.
00:37:39.437 [2024-11-17 02:57:47.652926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.437 [2024-11-17 02:57:47.652966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.437 qpair failed and we were unable to recover it.
00:37:39.437 [2024-11-17 02:57:47.653116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.437 [2024-11-17 02:57:47.653153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.437 qpair failed and we were unable to recover it.
00:37:39.437 [2024-11-17 02:57:47.653261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.437 [2024-11-17 02:57:47.653296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.437 qpair failed and we were unable to recover it.
00:37:39.437 [2024-11-17 02:57:47.653460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.437 [2024-11-17 02:57:47.653494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.437 qpair failed and we were unable to recover it.
00:37:39.437 [2024-11-17 02:57:47.653628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.437 [2024-11-17 02:57:47.653663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.437 qpair failed and we were unable to recover it.
00:37:39.437 [2024-11-17 02:57:47.653811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.437 [2024-11-17 02:57:47.653846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.437 qpair failed and we were unable to recover it.
00:37:39.437 [2024-11-17 02:57:47.653989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.437 [2024-11-17 02:57:47.654023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.437 qpair failed and we were unable to recover it.
00:37:39.437 [2024-11-17 02:57:47.654188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.437 [2024-11-17 02:57:47.654235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.437 qpair failed and we were unable to recover it.
00:37:39.437 [2024-11-17 02:57:47.654348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.437 [2024-11-17 02:57:47.654383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.437 qpair failed and we were unable to recover it.
00:37:39.437 [2024-11-17 02:57:47.654487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.437 [2024-11-17 02:57:47.654529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.437 qpair failed and we were unable to recover it.
00:37:39.437 [2024-11-17 02:57:47.654706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.437 [2024-11-17 02:57:47.654741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.437 qpair failed and we were unable to recover it.
00:37:39.437 [2024-11-17 02:57:47.654843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.437 [2024-11-17 02:57:47.654877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.437 qpair failed and we were unable to recover it.
00:37:39.437 [2024-11-17 02:57:47.655044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.437 [2024-11-17 02:57:47.655092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.437 qpair failed and we were unable to recover it.
00:37:39.438 [2024-11-17 02:57:47.655224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.438 [2024-11-17 02:57:47.655263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.438 qpair failed and we were unable to recover it.
00:37:39.438 [2024-11-17 02:57:47.655427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.438 [2024-11-17 02:57:47.655463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.438 qpair failed and we were unable to recover it.
00:37:39.438 [2024-11-17 02:57:47.655574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.438 [2024-11-17 02:57:47.655611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.438 qpair failed and we were unable to recover it.
00:37:39.438 [2024-11-17 02:57:47.655750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.438 [2024-11-17 02:57:47.655786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.438 qpair failed and we were unable to recover it.
00:37:39.438 [2024-11-17 02:57:47.655949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.438 [2024-11-17 02:57:47.655991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.438 qpair failed and we were unable to recover it.
00:37:39.438 [2024-11-17 02:57:47.656145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.438 [2024-11-17 02:57:47.656187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.438 qpair failed and we were unable to recover it.
00:37:39.438 [2024-11-17 02:57:47.656307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.438 [2024-11-17 02:57:47.656342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.438 qpair failed and we were unable to recover it.
00:37:39.438 [2024-11-17 02:57:47.656513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.438 [2024-11-17 02:57:47.656550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.438 qpair failed and we were unable to recover it.
00:37:39.438 [2024-11-17 02:57:47.656744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.438 [2024-11-17 02:57:47.656799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.438 qpair failed and we were unable to recover it.
00:37:39.438 [2024-11-17 02:57:47.656940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.438 [2024-11-17 02:57:47.656975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.438 qpair failed and we were unable to recover it.
00:37:39.438 [2024-11-17 02:57:47.657084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.438 [2024-11-17 02:57:47.657132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.438 qpair failed and we were unable to recover it.
00:37:39.438 [2024-11-17 02:57:47.657271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.438 [2024-11-17 02:57:47.657305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.438 qpair failed and we were unable to recover it.
00:37:39.438 [2024-11-17 02:57:47.657453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.438 [2024-11-17 02:57:47.657503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.438 qpair failed and we were unable to recover it.
00:37:39.438 [2024-11-17 02:57:47.657659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.438 [2024-11-17 02:57:47.657709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.438 qpair failed and we were unable to recover it.
00:37:39.438 [2024-11-17 02:57:47.657892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.438 [2024-11-17 02:57:47.657928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.438 qpair failed and we were unable to recover it.
00:37:39.438 [2024-11-17 02:57:47.658065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.438 [2024-11-17 02:57:47.658108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.438 qpair failed and we were unable to recover it.
00:37:39.438 [2024-11-17 02:57:47.658229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.438 [2024-11-17 02:57:47.658263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.438 qpair failed and we were unable to recover it.
00:37:39.438 [2024-11-17 02:57:47.658373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.438 [2024-11-17 02:57:47.658407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.438 qpair failed and we were unable to recover it.
00:37:39.438 [2024-11-17 02:57:47.658540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.438 [2024-11-17 02:57:47.658575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.438 qpair failed and we were unable to recover it.
00:37:39.438 [2024-11-17 02:57:47.658681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.438 [2024-11-17 02:57:47.658714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.438 qpair failed and we were unable to recover it.
00:37:39.438 [2024-11-17 02:57:47.658858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.438 [2024-11-17 02:57:47.658893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.438 qpair failed and we were unable to recover it.
00:37:39.438 [2024-11-17 02:57:47.659056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.438 [2024-11-17 02:57:47.659088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.438 qpair failed and we were unable to recover it.
00:37:39.438 [2024-11-17 02:57:47.659251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.438 [2024-11-17 02:57:47.659289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.438 qpair failed and we were unable to recover it.
00:37:39.438 [2024-11-17 02:57:47.659462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.438 [2024-11-17 02:57:47.659496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.438 qpair failed and we were unable to recover it.
00:37:39.438 [2024-11-17 02:57:47.659630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.438 [2024-11-17 02:57:47.659668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.438 qpair failed and we were unable to recover it.
00:37:39.438 [2024-11-17 02:57:47.659826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.438 [2024-11-17 02:57:47.659864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.438 qpair failed and we were unable to recover it.
00:37:39.438 [2024-11-17 02:57:47.660024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.438 [2024-11-17 02:57:47.660058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.438 qpair failed and we were unable to recover it.
00:37:39.438 [2024-11-17 02:57:47.660212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.438 [2024-11-17 02:57:47.660247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.438 qpair failed and we were unable to recover it.
00:37:39.438 [2024-11-17 02:57:47.660407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.438 [2024-11-17 02:57:47.660444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.438 qpair failed and we were unable to recover it.
00:37:39.438 [2024-11-17 02:57:47.660592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.438 [2024-11-17 02:57:47.660629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.438 qpair failed and we were unable to recover it.
00:37:39.438 [2024-11-17 02:57:47.660813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.438 [2024-11-17 02:57:47.660850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.438 qpair failed and we were unable to recover it.
00:37:39.438 [2024-11-17 02:57:47.661039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.438 [2024-11-17 02:57:47.661072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.438 qpair failed and we were unable to recover it.
00:37:39.438 [2024-11-17 02:57:47.661257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.438 [2024-11-17 02:57:47.661291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.438 qpair failed and we were unable to recover it.
00:37:39.438 [2024-11-17 02:57:47.661390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.438 [2024-11-17 02:57:47.661423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.438 qpair failed and we were unable to recover it.
00:37:39.438 [2024-11-17 02:57:47.661524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.438 [2024-11-17 02:57:47.661564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.438 qpair failed and we were unable to recover it.
00:37:39.438 [2024-11-17 02:57:47.661766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.438 [2024-11-17 02:57:47.661804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.438 qpair failed and we were unable to recover it.
00:37:39.438 [2024-11-17 02:57:47.661985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.438 [2024-11-17 02:57:47.662023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.438 qpair failed and we were unable to recover it.
00:37:39.438 [2024-11-17 02:57:47.662186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.438 [2024-11-17 02:57:47.662222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.439 qpair failed and we were unable to recover it.
00:37:39.439 [2024-11-17 02:57:47.662359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.439 [2024-11-17 02:57:47.662398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.439 qpair failed and we were unable to recover it.
00:37:39.439 [2024-11-17 02:57:47.662532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.439 [2024-11-17 02:57:47.662584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.439 qpair failed and we were unable to recover it.
00:37:39.439 [2024-11-17 02:57:47.662754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.439 [2024-11-17 02:57:47.662791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.439 qpair failed and we were unable to recover it.
00:37:39.439 [2024-11-17 02:57:47.662924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.439 [2024-11-17 02:57:47.662972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.439 qpair failed and we were unable to recover it.
00:37:39.439 [2024-11-17 02:57:47.663152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.439 [2024-11-17 02:57:47.663186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.439 qpair failed and we were unable to recover it.
00:37:39.439 [2024-11-17 02:57:47.663325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.439 [2024-11-17 02:57:47.663359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.439 qpair failed and we were unable to recover it.
00:37:39.439 [2024-11-17 02:57:47.663488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.439 [2024-11-17 02:57:47.663530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.439 qpair failed and we were unable to recover it.
00:37:39.439 [2024-11-17 02:57:47.663657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.439 [2024-11-17 02:57:47.663699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.439 qpair failed and we were unable to recover it.
00:37:39.439 [2024-11-17 02:57:47.663874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.439 [2024-11-17 02:57:47.663911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.439 qpair failed and we were unable to recover it.
00:37:39.439 [2024-11-17 02:57:47.664087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.439 [2024-11-17 02:57:47.664149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.439 qpair failed and we were unable to recover it.
00:37:39.439 [2024-11-17 02:57:47.664317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.439 [2024-11-17 02:57:47.664351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.439 qpair failed and we were unable to recover it.
00:37:39.439 [2024-11-17 02:57:47.664523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.439 [2024-11-17 02:57:47.664558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.439 qpair failed and we were unable to recover it.
00:37:39.439 [2024-11-17 02:57:47.664666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.439 [2024-11-17 02:57:47.664699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.439 qpair failed and we were unable to recover it.
00:37:39.439 [2024-11-17 02:57:47.664834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.439 [2024-11-17 02:57:47.664872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.439 qpair failed and we were unable to recover it.
00:37:39.439 [2024-11-17 02:57:47.665015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.439 [2024-11-17 02:57:47.665049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.439 qpair failed and we were unable to recover it.
00:37:39.439 [2024-11-17 02:57:47.665202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.439 [2024-11-17 02:57:47.665243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.439 qpair failed and we were unable to recover it.
00:37:39.439 [2024-11-17 02:57:47.665388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.439 [2024-11-17 02:57:47.665445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.439 qpair failed and we were unable to recover it.
00:37:39.439 [2024-11-17 02:57:47.665623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.439 [2024-11-17 02:57:47.665678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.439 qpair failed and we were unable to recover it.
00:37:39.439 [2024-11-17 02:57:47.665842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.439 [2024-11-17 02:57:47.665884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.439 qpair failed and we were unable to recover it.
00:37:39.439 [2024-11-17 02:57:47.666023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.439 [2024-11-17 02:57:47.666076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.439 qpair failed and we were unable to recover it.
00:37:39.439 [2024-11-17 02:57:47.666249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.439 [2024-11-17 02:57:47.666283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.439 qpair failed and we were unable to recover it.
00:37:39.439 [2024-11-17 02:57:47.666454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.439 [2024-11-17 02:57:47.666489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.439 qpair failed and we were unable to recover it.
00:37:39.439 [2024-11-17 02:57:47.666601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.439 [2024-11-17 02:57:47.666637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.439 qpair failed and we were unable to recover it.
00:37:39.439 [2024-11-17 02:57:47.666773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.439 [2024-11-17 02:57:47.666806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.439 qpair failed and we were unable to recover it.
00:37:39.439 [2024-11-17 02:57:47.666943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.439 [2024-11-17 02:57:47.666976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.439 qpair failed and we were unable to recover it.
00:37:39.439 [2024-11-17 02:57:47.667126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.439 [2024-11-17 02:57:47.667162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.439 qpair failed and we were unable to recover it.
00:37:39.439 [2024-11-17 02:57:47.667322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.439 [2024-11-17 02:57:47.667356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.439 qpair failed and we were unable to recover it.
00:37:39.439 [2024-11-17 02:57:47.667472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.439 [2024-11-17 02:57:47.667530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.439 qpair failed and we were unable to recover it.
00:37:39.439 [2024-11-17 02:57:47.667766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.439 [2024-11-17 02:57:47.667807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.439 qpair failed and we were unable to recover it.
00:37:39.439 [2024-11-17 02:57:47.667933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.439 [2024-11-17 02:57:47.667969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.439 qpair failed and we were unable to recover it.
00:37:39.439 [2024-11-17 02:57:47.668110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.439 [2024-11-17 02:57:47.668144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.439 qpair failed and we were unable to recover it.
00:37:39.439 [2024-11-17 02:57:47.668282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.439 [2024-11-17 02:57:47.668318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.439 qpair failed and we were unable to recover it.
00:37:39.439 [2024-11-17 02:57:47.668472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.439 [2024-11-17 02:57:47.668510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.439 qpair failed and we were unable to recover it.
00:37:39.439 [2024-11-17 02:57:47.668637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.439 [2024-11-17 02:57:47.668686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.439 qpair failed and we were unable to recover it.
00:37:39.439 [2024-11-17 02:57:47.668823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.439 [2024-11-17 02:57:47.668859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.439 qpair failed and we were unable to recover it.
00:37:39.439 [2024-11-17 02:57:47.669005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.439 [2024-11-17 02:57:47.669060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.439 qpair failed and we were unable to recover it. 00:37:39.439 [2024-11-17 02:57:47.669223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.439 [2024-11-17 02:57:47.669261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.439 qpair failed and we were unable to recover it. 00:37:39.439 [2024-11-17 02:57:47.669380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.439 [2024-11-17 02:57:47.669423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.440 qpair failed and we were unable to recover it. 00:37:39.440 [2024-11-17 02:57:47.669591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.440 [2024-11-17 02:57:47.669624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.440 qpair failed and we were unable to recover it. 00:37:39.440 [2024-11-17 02:57:47.669730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.440 [2024-11-17 02:57:47.669770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.440 qpair failed and we were unable to recover it. 
00:37:39.440 [2024-11-17 02:57:47.669895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.440 [2024-11-17 02:57:47.669936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.440 qpair failed and we were unable to recover it. 00:37:39.440 [2024-11-17 02:57:47.670075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.440 [2024-11-17 02:57:47.670133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.440 qpair failed and we were unable to recover it. 00:37:39.440 [2024-11-17 02:57:47.670281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.440 [2024-11-17 02:57:47.670320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.440 qpair failed and we were unable to recover it. 00:37:39.440 [2024-11-17 02:57:47.670456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.440 [2024-11-17 02:57:47.670511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.440 qpair failed and we were unable to recover it. 00:37:39.440 [2024-11-17 02:57:47.670660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.440 [2024-11-17 02:57:47.670698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.440 qpair failed and we were unable to recover it. 
00:37:39.440 [2024-11-17 02:57:47.670852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.440 [2024-11-17 02:57:47.670898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.440 qpair failed and we were unable to recover it. 00:37:39.440 [2024-11-17 02:57:47.671049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.440 [2024-11-17 02:57:47.671089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.440 qpair failed and we were unable to recover it. 00:37:39.440 [2024-11-17 02:57:47.671225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.440 [2024-11-17 02:57:47.671265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.440 qpair failed and we were unable to recover it. 00:37:39.440 [2024-11-17 02:57:47.671405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.440 [2024-11-17 02:57:47.671457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.440 qpair failed and we were unable to recover it. 00:37:39.440 [2024-11-17 02:57:47.671601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.440 [2024-11-17 02:57:47.671646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.440 qpair failed and we were unable to recover it. 
00:37:39.440 [2024-11-17 02:57:47.671847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.440 [2024-11-17 02:57:47.671885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.440 qpair failed and we were unable to recover it. 00:37:39.440 [2024-11-17 02:57:47.672034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.440 [2024-11-17 02:57:47.672072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.440 qpair failed and we were unable to recover it. 00:37:39.440 [2024-11-17 02:57:47.672214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.440 [2024-11-17 02:57:47.672249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.440 qpair failed and we were unable to recover it. 00:37:39.440 [2024-11-17 02:57:47.672409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.440 [2024-11-17 02:57:47.672443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.440 qpair failed and we were unable to recover it. 00:37:39.440 [2024-11-17 02:57:47.672571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.440 [2024-11-17 02:57:47.672605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.440 qpair failed and we were unable to recover it. 
00:37:39.440 [2024-11-17 02:57:47.672754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.440 [2024-11-17 02:57:47.672792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.440 qpair failed and we were unable to recover it. 00:37:39.440 [2024-11-17 02:57:47.672921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.440 [2024-11-17 02:57:47.672972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.440 qpair failed and we were unable to recover it. 00:37:39.440 [2024-11-17 02:57:47.673102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.440 [2024-11-17 02:57:47.673139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.440 qpair failed and we were unable to recover it. 00:37:39.440 [2024-11-17 02:57:47.673257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.440 [2024-11-17 02:57:47.673291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.440 qpair failed and we were unable to recover it. 00:37:39.440 [2024-11-17 02:57:47.673494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.440 [2024-11-17 02:57:47.673549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.440 qpair failed and we were unable to recover it. 
00:37:39.440 [2024-11-17 02:57:47.673688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.440 [2024-11-17 02:57:47.673722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.440 qpair failed and we were unable to recover it. 00:37:39.440 [2024-11-17 02:57:47.673886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.440 [2024-11-17 02:57:47.673926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.440 qpair failed and we were unable to recover it. 00:37:39.440 [2024-11-17 02:57:47.674074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.440 [2024-11-17 02:57:47.674115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.440 qpair failed and we were unable to recover it. 00:37:39.440 [2024-11-17 02:57:47.674261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.440 [2024-11-17 02:57:47.674296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.440 qpair failed and we were unable to recover it. 00:37:39.440 [2024-11-17 02:57:47.674402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.440 [2024-11-17 02:57:47.674449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.440 qpair failed and we were unable to recover it. 
00:37:39.440 [2024-11-17 02:57:47.674605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.440 [2024-11-17 02:57:47.674642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.440 qpair failed and we were unable to recover it. 00:37:39.440 [2024-11-17 02:57:47.674788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.440 [2024-11-17 02:57:47.674827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.440 qpair failed and we were unable to recover it. 00:37:39.440 [2024-11-17 02:57:47.675009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.440 [2024-11-17 02:57:47.675062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.440 qpair failed and we were unable to recover it. 00:37:39.440 [2024-11-17 02:57:47.675243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.440 [2024-11-17 02:57:47.675291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.440 qpair failed and we were unable to recover it. 00:37:39.440 [2024-11-17 02:57:47.675442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.440 [2024-11-17 02:57:47.675495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.440 qpair failed and we were unable to recover it. 
00:37:39.440 [2024-11-17 02:57:47.675713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.440 [2024-11-17 02:57:47.675769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.440 qpair failed and we were unable to recover it. 00:37:39.440 [2024-11-17 02:57:47.675922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.440 [2024-11-17 02:57:47.675961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.440 qpair failed and we were unable to recover it. 00:37:39.440 [2024-11-17 02:57:47.676117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.440 [2024-11-17 02:57:47.676174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.440 qpair failed and we were unable to recover it. 00:37:39.440 [2024-11-17 02:57:47.676280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.440 [2024-11-17 02:57:47.676314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.440 qpair failed and we were unable to recover it. 00:37:39.440 [2024-11-17 02:57:47.676467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.440 [2024-11-17 02:57:47.676526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.440 qpair failed and we were unable to recover it. 
00:37:39.440 [2024-11-17 02:57:47.676711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.440 [2024-11-17 02:57:47.676748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.441 qpair failed and we were unable to recover it. 00:37:39.441 [2024-11-17 02:57:47.676919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.441 [2024-11-17 02:57:47.676962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.441 qpair failed and we were unable to recover it. 00:37:39.441 [2024-11-17 02:57:47.677080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.441 [2024-11-17 02:57:47.677132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.441 qpair failed and we were unable to recover it. 00:37:39.441 [2024-11-17 02:57:47.677268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.441 [2024-11-17 02:57:47.677301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.441 qpair failed and we were unable to recover it. 00:37:39.441 [2024-11-17 02:57:47.677426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.441 [2024-11-17 02:57:47.677461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.441 qpair failed and we were unable to recover it. 
00:37:39.441 [2024-11-17 02:57:47.677600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.441 [2024-11-17 02:57:47.677640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.441 qpair failed and we were unable to recover it. 00:37:39.441 [2024-11-17 02:57:47.677770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.441 [2024-11-17 02:57:47.677808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.441 qpair failed and we were unable to recover it. 00:37:39.441 [2024-11-17 02:57:47.677950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.441 [2024-11-17 02:57:47.677988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.441 qpair failed and we were unable to recover it. 00:37:39.441 [2024-11-17 02:57:47.678166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.441 [2024-11-17 02:57:47.678201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.441 qpair failed and we were unable to recover it. 00:37:39.441 [2024-11-17 02:57:47.678317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.441 [2024-11-17 02:57:47.678352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.441 qpair failed and we were unable to recover it. 
00:37:39.441 [2024-11-17 02:57:47.678516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.441 [2024-11-17 02:57:47.678550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.441 qpair failed and we were unable to recover it. 00:37:39.441 [2024-11-17 02:57:47.678647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.441 [2024-11-17 02:57:47.678681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.441 qpair failed and we were unable to recover it. 00:37:39.441 [2024-11-17 02:57:47.678824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.441 [2024-11-17 02:57:47.678868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.441 qpair failed and we were unable to recover it. 00:37:39.441 [2024-11-17 02:57:47.679015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.441 [2024-11-17 02:57:47.679067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.441 qpair failed and we were unable to recover it. 00:37:39.441 [2024-11-17 02:57:47.679185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.441 [2024-11-17 02:57:47.679237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.441 qpair failed and we were unable to recover it. 
00:37:39.441 [2024-11-17 02:57:47.679411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.441 [2024-11-17 02:57:47.679460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.441 qpair failed and we were unable to recover it. 00:37:39.441 [2024-11-17 02:57:47.679599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.441 [2024-11-17 02:57:47.679655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.441 qpair failed and we were unable to recover it. 00:37:39.441 [2024-11-17 02:57:47.679818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.441 [2024-11-17 02:57:47.679874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.441 qpair failed and we were unable to recover it. 00:37:39.441 [2024-11-17 02:57:47.680069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.441 [2024-11-17 02:57:47.680116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.441 qpair failed and we were unable to recover it. 00:37:39.441 [2024-11-17 02:57:47.680247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.441 [2024-11-17 02:57:47.680284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.441 qpair failed and we were unable to recover it. 
00:37:39.441 [2024-11-17 02:57:47.680426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.441 [2024-11-17 02:57:47.680462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.441 qpair failed and we were unable to recover it. 00:37:39.441 [2024-11-17 02:57:47.680618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.441 [2024-11-17 02:57:47.680658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.441 qpair failed and we were unable to recover it. 00:37:39.441 [2024-11-17 02:57:47.680831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.441 [2024-11-17 02:57:47.680868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.441 qpair failed and we were unable to recover it. 00:37:39.441 [2024-11-17 02:57:47.681017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.441 [2024-11-17 02:57:47.681056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.441 qpair failed and we were unable to recover it. 00:37:39.441 [2024-11-17 02:57:47.681231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.441 [2024-11-17 02:57:47.681272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.441 qpair failed and we were unable to recover it. 
00:37:39.441 [2024-11-17 02:57:47.681467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.441 [2024-11-17 02:57:47.681519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.441 qpair failed and we were unable to recover it. 00:37:39.441 [2024-11-17 02:57:47.681636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.441 [2024-11-17 02:57:47.681672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.441 qpair failed and we were unable to recover it. 00:37:39.441 [2024-11-17 02:57:47.681823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.441 [2024-11-17 02:57:47.681875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.441 qpair failed and we were unable to recover it. 00:37:39.441 [2024-11-17 02:57:47.682063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.441 [2024-11-17 02:57:47.682139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.441 qpair failed and we were unable to recover it. 00:37:39.441 [2024-11-17 02:57:47.682272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.441 [2024-11-17 02:57:47.682320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.441 qpair failed and we were unable to recover it. 
00:37:39.441 [2024-11-17 02:57:47.682460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.441 [2024-11-17 02:57:47.682496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.441 qpair failed and we were unable to recover it. 00:37:39.441 [2024-11-17 02:57:47.682658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.441 [2024-11-17 02:57:47.682693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.441 qpair failed and we were unable to recover it. 00:37:39.441 [2024-11-17 02:57:47.682823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.441 [2024-11-17 02:57:47.682857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.441 qpair failed and we were unable to recover it. 00:37:39.441 [2024-11-17 02:57:47.682999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.441 [2024-11-17 02:57:47.683035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.441 qpair failed and we were unable to recover it. 00:37:39.441 [2024-11-17 02:57:47.683175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.442 [2024-11-17 02:57:47.683209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.442 qpair failed and we were unable to recover it. 
00:37:39.442 [2024-11-17 02:57:47.683343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.442 [2024-11-17 02:57:47.683378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.442 qpair failed and we were unable to recover it. 00:37:39.442 [2024-11-17 02:57:47.683524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.442 [2024-11-17 02:57:47.683558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.442 qpair failed and we were unable to recover it. 00:37:39.442 [2024-11-17 02:57:47.683697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.442 [2024-11-17 02:57:47.683731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.442 qpair failed and we were unable to recover it. 00:37:39.442 [2024-11-17 02:57:47.683888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.442 [2024-11-17 02:57:47.683951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.442 qpair failed and we were unable to recover it. 00:37:39.442 [2024-11-17 02:57:47.684180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.442 [2024-11-17 02:57:47.684218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.442 qpair failed and we were unable to recover it. 
00:37:39.442 [2024-11-17 02:57:47.684359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.442 [2024-11-17 02:57:47.684422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.442 qpair failed and we were unable to recover it. 00:37:39.442 [2024-11-17 02:57:47.684553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.442 [2024-11-17 02:57:47.684591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.442 qpair failed and we were unable to recover it. 00:37:39.442 [2024-11-17 02:57:47.684823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.442 [2024-11-17 02:57:47.684896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.442 qpair failed and we were unable to recover it. 00:37:39.442 [2024-11-17 02:57:47.685070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.442 [2024-11-17 02:57:47.685115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.442 qpair failed and we were unable to recover it. 00:37:39.442 [2024-11-17 02:57:47.685258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.442 [2024-11-17 02:57:47.685294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.442 qpair failed and we were unable to recover it. 
00:37:39.442 [2024-11-17 02:57:47.685477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.442 [2024-11-17 02:57:47.685516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.442 qpair failed and we were unable to recover it.
00:37:39.442 [2024-11-17 02:57:47.685660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.442 [2024-11-17 02:57:47.685694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.442 qpair failed and we were unable to recover it.
00:37:39.442 [2024-11-17 02:57:47.685889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.442 [2024-11-17 02:57:47.685927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.442 qpair failed and we were unable to recover it.
00:37:39.442 [2024-11-17 02:57:47.686113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.442 [2024-11-17 02:57:47.686147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.442 qpair failed and we were unable to recover it.
00:37:39.442 [2024-11-17 02:57:47.686257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.442 [2024-11-17 02:57:47.686292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.442 qpair failed and we were unable to recover it.
00:37:39.442 [2024-11-17 02:57:47.686473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.442 [2024-11-17 02:57:47.686509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.442 qpair failed and we were unable to recover it.
00:37:39.442 [2024-11-17 02:57:47.686639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.442 [2024-11-17 02:57:47.686697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.442 qpair failed and we were unable to recover it.
00:37:39.442 [2024-11-17 02:57:47.686859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.442 [2024-11-17 02:57:47.686902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.442 qpair failed and we were unable to recover it.
00:37:39.442 [2024-11-17 02:57:47.687057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.442 [2024-11-17 02:57:47.687117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.442 qpair failed and we were unable to recover it.
00:37:39.442 [2024-11-17 02:57:47.687297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.442 [2024-11-17 02:57:47.687347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.442 qpair failed and we were unable to recover it.
00:37:39.442 [2024-11-17 02:57:47.687513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.442 [2024-11-17 02:57:47.687571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.442 qpair failed and we were unable to recover it.
00:37:39.442 [2024-11-17 02:57:47.687717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.442 [2024-11-17 02:57:47.687753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.442 qpair failed and we were unable to recover it.
00:37:39.442 [2024-11-17 02:57:47.687860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.442 [2024-11-17 02:57:47.687894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.442 qpair failed and we were unable to recover it.
00:37:39.442 [2024-11-17 02:57:47.688030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.442 [2024-11-17 02:57:47.688065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.442 qpair failed and we were unable to recover it.
00:37:39.442 [2024-11-17 02:57:47.688219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.442 [2024-11-17 02:57:47.688267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.442 qpair failed and we were unable to recover it.
00:37:39.442 [2024-11-17 02:57:47.688449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.442 [2024-11-17 02:57:47.688485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.442 qpair failed and we were unable to recover it.
00:37:39.442 [2024-11-17 02:57:47.688614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.442 [2024-11-17 02:57:47.688650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.442 qpair failed and we were unable to recover it.
00:37:39.442 [2024-11-17 02:57:47.688790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.442 [2024-11-17 02:57:47.688824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.442 qpair failed and we were unable to recover it.
00:37:39.442 [2024-11-17 02:57:47.688965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.442 [2024-11-17 02:57:47.689001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.442 qpair failed and we were unable to recover it.
00:37:39.442 [2024-11-17 02:57:47.689122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.442 [2024-11-17 02:57:47.689157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.442 qpair failed and we were unable to recover it.
00:37:39.442 [2024-11-17 02:57:47.689293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.442 [2024-11-17 02:57:47.689327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.442 qpair failed and we were unable to recover it.
00:37:39.442 [2024-11-17 02:57:47.689446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.442 [2024-11-17 02:57:47.689481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.442 qpair failed and we were unable to recover it.
00:37:39.442 [2024-11-17 02:57:47.689620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.442 [2024-11-17 02:57:47.689654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.442 qpair failed and we were unable to recover it.
00:37:39.442 [2024-11-17 02:57:47.689762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.442 [2024-11-17 02:57:47.689799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.442 qpair failed and we were unable to recover it.
00:37:39.442 [2024-11-17 02:57:47.689944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.442 [2024-11-17 02:57:47.689982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.442 qpair failed and we were unable to recover it.
00:37:39.442 [2024-11-17 02:57:47.690157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.442 [2024-11-17 02:57:47.690192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.442 qpair failed and we were unable to recover it.
00:37:39.442 [2024-11-17 02:57:47.690330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.442 [2024-11-17 02:57:47.690364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.442 qpair failed and we were unable to recover it.
00:37:39.442 [2024-11-17 02:57:47.690494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.443 [2024-11-17 02:57:47.690534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.443 qpair failed and we were unable to recover it.
00:37:39.443 [2024-11-17 02:57:47.690680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.443 [2024-11-17 02:57:47.690718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.443 qpair failed and we were unable to recover it.
00:37:39.443 [2024-11-17 02:57:47.690910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.443 [2024-11-17 02:57:47.690960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.443 qpair failed and we were unable to recover it.
00:37:39.443 [2024-11-17 02:57:47.691138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.443 [2024-11-17 02:57:47.691186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.443 qpair failed and we were unable to recover it.
00:37:39.443 [2024-11-17 02:57:47.691357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.443 [2024-11-17 02:57:47.691420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.443 qpair failed and we were unable to recover it.
00:37:39.443 [2024-11-17 02:57:47.691597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.443 [2024-11-17 02:57:47.691652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.443 qpair failed and we were unable to recover it.
00:37:39.443 [2024-11-17 02:57:47.691801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.443 [2024-11-17 02:57:47.691839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.443 qpair failed and we were unable to recover it.
00:37:39.443 [2024-11-17 02:57:47.692001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.443 [2024-11-17 02:57:47.692050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.443 qpair failed and we were unable to recover it.
00:37:39.443 [2024-11-17 02:57:47.692202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.443 [2024-11-17 02:57:47.692251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.443 qpair failed and we were unable to recover it.
00:37:39.443 [2024-11-17 02:57:47.692445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.443 [2024-11-17 02:57:47.692507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.443 qpair failed and we were unable to recover it.
00:37:39.443 [2024-11-17 02:57:47.692692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.443 [2024-11-17 02:57:47.692757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.443 qpair failed and we were unable to recover it.
00:37:39.443 [2024-11-17 02:57:47.692990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.443 [2024-11-17 02:57:47.693041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.443 qpair failed and we were unable to recover it.
00:37:39.443 [2024-11-17 02:57:47.693218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.443 [2024-11-17 02:57:47.693256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.443 qpair failed and we were unable to recover it.
00:37:39.443 [2024-11-17 02:57:47.693375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.443 [2024-11-17 02:57:47.693439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.443 qpair failed and we were unable to recover it.
00:37:39.443 [2024-11-17 02:57:47.693568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.443 [2024-11-17 02:57:47.693606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.443 qpair failed and we were unable to recover it.
00:37:39.443 [2024-11-17 02:57:47.693835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.443 [2024-11-17 02:57:47.693900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.443 qpair failed and we were unable to recover it.
00:37:39.443 [2024-11-17 02:57:47.694053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.443 [2024-11-17 02:57:47.694092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.443 qpair failed and we were unable to recover it.
00:37:39.443 [2024-11-17 02:57:47.694286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.443 [2024-11-17 02:57:47.694319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.443 qpair failed and we were unable to recover it.
00:37:39.443 [2024-11-17 02:57:47.694519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.443 [2024-11-17 02:57:47.694583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.443 qpair failed and we were unable to recover it.
00:37:39.443 [2024-11-17 02:57:47.694757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.443 [2024-11-17 02:57:47.694816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.443 qpair failed and we were unable to recover it.
00:37:39.443 [2024-11-17 02:57:47.694965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.443 [2024-11-17 02:57:47.695008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.443 qpair failed and we were unable to recover it.
00:37:39.443 [2024-11-17 02:57:47.695160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.443 [2024-11-17 02:57:47.695196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.443 qpair failed and we were unable to recover it.
00:37:39.443 [2024-11-17 02:57:47.695365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.443 [2024-11-17 02:57:47.695420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.443 qpair failed and we were unable to recover it.
00:37:39.443 [2024-11-17 02:57:47.695542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.443 [2024-11-17 02:57:47.695583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.443 qpair failed and we were unable to recover it.
00:37:39.443 [2024-11-17 02:57:47.695764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.443 [2024-11-17 02:57:47.695820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.443 qpair failed and we were unable to recover it.
00:37:39.443 [2024-11-17 02:57:47.695980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.443 [2024-11-17 02:57:47.696020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.443 qpair failed and we were unable to recover it.
00:37:39.443 [2024-11-17 02:57:47.696209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.443 [2024-11-17 02:57:47.696245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.443 qpair failed and we were unable to recover it.
00:37:39.443 [2024-11-17 02:57:47.696360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.443 [2024-11-17 02:57:47.696420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.443 qpair failed and we were unable to recover it.
00:37:39.443 [2024-11-17 02:57:47.696591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.443 [2024-11-17 02:57:47.696636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.443 qpair failed and we were unable to recover it.
00:37:39.443 [2024-11-17 02:57:47.696792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.443 [2024-11-17 02:57:47.696829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.443 qpair failed and we were unable to recover it.
00:37:39.443 [2024-11-17 02:57:47.696994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.443 [2024-11-17 02:57:47.697037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.443 qpair failed and we were unable to recover it.
00:37:39.443 [2024-11-17 02:57:47.697187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.443 [2024-11-17 02:57:47.697223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.443 qpair failed and we were unable to recover it.
00:37:39.443 [2024-11-17 02:57:47.697350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.443 [2024-11-17 02:57:47.697403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.443 qpair failed and we were unable to recover it.
00:37:39.443 [2024-11-17 02:57:47.697511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.443 [2024-11-17 02:57:47.697549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.443 qpair failed and we were unable to recover it.
00:37:39.443 [2024-11-17 02:57:47.697747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.443 [2024-11-17 02:57:47.697812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.443 qpair failed and we were unable to recover it.
00:37:39.443 [2024-11-17 02:57:47.697966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.443 [2024-11-17 02:57:47.698007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.443 qpair failed and we were unable to recover it.
00:37:39.443 [2024-11-17 02:57:47.698194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.443 [2024-11-17 02:57:47.698231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.443 qpair failed and we were unable to recover it.
00:37:39.443 [2024-11-17 02:57:47.698364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.443 [2024-11-17 02:57:47.698399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.444 qpair failed and we were unable to recover it.
00:37:39.444 [2024-11-17 02:57:47.698496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.444 [2024-11-17 02:57:47.698530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.444 qpair failed and we were unable to recover it.
00:37:39.444 [2024-11-17 02:57:47.698692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.444 [2024-11-17 02:57:47.698731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.444 qpair failed and we were unable to recover it.
00:37:39.444 [2024-11-17 02:57:47.698903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.444 [2024-11-17 02:57:47.698941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.444 qpair failed and we were unable to recover it.
00:37:39.444 [2024-11-17 02:57:47.699076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.444 [2024-11-17 02:57:47.699123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.444 qpair failed and we were unable to recover it.
00:37:39.444 [2024-11-17 02:57:47.699315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.444 [2024-11-17 02:57:47.699349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.444 qpair failed and we were unable to recover it.
00:37:39.444 [2024-11-17 02:57:47.699509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.444 [2024-11-17 02:57:47.699547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.444 qpair failed and we were unable to recover it.
00:37:39.444 [2024-11-17 02:57:47.699712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.444 [2024-11-17 02:57:47.699748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.444 qpair failed and we were unable to recover it.
00:37:39.444 [2024-11-17 02:57:47.699919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.444 [2024-11-17 02:57:47.699960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.444 qpair failed and we were unable to recover it.
00:37:39.444 [2024-11-17 02:57:47.700113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.444 [2024-11-17 02:57:47.700172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.444 qpair failed and we were unable to recover it.
00:37:39.444 [2024-11-17 02:57:47.700336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.444 [2024-11-17 02:57:47.700423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.444 qpair failed and we were unable to recover it.
00:37:39.444 [2024-11-17 02:57:47.700625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.444 [2024-11-17 02:57:47.700664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.444 qpair failed and we were unable to recover it.
00:37:39.444 [2024-11-17 02:57:47.700863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.444 [2024-11-17 02:57:47.700903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.444 qpair failed and we were unable to recover it.
00:37:39.444 [2024-11-17 02:57:47.701046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.444 [2024-11-17 02:57:47.701083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.444 qpair failed and we were unable to recover it.
00:37:39.444 [2024-11-17 02:57:47.701278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.444 [2024-11-17 02:57:47.701318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.444 qpair failed and we were unable to recover it.
00:37:39.444 [2024-11-17 02:57:47.701432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.444 [2024-11-17 02:57:47.701466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.444 qpair failed and we were unable to recover it.
00:37:39.444 [2024-11-17 02:57:47.701562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.444 [2024-11-17 02:57:47.701596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.444 qpair failed and we were unable to recover it.
00:37:39.444 [2024-11-17 02:57:47.701788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.444 [2024-11-17 02:57:47.701856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.444 qpair failed and we were unable to recover it.
00:37:39.444 [2024-11-17 02:57:47.702010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.444 [2024-11-17 02:57:47.702069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.444 qpair failed and we were unable to recover it.
00:37:39.444 [2024-11-17 02:57:47.702229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.444 [2024-11-17 02:57:47.702266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.444 qpair failed and we were unable to recover it.
00:37:39.444 [2024-11-17 02:57:47.702396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.444 [2024-11-17 02:57:47.702448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.444 qpair failed and we were unable to recover it.
00:37:39.444 [2024-11-17 02:57:47.702645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.444 [2024-11-17 02:57:47.702684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.444 qpair failed and we were unable to recover it.
00:37:39.444 [2024-11-17 02:57:47.702844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.444 [2024-11-17 02:57:47.702881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.444 qpair failed and we were unable to recover it.
00:37:39.444 [2024-11-17 02:57:47.703037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.444 [2024-11-17 02:57:47.703080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.444 qpair failed and we were unable to recover it.
00:37:39.444 [2024-11-17 02:57:47.703272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.444 [2024-11-17 02:57:47.703306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.444 qpair failed and we were unable to recover it.
00:37:39.444 [2024-11-17 02:57:47.703467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.444 [2024-11-17 02:57:47.703502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.444 qpair failed and we were unable to recover it.
00:37:39.444 [2024-11-17 02:57:47.703662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.444 [2024-11-17 02:57:47.703696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.444 qpair failed and we were unable to recover it.
00:37:39.444 [2024-11-17 02:57:47.703868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.444 [2024-11-17 02:57:47.703906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.444 qpair failed and we were unable to recover it.
00:37:39.444 [2024-11-17 02:57:47.704042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.444 [2024-11-17 02:57:47.704112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.444 qpair failed and we were unable to recover it.
00:37:39.444 [2024-11-17 02:57:47.704272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.444 [2024-11-17 02:57:47.704308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.444 qpair failed and we were unable to recover it.
00:37:39.444 [2024-11-17 02:57:47.704468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.444 [2024-11-17 02:57:47.704507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.444 qpair failed and we were unable to recover it.
00:37:39.444 [2024-11-17 02:57:47.704652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.444 [2024-11-17 02:57:47.704689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.444 qpair failed and we were unable to recover it.
00:37:39.444 [2024-11-17 02:57:47.704809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.444 [2024-11-17 02:57:47.704847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.444 qpair failed and we were unable to recover it.
00:37:39.444 [2024-11-17 02:57:47.704992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.444 [2024-11-17 02:57:47.705031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.444 qpair failed and we were unable to recover it.
00:37:39.444 [2024-11-17 02:57:47.705217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.444 [2024-11-17 02:57:47.705272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.444 qpair failed and we were unable to recover it.
00:37:39.444 [2024-11-17 02:57:47.705480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.444 [2024-11-17 02:57:47.705538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.444 qpair failed and we were unable to recover it.
00:37:39.444 [2024-11-17 02:57:47.705683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.444 [2024-11-17 02:57:47.705722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.444 qpair failed and we were unable to recover it.
00:37:39.444 [2024-11-17 02:57:47.705874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.444 [2024-11-17 02:57:47.705916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.444 qpair failed and we were unable to recover it.
00:37:39.444 [2024-11-17 02:57:47.706111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.444 [2024-11-17 02:57:47.706173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.445 qpair failed and we were unable to recover it.
00:37:39.445 [2024-11-17 02:57:47.706304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.445 [2024-11-17 02:57:47.706353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.445 qpair failed and we were unable to recover it.
00:37:39.445 [2024-11-17 02:57:47.706511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.445 [2024-11-17 02:57:47.706546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.445 qpair failed and we were unable to recover it.
00:37:39.445 [2024-11-17 02:57:47.706663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.445 [2024-11-17 02:57:47.706710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.445 qpair failed and we were unable to recover it.
00:37:39.445 [2024-11-17 02:57:47.706820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.445 [2024-11-17 02:57:47.706854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.445 qpair failed and we were unable to recover it.
00:37:39.445 [2024-11-17 02:57:47.706966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.445 [2024-11-17 02:57:47.707000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.445 qpair failed and we were unable to recover it.
00:37:39.445 [2024-11-17 02:57:47.707159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.445 [2024-11-17 02:57:47.707195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.445 qpair failed and we were unable to recover it.
00:37:39.445 [2024-11-17 02:57:47.707328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.445 [2024-11-17 02:57:47.707361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.445 qpair failed and we were unable to recover it.
00:37:39.445 [2024-11-17 02:57:47.707496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.445 [2024-11-17 02:57:47.707536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.445 qpair failed and we were unable to recover it.
00:37:39.445 [2024-11-17 02:57:47.707713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.445 [2024-11-17 02:57:47.707752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.445 qpair failed and we were unable to recover it.
00:37:39.445 [2024-11-17 02:57:47.707905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.445 [2024-11-17 02:57:47.707938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.445 qpair failed and we were unable to recover it.
00:37:39.445 [2024-11-17 02:57:47.708132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.445 [2024-11-17 02:57:47.708204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.445 qpair failed and we were unable to recover it.
00:37:39.445 [2024-11-17 02:57:47.708388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.445 [2024-11-17 02:57:47.708445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.445 qpair failed and we were unable to recover it.
00:37:39.445 [2024-11-17 02:57:47.708591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.445 [2024-11-17 02:57:47.708629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.445 qpair failed and we were unable to recover it. 00:37:39.445 [2024-11-17 02:57:47.708768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.445 [2024-11-17 02:57:47.708804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.445 qpair failed and we were unable to recover it. 00:37:39.445 [2024-11-17 02:57:47.708942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.445 [2024-11-17 02:57:47.708982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.445 qpair failed and we were unable to recover it. 00:37:39.445 [2024-11-17 02:57:47.709214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.445 [2024-11-17 02:57:47.709250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.445 qpair failed and we were unable to recover it. 00:37:39.445 [2024-11-17 02:57:47.709360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.445 [2024-11-17 02:57:47.709394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.445 qpair failed and we were unable to recover it. 
00:37:39.445 [2024-11-17 02:57:47.709562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.445 [2024-11-17 02:57:47.709601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.445 qpair failed and we were unable to recover it. 00:37:39.445 [2024-11-17 02:57:47.709759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.445 [2024-11-17 02:57:47.709794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.445 qpair failed and we were unable to recover it. 00:37:39.445 [2024-11-17 02:57:47.709936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.445 [2024-11-17 02:57:47.709973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.445 qpair failed and we were unable to recover it. 00:37:39.445 [2024-11-17 02:57:47.710113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.445 [2024-11-17 02:57:47.710149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.445 qpair failed and we were unable to recover it. 00:37:39.445 [2024-11-17 02:57:47.710291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.445 [2024-11-17 02:57:47.710326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.445 qpair failed and we were unable to recover it. 
00:37:39.445 [2024-11-17 02:57:47.710458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.445 [2024-11-17 02:57:47.710492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.445 qpair failed and we were unable to recover it. 00:37:39.445 [2024-11-17 02:57:47.710619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.445 [2024-11-17 02:57:47.710654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.445 qpair failed and we were unable to recover it. 00:37:39.445 [2024-11-17 02:57:47.710818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.445 [2024-11-17 02:57:47.710852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.445 qpair failed and we were unable to recover it. 00:37:39.445 [2024-11-17 02:57:47.710997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.445 [2024-11-17 02:57:47.711033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.445 qpair failed and we were unable to recover it. 00:37:39.445 [2024-11-17 02:57:47.711176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.445 [2024-11-17 02:57:47.711211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.445 qpair failed and we were unable to recover it. 
00:37:39.445 [2024-11-17 02:57:47.711390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.445 [2024-11-17 02:57:47.711424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.445 qpair failed and we were unable to recover it. 00:37:39.445 [2024-11-17 02:57:47.711554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.445 [2024-11-17 02:57:47.711616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.445 qpair failed and we were unable to recover it. 00:37:39.445 [2024-11-17 02:57:47.711773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.445 [2024-11-17 02:57:47.711843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.445 qpair failed and we were unable to recover it. 00:37:39.445 [2024-11-17 02:57:47.711968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.445 [2024-11-17 02:57:47.712001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.445 qpair failed and we were unable to recover it. 00:37:39.445 [2024-11-17 02:57:47.712111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.445 [2024-11-17 02:57:47.712145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.445 qpair failed and we were unable to recover it. 
00:37:39.445 [2024-11-17 02:57:47.712248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.445 [2024-11-17 02:57:47.712283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.445 qpair failed and we were unable to recover it. 00:37:39.445 [2024-11-17 02:57:47.712422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.445 [2024-11-17 02:57:47.712456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.445 qpair failed and we were unable to recover it. 00:37:39.445 [2024-11-17 02:57:47.712609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.445 [2024-11-17 02:57:47.712653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.445 qpair failed and we were unable to recover it. 00:37:39.445 [2024-11-17 02:57:47.712833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.445 [2024-11-17 02:57:47.712866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.445 qpair failed and we were unable to recover it. 00:37:39.445 [2024-11-17 02:57:47.712989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.445 [2024-11-17 02:57:47.713022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.445 qpair failed and we were unable to recover it. 
00:37:39.446 [2024-11-17 02:57:47.713189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.446 [2024-11-17 02:57:47.713225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.446 qpair failed and we were unable to recover it. 00:37:39.446 [2024-11-17 02:57:47.713337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.446 [2024-11-17 02:57:47.713370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.446 qpair failed and we were unable to recover it. 00:37:39.446 [2024-11-17 02:57:47.713479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.446 [2024-11-17 02:57:47.713517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.446 qpair failed and we were unable to recover it. 00:37:39.446 [2024-11-17 02:57:47.713628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.446 [2024-11-17 02:57:47.713664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.446 qpair failed and we were unable to recover it. 00:37:39.446 [2024-11-17 02:57:47.713821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.446 [2024-11-17 02:57:47.713858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.446 qpair failed and we were unable to recover it. 
00:37:39.446 [2024-11-17 02:57:47.714020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.446 [2024-11-17 02:57:47.714059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.446 qpair failed and we were unable to recover it. 00:37:39.446 [2024-11-17 02:57:47.714189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.446 [2024-11-17 02:57:47.714230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.446 qpair failed and we were unable to recover it. 00:37:39.446 [2024-11-17 02:57:47.714343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.446 [2024-11-17 02:57:47.714377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.446 qpair failed and we were unable to recover it. 00:37:39.446 [2024-11-17 02:57:47.714521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.446 [2024-11-17 02:57:47.714559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.446 qpair failed and we were unable to recover it. 00:37:39.446 [2024-11-17 02:57:47.714667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.446 [2024-11-17 02:57:47.714702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.446 qpair failed and we were unable to recover it. 
00:37:39.446 [2024-11-17 02:57:47.714832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.446 [2024-11-17 02:57:47.714876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.446 qpair failed and we were unable to recover it. 00:37:39.446 [2024-11-17 02:57:47.715020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.446 [2024-11-17 02:57:47.715057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.446 qpair failed and we were unable to recover it. 00:37:39.446 [2024-11-17 02:57:47.715235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.446 [2024-11-17 02:57:47.715285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.446 qpair failed and we were unable to recover it. 00:37:39.446 [2024-11-17 02:57:47.715399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.446 [2024-11-17 02:57:47.715436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.446 qpair failed and we were unable to recover it. 00:37:39.446 [2024-11-17 02:57:47.715550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.446 [2024-11-17 02:57:47.715593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.446 qpair failed and we were unable to recover it. 
00:37:39.446 [2024-11-17 02:57:47.715733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.446 [2024-11-17 02:57:47.715769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.446 qpair failed and we were unable to recover it. 00:37:39.446 [2024-11-17 02:57:47.715933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.446 [2024-11-17 02:57:47.715972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.446 qpair failed and we were unable to recover it. 00:37:39.446 [2024-11-17 02:57:47.716139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.446 [2024-11-17 02:57:47.716175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.446 qpair failed and we were unable to recover it. 00:37:39.446 [2024-11-17 02:57:47.716320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.446 [2024-11-17 02:57:47.716355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.446 qpair failed and we were unable to recover it. 00:37:39.446 [2024-11-17 02:57:47.716475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.446 [2024-11-17 02:57:47.716508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.446 qpair failed and we were unable to recover it. 
00:37:39.446 [2024-11-17 02:57:47.716645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.446 [2024-11-17 02:57:47.716684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.446 qpair failed and we were unable to recover it. 00:37:39.446 [2024-11-17 02:57:47.716859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.446 [2024-11-17 02:57:47.716894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.446 qpair failed and we were unable to recover it. 00:37:39.446 [2024-11-17 02:57:47.716994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.446 [2024-11-17 02:57:47.717027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.446 qpair failed and we were unable to recover it. 00:37:39.446 [2024-11-17 02:57:47.717188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.446 [2024-11-17 02:57:47.717224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.446 qpair failed and we were unable to recover it. 00:37:39.446 [2024-11-17 02:57:47.717335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.446 [2024-11-17 02:57:47.717372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.446 qpair failed and we were unable to recover it. 
00:37:39.446 [2024-11-17 02:57:47.717494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.446 [2024-11-17 02:57:47.717542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.446 qpair failed and we were unable to recover it. 00:37:39.446 [2024-11-17 02:57:47.717696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.446 [2024-11-17 02:57:47.717734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.446 qpair failed and we were unable to recover it. 00:37:39.446 [2024-11-17 02:57:47.717909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.446 [2024-11-17 02:57:47.717964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.446 qpair failed and we were unable to recover it. 00:37:39.446 [2024-11-17 02:57:47.718162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.446 [2024-11-17 02:57:47.718198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.446 qpair failed and we were unable to recover it. 00:37:39.446 [2024-11-17 02:57:47.718364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.446 [2024-11-17 02:57:47.718399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.446 qpair failed and we were unable to recover it. 
00:37:39.446 [2024-11-17 02:57:47.718508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.446 [2024-11-17 02:57:47.718542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.446 qpair failed and we were unable to recover it. 00:37:39.446 [2024-11-17 02:57:47.718674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.446 [2024-11-17 02:57:47.718707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.446 qpair failed and we were unable to recover it. 00:37:39.446 [2024-11-17 02:57:47.718871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.446 [2024-11-17 02:57:47.718907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.446 qpair failed and we were unable to recover it. 00:37:39.446 [2024-11-17 02:57:47.719059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.446 [2024-11-17 02:57:47.719116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.446 qpair failed and we were unable to recover it. 00:37:39.447 [2024-11-17 02:57:47.719271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.447 [2024-11-17 02:57:47.719313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.447 qpair failed and we were unable to recover it. 
00:37:39.447 [2024-11-17 02:57:47.719475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.447 [2024-11-17 02:57:47.719510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.447 qpair failed and we were unable to recover it. 00:37:39.447 [2024-11-17 02:57:47.719643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.447 [2024-11-17 02:57:47.719676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.447 qpair failed and we were unable to recover it. 00:37:39.447 [2024-11-17 02:57:47.719811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.447 [2024-11-17 02:57:47.719875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.447 qpair failed and we were unable to recover it. 00:37:39.447 [2024-11-17 02:57:47.720054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.447 [2024-11-17 02:57:47.720112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.447 qpair failed and we were unable to recover it. 00:37:39.447 [2024-11-17 02:57:47.720281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.447 [2024-11-17 02:57:47.720317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.447 qpair failed and we were unable to recover it. 
00:37:39.447 [2024-11-17 02:57:47.720449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.447 [2024-11-17 02:57:47.720483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.447 qpair failed and we were unable to recover it. 00:37:39.447 [2024-11-17 02:57:47.720597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.447 [2024-11-17 02:57:47.720630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.447 qpair failed and we were unable to recover it. 00:37:39.447 [2024-11-17 02:57:47.720727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.447 [2024-11-17 02:57:47.720762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.447 qpair failed and we were unable to recover it. 00:37:39.447 [2024-11-17 02:57:47.720870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.447 [2024-11-17 02:57:47.720905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.447 qpair failed and we were unable to recover it. 00:37:39.447 [2024-11-17 02:57:47.721092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.447 [2024-11-17 02:57:47.721149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.447 qpair failed and we were unable to recover it. 
00:37:39.447 [2024-11-17 02:57:47.721328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.447 [2024-11-17 02:57:47.721365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.447 qpair failed and we were unable to recover it. 00:37:39.447 [2024-11-17 02:57:47.721503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.447 [2024-11-17 02:57:47.721555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.447 qpair failed and we were unable to recover it. 00:37:39.447 [2024-11-17 02:57:47.721726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.447 [2024-11-17 02:57:47.721766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.447 qpair failed and we were unable to recover it. 00:37:39.447 [2024-11-17 02:57:47.721912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.447 [2024-11-17 02:57:47.721951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.447 qpair failed and we were unable to recover it. 00:37:39.447 [2024-11-17 02:57:47.722120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.447 [2024-11-17 02:57:47.722155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.447 qpair failed and we were unable to recover it. 
00:37:39.447 [2024-11-17 02:57:47.722261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.447 [2024-11-17 02:57:47.722295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.447 qpair failed and we were unable to recover it. 00:37:39.447 [2024-11-17 02:57:47.722460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.447 [2024-11-17 02:57:47.722495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.447 qpair failed and we were unable to recover it. 00:37:39.447 [2024-11-17 02:57:47.722600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.447 [2024-11-17 02:57:47.722634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.447 qpair failed and we were unable to recover it. 00:37:39.447 [2024-11-17 02:57:47.722813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.447 [2024-11-17 02:57:47.722852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.447 qpair failed and we were unable to recover it. 00:37:39.447 [2024-11-17 02:57:47.723054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.447 [2024-11-17 02:57:47.723109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.447 qpair failed and we were unable to recover it. 
00:37:39.450 [2024-11-17 02:57:47.745054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.450 [2024-11-17 02:57:47.745108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.450 qpair failed and we were unable to recover it. 00:37:39.450 [2024-11-17 02:57:47.745270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.450 [2024-11-17 02:57:47.745304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.450 qpair failed and we were unable to recover it. 00:37:39.450 [2024-11-17 02:57:47.745469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.450 [2024-11-17 02:57:47.745509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.450 qpair failed and we were unable to recover it. 00:37:39.450 [2024-11-17 02:57:47.745662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.450 [2024-11-17 02:57:47.745701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.450 qpair failed and we were unable to recover it. 00:37:39.450 [2024-11-17 02:57:47.745905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.450 [2024-11-17 02:57:47.745943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.450 qpair failed and we were unable to recover it. 
00:37:39.450 [2024-11-17 02:57:47.746083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.450 [2024-11-17 02:57:47.746130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.450 qpair failed and we were unable to recover it. 00:37:39.450 [2024-11-17 02:57:47.746313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.450 [2024-11-17 02:57:47.746347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.450 qpair failed and we were unable to recover it. 00:37:39.450 [2024-11-17 02:57:47.746500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.450 [2024-11-17 02:57:47.746538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.450 qpair failed and we were unable to recover it. 00:37:39.450 [2024-11-17 02:57:47.746712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.450 [2024-11-17 02:57:47.746752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.450 qpair failed and we were unable to recover it. 00:37:39.450 [2024-11-17 02:57:47.746901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.450 [2024-11-17 02:57:47.746944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.450 qpair failed and we were unable to recover it. 
00:37:39.450 [2024-11-17 02:57:47.747122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.450 [2024-11-17 02:57:47.747157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.450 qpair failed and we were unable to recover it. 00:37:39.450 [2024-11-17 02:57:47.747315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.450 [2024-11-17 02:57:47.747351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.450 qpair failed and we were unable to recover it. 00:37:39.450 [2024-11-17 02:57:47.747459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.450 [2024-11-17 02:57:47.747494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.450 qpair failed and we were unable to recover it. 00:37:39.450 [2024-11-17 02:57:47.747653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.450 [2024-11-17 02:57:47.747691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.450 qpair failed and we were unable to recover it. 00:37:39.450 [2024-11-17 02:57:47.747824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.451 [2024-11-17 02:57:47.747859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.451 qpair failed and we were unable to recover it. 
00:37:39.451 [2024-11-17 02:57:47.748049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.451 [2024-11-17 02:57:47.748112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.451 qpair failed and we were unable to recover it. 00:37:39.451 [2024-11-17 02:57:47.748310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.451 [2024-11-17 02:57:47.748347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.451 qpair failed and we were unable to recover it. 00:37:39.451 [2024-11-17 02:57:47.748515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.451 [2024-11-17 02:57:47.748551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.451 qpair failed and we were unable to recover it. 00:37:39.451 [2024-11-17 02:57:47.748709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.451 [2024-11-17 02:57:47.748746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.451 qpair failed and we were unable to recover it. 00:37:39.451 [2024-11-17 02:57:47.748945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.451 [2024-11-17 02:57:47.748991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.451 qpair failed and we were unable to recover it. 
00:37:39.451 [2024-11-17 02:57:47.749172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.451 [2024-11-17 02:57:47.749207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.451 qpair failed and we were unable to recover it. 00:37:39.451 [2024-11-17 02:57:47.749363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.451 [2024-11-17 02:57:47.749409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.451 qpair failed and we were unable to recover it. 00:37:39.451 [2024-11-17 02:57:47.749541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.451 [2024-11-17 02:57:47.749574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.451 qpair failed and we were unable to recover it. 00:37:39.451 [2024-11-17 02:57:47.749767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.451 [2024-11-17 02:57:47.749813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.451 qpair failed and we were unable to recover it. 00:37:39.451 [2024-11-17 02:57:47.750004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.451 [2024-11-17 02:57:47.750045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.451 qpair failed and we were unable to recover it. 
00:37:39.451 [2024-11-17 02:57:47.750187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.451 [2024-11-17 02:57:47.750222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.451 qpair failed and we were unable to recover it. 00:37:39.451 [2024-11-17 02:57:47.750353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.451 [2024-11-17 02:57:47.750387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.451 qpair failed and we were unable to recover it. 00:37:39.451 [2024-11-17 02:57:47.750560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.451 [2024-11-17 02:57:47.750594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.451 qpair failed and we were unable to recover it. 00:37:39.451 [2024-11-17 02:57:47.750748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.451 [2024-11-17 02:57:47.750787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.451 qpair failed and we were unable to recover it. 00:37:39.451 [2024-11-17 02:57:47.750935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.451 [2024-11-17 02:57:47.750973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.451 qpair failed and we were unable to recover it. 
00:37:39.451 [2024-11-17 02:57:47.751174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.451 [2024-11-17 02:57:47.751213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.451 qpair failed and we were unable to recover it. 00:37:39.451 [2024-11-17 02:57:47.751360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.451 [2024-11-17 02:57:47.751394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.451 qpair failed and we were unable to recover it. 00:37:39.451 [2024-11-17 02:57:47.751503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.451 [2024-11-17 02:57:47.751540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.451 qpair failed and we were unable to recover it. 00:37:39.451 [2024-11-17 02:57:47.751677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.451 [2024-11-17 02:57:47.751720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.451 qpair failed and we were unable to recover it. 00:37:39.451 [2024-11-17 02:57:47.751864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.451 [2024-11-17 02:57:47.751902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.451 qpair failed and we were unable to recover it. 
00:37:39.451 [2024-11-17 02:57:47.752005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.451 [2024-11-17 02:57:47.752041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.451 qpair failed and we were unable to recover it. 00:37:39.451 [2024-11-17 02:57:47.752207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.451 [2024-11-17 02:57:47.752243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.451 qpair failed and we were unable to recover it. 00:37:39.451 [2024-11-17 02:57:47.752433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.451 [2024-11-17 02:57:47.752471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.451 qpair failed and we were unable to recover it. 00:37:39.451 [2024-11-17 02:57:47.752593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.451 [2024-11-17 02:57:47.752630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.451 qpair failed and we were unable to recover it. 00:37:39.451 [2024-11-17 02:57:47.752766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.451 [2024-11-17 02:57:47.752803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.451 qpair failed and we were unable to recover it. 
00:37:39.451 [2024-11-17 02:57:47.752916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.451 [2024-11-17 02:57:47.752955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.451 qpair failed and we were unable to recover it. 00:37:39.451 [2024-11-17 02:57:47.753115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.451 [2024-11-17 02:57:47.753168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.451 qpair failed and we were unable to recover it. 00:37:39.451 [2024-11-17 02:57:47.753305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.451 [2024-11-17 02:57:47.753341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.451 qpair failed and we were unable to recover it. 00:37:39.451 [2024-11-17 02:57:47.753469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.451 [2024-11-17 02:57:47.753507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.451 qpair failed and we were unable to recover it. 00:37:39.451 [2024-11-17 02:57:47.753635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.451 [2024-11-17 02:57:47.753673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.451 qpair failed and we were unable to recover it. 
00:37:39.451 [2024-11-17 02:57:47.753824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.451 [2024-11-17 02:57:47.753862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.451 qpair failed and we were unable to recover it. 00:37:39.451 [2024-11-17 02:57:47.754024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.451 [2024-11-17 02:57:47.754058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.451 qpair failed and we were unable to recover it. 00:37:39.451 [2024-11-17 02:57:47.754213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.451 [2024-11-17 02:57:47.754247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.451 qpair failed and we were unable to recover it. 00:37:39.451 [2024-11-17 02:57:47.754383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.451 [2024-11-17 02:57:47.754419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.451 qpair failed and we were unable to recover it. 00:37:39.451 [2024-11-17 02:57:47.754550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.451 [2024-11-17 02:57:47.754590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.451 qpair failed and we were unable to recover it. 
00:37:39.451 [2024-11-17 02:57:47.754729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.451 [2024-11-17 02:57:47.754764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.451 qpair failed and we were unable to recover it. 00:37:39.451 [2024-11-17 02:57:47.754933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.451 [2024-11-17 02:57:47.754968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.451 qpair failed and we were unable to recover it. 00:37:39.451 [2024-11-17 02:57:47.755120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.452 [2024-11-17 02:57:47.755156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.452 qpair failed and we were unable to recover it. 00:37:39.452 [2024-11-17 02:57:47.755282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.452 [2024-11-17 02:57:47.755315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.452 qpair failed and we were unable to recover it. 00:37:39.452 [2024-11-17 02:57:47.755500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.452 [2024-11-17 02:57:47.755536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.452 qpair failed and we were unable to recover it. 
00:37:39.452 [2024-11-17 02:57:47.755694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.452 [2024-11-17 02:57:47.755728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.452 qpair failed and we were unable to recover it. 00:37:39.452 [2024-11-17 02:57:47.755862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.452 [2024-11-17 02:57:47.755896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.452 qpair failed and we were unable to recover it. 00:37:39.452 [2024-11-17 02:57:47.756059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.452 [2024-11-17 02:57:47.756102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.452 qpair failed and we were unable to recover it. 00:37:39.452 [2024-11-17 02:57:47.756278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.452 [2024-11-17 02:57:47.756314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.452 qpair failed and we were unable to recover it. 00:37:39.452 [2024-11-17 02:57:47.756443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.452 [2024-11-17 02:57:47.756489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.452 qpair failed and we were unable to recover it. 
00:37:39.452 [2024-11-17 02:57:47.756595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.452 [2024-11-17 02:57:47.756630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.452 qpair failed and we were unable to recover it. 00:37:39.452 [2024-11-17 02:57:47.756767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.452 [2024-11-17 02:57:47.756804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.452 qpair failed and we were unable to recover it. 00:37:39.452 [2024-11-17 02:57:47.756945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.452 [2024-11-17 02:57:47.756979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.452 qpair failed and we were unable to recover it. 00:37:39.452 [2024-11-17 02:57:47.757136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.452 [2024-11-17 02:57:47.757176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.452 qpair failed and we were unable to recover it. 00:37:39.452 [2024-11-17 02:57:47.757350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.452 [2024-11-17 02:57:47.757385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.452 qpair failed and we were unable to recover it. 
00:37:39.452 [2024-11-17 02:57:47.757503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.452 [2024-11-17 02:57:47.757536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.452 qpair failed and we were unable to recover it. 00:37:39.452 [2024-11-17 02:57:47.757682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.452 [2024-11-17 02:57:47.757720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.452 qpair failed and we were unable to recover it. 00:37:39.452 [2024-11-17 02:57:47.757840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.452 [2024-11-17 02:57:47.757874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.452 qpair failed and we were unable to recover it. 00:37:39.452 [2024-11-17 02:57:47.758045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.452 [2024-11-17 02:57:47.758111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.452 qpair failed and we were unable to recover it. 00:37:39.452 [2024-11-17 02:57:47.758255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.452 [2024-11-17 02:57:47.758289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.452 qpair failed and we were unable to recover it. 
00:37:39.452 [2024-11-17 02:57:47.758448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.452 [2024-11-17 02:57:47.758481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.452 qpair failed and we were unable to recover it. 00:37:39.452 [2024-11-17 02:57:47.758624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.452 [2024-11-17 02:57:47.758662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.452 qpair failed and we were unable to recover it. 00:37:39.452 [2024-11-17 02:57:47.758852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.452 [2024-11-17 02:57:47.758893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.452 qpair failed and we were unable to recover it. 00:37:39.452 [2024-11-17 02:57:47.759050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.452 [2024-11-17 02:57:47.759085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.452 qpair failed and we were unable to recover it. 00:37:39.452 [2024-11-17 02:57:47.759247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.452 [2024-11-17 02:57:47.759282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.452 qpair failed and we were unable to recover it. 
00:37:39.452 [2024-11-17 02:57:47.759394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.452 [2024-11-17 02:57:47.759429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.452 qpair failed and we were unable to recover it. 00:37:39.452 [2024-11-17 02:57:47.759550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.452 [2024-11-17 02:57:47.759585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.452 qpair failed and we were unable to recover it. 00:37:39.452 [2024-11-17 02:57:47.759693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.452 [2024-11-17 02:57:47.759726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.452 qpair failed and we were unable to recover it. 00:37:39.452 [2024-11-17 02:57:47.759858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.452 [2024-11-17 02:57:47.759897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.452 qpair failed and we were unable to recover it. 00:37:39.452 [2024-11-17 02:57:47.760085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.452 [2024-11-17 02:57:47.760128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.452 qpair failed and we were unable to recover it. 
00:37:39.452 [2024-11-17 02:57:47.760265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.452 [2024-11-17 02:57:47.760298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.452 qpair failed and we were unable to recover it. 00:37:39.452 [2024-11-17 02:57:47.760402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.452 [2024-11-17 02:57:47.760454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.452 qpair failed and we were unable to recover it. 00:37:39.452 [2024-11-17 02:57:47.760632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.452 [2024-11-17 02:57:47.760666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.452 qpair failed and we were unable to recover it. 00:37:39.452 [2024-11-17 02:57:47.760774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.452 [2024-11-17 02:57:47.760808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.452 qpair failed and we were unable to recover it. 00:37:39.452 [2024-11-17 02:57:47.760946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.452 [2024-11-17 02:57:47.760984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.452 qpair failed and we were unable to recover it. 
00:37:39.452 [2024-11-17 02:57:47.761091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.452 [2024-11-17 02:57:47.761134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.452 qpair failed and we were unable to recover it.
00:37:39.452 [2024-11-17 02:57:47.761248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.452 [2024-11-17 02:57:47.761282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.452 qpair failed and we were unable to recover it.
00:37:39.452 [2024-11-17 02:57:47.761403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.452 [2024-11-17 02:57:47.761439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.452 qpair failed and we were unable to recover it.
00:37:39.452 [2024-11-17 02:57:47.761606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.452 [2024-11-17 02:57:47.761640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.452 qpair failed and we were unable to recover it.
00:37:39.452 [2024-11-17 02:57:47.761734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.452 [2024-11-17 02:57:47.761771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.452 qpair failed and we were unable to recover it.
00:37:39.452 [2024-11-17 02:57:47.761956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.453 [2024-11-17 02:57:47.761996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.453 qpair failed and we were unable to recover it.
00:37:39.453 [2024-11-17 02:57:47.762148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.453 [2024-11-17 02:57:47.762183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.453 qpair failed and we were unable to recover it.
00:37:39.453 [2024-11-17 02:57:47.762331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.453 [2024-11-17 02:57:47.762369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.453 qpair failed and we were unable to recover it.
00:37:39.453 [2024-11-17 02:57:47.762534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.453 [2024-11-17 02:57:47.762567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.453 qpair failed and we were unable to recover it.
00:37:39.453 [2024-11-17 02:57:47.762704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.453 [2024-11-17 02:57:47.762738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.453 qpair failed and we were unable to recover it.
00:37:39.453 [2024-11-17 02:57:47.762880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.453 [2024-11-17 02:57:47.762913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.453 qpair failed and we were unable to recover it.
00:37:39.453 [2024-11-17 02:57:47.763049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.453 [2024-11-17 02:57:47.763084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.453 qpair failed and we were unable to recover it.
00:37:39.453 [2024-11-17 02:57:47.763204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.453 [2024-11-17 02:57:47.763238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.453 qpair failed and we were unable to recover it.
00:37:39.453 [2024-11-17 02:57:47.763352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.453 [2024-11-17 02:57:47.763385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.453 qpair failed and we were unable to recover it.
00:37:39.453 [2024-11-17 02:57:47.763549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.453 [2024-11-17 02:57:47.763588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.453 qpair failed and we were unable to recover it.
00:37:39.453 [2024-11-17 02:57:47.763735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.453 [2024-11-17 02:57:47.763769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.453 qpair failed and we were unable to recover it.
00:37:39.453 [2024-11-17 02:57:47.763933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.453 [2024-11-17 02:57:47.763966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.453 qpair failed and we were unable to recover it.
00:37:39.453 [2024-11-17 02:57:47.764122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.453 [2024-11-17 02:57:47.764171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.453 qpair failed and we were unable to recover it.
00:37:39.453 [2024-11-17 02:57:47.764299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.453 [2024-11-17 02:57:47.764336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.453 qpair failed and we were unable to recover it.
00:37:39.453 [2024-11-17 02:57:47.764475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.453 [2024-11-17 02:57:47.764510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.453 qpair failed and we were unable to recover it.
00:37:39.453 [2024-11-17 02:57:47.764644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.453 [2024-11-17 02:57:47.764678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.453 qpair failed and we were unable to recover it.
00:37:39.453 [2024-11-17 02:57:47.764837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.453 [2024-11-17 02:57:47.764872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.453 qpair failed and we were unable to recover it.
00:37:39.453 [2024-11-17 02:57:47.765003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.453 [2024-11-17 02:57:47.765037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.453 qpair failed and we were unable to recover it.
00:37:39.453 [2024-11-17 02:57:47.765181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.453 [2024-11-17 02:57:47.765218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.453 qpair failed and we were unable to recover it.
00:37:39.453 [2024-11-17 02:57:47.765343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.453 [2024-11-17 02:57:47.765377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.453 qpair failed and we were unable to recover it.
00:37:39.453 [2024-11-17 02:57:47.765497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.453 [2024-11-17 02:57:47.765531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.453 qpair failed and we were unable to recover it.
00:37:39.453 [2024-11-17 02:57:47.765702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.453 [2024-11-17 02:57:47.765736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.453 qpair failed and we were unable to recover it.
00:37:39.453 [2024-11-17 02:57:47.765896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.453 [2024-11-17 02:57:47.765930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.453 qpair failed and we were unable to recover it.
00:37:39.453 [2024-11-17 02:57:47.766077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.453 [2024-11-17 02:57:47.766132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.453 qpair failed and we were unable to recover it.
00:37:39.453 [2024-11-17 02:57:47.766292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.453 [2024-11-17 02:57:47.766327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.453 qpair failed and we were unable to recover it.
00:37:39.453 [2024-11-17 02:57:47.766481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.453 [2024-11-17 02:57:47.766515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.453 qpair failed and we were unable to recover it.
00:37:39.453 [2024-11-17 02:57:47.766618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.453 [2024-11-17 02:57:47.766651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.453 qpair failed and we were unable to recover it.
00:37:39.453 [2024-11-17 02:57:47.766821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.453 [2024-11-17 02:57:47.766858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.453 qpair failed and we were unable to recover it.
00:37:39.453 [2024-11-17 02:57:47.766987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.453 [2024-11-17 02:57:47.767024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.453 qpair failed and we were unable to recover it.
00:37:39.453 [2024-11-17 02:57:47.767134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.453 [2024-11-17 02:57:47.767169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.453 qpair failed and we were unable to recover it.
00:37:39.453 [2024-11-17 02:57:47.767277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.453 [2024-11-17 02:57:47.767329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.453 qpair failed and we were unable to recover it.
00:37:39.453 [2024-11-17 02:57:47.767515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.453 [2024-11-17 02:57:47.767550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.453 qpair failed and we were unable to recover it.
00:37:39.453 [2024-11-17 02:57:47.767713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.453 [2024-11-17 02:57:47.767747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.453 qpair failed and we were unable to recover it.
00:37:39.453 [2024-11-17 02:57:47.767934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.453 [2024-11-17 02:57:47.767970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.453 qpair failed and we were unable to recover it.
00:37:39.453 [2024-11-17 02:57:47.768118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.453 [2024-11-17 02:57:47.768177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.453 qpair failed and we were unable to recover it.
00:37:39.453 [2024-11-17 02:57:47.768319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.453 [2024-11-17 02:57:47.768353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.453 qpair failed and we were unable to recover it.
00:37:39.453 [2024-11-17 02:57:47.768452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.453 [2024-11-17 02:57:47.768484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.453 qpair failed and we were unable to recover it.
00:37:39.453 [2024-11-17 02:57:47.768619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.453 [2024-11-17 02:57:47.768655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.454 qpair failed and we were unable to recover it.
00:37:39.454 [2024-11-17 02:57:47.768812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.454 [2024-11-17 02:57:47.768862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.454 qpair failed and we were unable to recover it.
00:37:39.454 [2024-11-17 02:57:47.769038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.454 [2024-11-17 02:57:47.769087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.454 qpair failed and we were unable to recover it.
00:37:39.454 [2024-11-17 02:57:47.769269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.454 [2024-11-17 02:57:47.769304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.454 qpair failed and we were unable to recover it.
00:37:39.454 [2024-11-17 02:57:47.769429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.454 [2024-11-17 02:57:47.769465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.454 qpair failed and we were unable to recover it.
00:37:39.454 [2024-11-17 02:57:47.769574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.454 [2024-11-17 02:57:47.769608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.454 qpair failed and we were unable to recover it.
00:37:39.454 [2024-11-17 02:57:47.769740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.454 [2024-11-17 02:57:47.769774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.454 qpair failed and we were unable to recover it.
00:37:39.454 [2024-11-17 02:57:47.769930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.454 [2024-11-17 02:57:47.769963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.454 qpair failed and we were unable to recover it.
00:37:39.454 [2024-11-17 02:57:47.770160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.454 [2024-11-17 02:57:47.770200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.454 qpair failed and we were unable to recover it.
00:37:39.454 [2024-11-17 02:57:47.770367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.454 [2024-11-17 02:57:47.770410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.454 qpair failed and we were unable to recover it.
00:37:39.454 [2024-11-17 02:57:47.770569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.454 [2024-11-17 02:57:47.770606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.454 qpair failed and we were unable to recover it.
00:37:39.454 [2024-11-17 02:57:47.770728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.454 [2024-11-17 02:57:47.770762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.454 qpair failed and we were unable to recover it.
00:37:39.454 [2024-11-17 02:57:47.770878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.454 [2024-11-17 02:57:47.770916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.454 qpair failed and we were unable to recover it.
00:37:39.454 [2024-11-17 02:57:47.771047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.454 [2024-11-17 02:57:47.771081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.454 qpair failed and we were unable to recover it.
00:37:39.454 [2024-11-17 02:57:47.771210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.454 [2024-11-17 02:57:47.771246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.454 qpair failed and we were unable to recover it.
00:37:39.454 [2024-11-17 02:57:47.771385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.454 [2024-11-17 02:57:47.771419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.454 qpair failed and we were unable to recover it.
00:37:39.454 [2024-11-17 02:57:47.771582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.454 [2024-11-17 02:57:47.771616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.454 qpair failed and we were unable to recover it.
00:37:39.454 [2024-11-17 02:57:47.771769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.454 [2024-11-17 02:57:47.771808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.454 qpair failed and we were unable to recover it.
00:37:39.454 [2024-11-17 02:57:47.771970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.454 [2024-11-17 02:57:47.772006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.454 qpair failed and we were unable to recover it.
00:37:39.454 [2024-11-17 02:57:47.772143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.454 [2024-11-17 02:57:47.772177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.454 qpair failed and we were unable to recover it.
00:37:39.454 [2024-11-17 02:57:47.772312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.454 [2024-11-17 02:57:47.772348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.454 qpair failed and we were unable to recover it.
00:37:39.454 [2024-11-17 02:57:47.772476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.454 [2024-11-17 02:57:47.772510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.454 qpair failed and we were unable to recover it.
00:37:39.454 [2024-11-17 02:57:47.772649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.454 [2024-11-17 02:57:47.772689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.454 qpair failed and we were unable to recover it.
00:37:39.454 [2024-11-17 02:57:47.772831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.454 [2024-11-17 02:57:47.772864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.454 qpair failed and we were unable to recover it.
00:37:39.454 [2024-11-17 02:57:47.772995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.454 [2024-11-17 02:57:47.773037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.454 qpair failed and we were unable to recover it.
00:37:39.454 [2024-11-17 02:57:47.773215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.454 [2024-11-17 02:57:47.773253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.454 qpair failed and we were unable to recover it.
00:37:39.454 [2024-11-17 02:57:47.773413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.454 [2024-11-17 02:57:47.773446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.454 qpair failed and we were unable to recover it.
00:37:39.454 [2024-11-17 02:57:47.773565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.454 [2024-11-17 02:57:47.773600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.454 qpair failed and we were unable to recover it.
00:37:39.454 [2024-11-17 02:57:47.773757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.454 [2024-11-17 02:57:47.773791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.454 qpair failed and we were unable to recover it.
00:37:39.454 [2024-11-17 02:57:47.773906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.454 [2024-11-17 02:57:47.773964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.454 qpair failed and we were unable to recover it.
00:37:39.454 [2024-11-17 02:57:47.774128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.454 [2024-11-17 02:57:47.774162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.454 qpair failed and we were unable to recover it.
00:37:39.454 [2024-11-17 02:57:47.774273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.454 [2024-11-17 02:57:47.774307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.454 qpair failed and we were unable to recover it.
00:37:39.454 [2024-11-17 02:57:47.774482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.454 [2024-11-17 02:57:47.774524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.454 qpair failed and we were unable to recover it.
00:37:39.454 [2024-11-17 02:57:47.774679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.454 [2024-11-17 02:57:47.774716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.454 qpair failed and we were unable to recover it.
00:37:39.454 [2024-11-17 02:57:47.774901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.454 [2024-11-17 02:57:47.774938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.454 qpair failed and we were unable to recover it.
00:37:39.454 [2024-11-17 02:57:47.775112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.454 [2024-11-17 02:57:47.775152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.454 qpair failed and we were unable to recover it.
00:37:39.454 [2024-11-17 02:57:47.775297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.454 [2024-11-17 02:57:47.775334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.454 qpair failed and we were unable to recover it.
00:37:39.454 [2024-11-17 02:57:47.775483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.454 [2024-11-17 02:57:47.775521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.454 qpair failed and we were unable to recover it.
00:37:39.455 [2024-11-17 02:57:47.775704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.455 [2024-11-17 02:57:47.775738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.455 qpair failed and we were unable to recover it.
00:37:39.455 [2024-11-17 02:57:47.775848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.455 [2024-11-17 02:57:47.775882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.455 qpair failed and we were unable to recover it.
00:37:39.455 [2024-11-17 02:57:47.776055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.455 [2024-11-17 02:57:47.776115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.455 qpair failed and we were unable to recover it.
00:37:39.455 [2024-11-17 02:57:47.776253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.455 [2024-11-17 02:57:47.776286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.455 qpair failed and we were unable to recover it.
00:37:39.455 [2024-11-17 02:57:47.776408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.455 [2024-11-17 02:57:47.776455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.455 qpair failed and we were unable to recover it.
00:37:39.455 [2024-11-17 02:57:47.776597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.455 [2024-11-17 02:57:47.776631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.455 qpair failed and we were unable to recover it.
00:37:39.455 [2024-11-17 02:57:47.776746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.455 [2024-11-17 02:57:47.776780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.455 qpair failed and we were unable to recover it.
00:37:39.455 [2024-11-17 02:57:47.776881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.455 [2024-11-17 02:57:47.776923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.455 qpair failed and we were unable to recover it.
00:37:39.455 [2024-11-17 02:57:47.777042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.455 [2024-11-17 02:57:47.777086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.455 qpair failed and we were unable to recover it.
00:37:39.455 [2024-11-17 02:57:47.777234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.455 [2024-11-17 02:57:47.777268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.455 qpair failed and we were unable to recover it.
00:37:39.455 [2024-11-17 02:57:47.777441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.455 [2024-11-17 02:57:47.777476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.455 qpair failed and we were unable to recover it.
00:37:39.455 [2024-11-17 02:57:47.777614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.455 [2024-11-17 02:57:47.777665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.455 qpair failed and we were unable to recover it.
00:37:39.455 [2024-11-17 02:57:47.777826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.455 [2024-11-17 02:57:47.777865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.455 qpair failed and we were unable to recover it.
00:37:39.455 [2024-11-17 02:57:47.778026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.455 [2024-11-17 02:57:47.778060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.455 qpair failed and we were unable to recover it.
00:37:39.455 [2024-11-17 02:57:47.778319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.455 [2024-11-17 02:57:47.778355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.455 qpair failed and we were unable to recover it.
00:37:39.455 [2024-11-17 02:57:47.778529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.455 [2024-11-17 02:57:47.778566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.455 qpair failed and we were unable to recover it.
00:37:39.455 [2024-11-17 02:57:47.778703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.455 [2024-11-17 02:57:47.778741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.455 qpair failed and we were unable to recover it.
00:37:39.455 [2024-11-17 02:57:47.778856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.455 [2024-11-17 02:57:47.778890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.455 qpair failed and we were unable to recover it.
00:37:39.455 [2024-11-17 02:57:47.779033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.455 [2024-11-17 02:57:47.779067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.455 qpair failed and we were unable to recover it.
00:37:39.455 [2024-11-17 02:57:47.779187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.455 [2024-11-17 02:57:47.779222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.455 qpair failed and we were unable to recover it.
00:37:39.455 [2024-11-17 02:57:47.779390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.455 [2024-11-17 02:57:47.779424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.455 qpair failed and we were unable to recover it.
00:37:39.455 [2024-11-17 02:57:47.779523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.455 [2024-11-17 02:57:47.779564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.455 qpair failed and we were unable to recover it.
00:37:39.455 [2024-11-17 02:57:47.779736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.455 [2024-11-17 02:57:47.779770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.455 qpair failed and we were unable to recover it.
00:37:39.455 [2024-11-17 02:57:47.779927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.455 [2024-11-17 02:57:47.779965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.455 qpair failed and we were unable to recover it.
00:37:39.455 [2024-11-17 02:57:47.780145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.455 [2024-11-17 02:57:47.780194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.455 qpair failed and we were unable to recover it.
00:37:39.455 [2024-11-17 02:57:47.780345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.455 [2024-11-17 02:57:47.780384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.455 qpair failed and we were unable to recover it.
00:37:39.455 [2024-11-17 02:57:47.780564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.455 [2024-11-17 02:57:47.780602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.455 qpair failed and we were unable to recover it.
00:37:39.455 [2024-11-17 02:57:47.780739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.455 [2024-11-17 02:57:47.780794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.455 qpair failed and we were unable to recover it.
00:37:39.455 [2024-11-17 02:57:47.780971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.455 [2024-11-17 02:57:47.781006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.455 qpair failed and we were unable to recover it.
00:37:39.455 [2024-11-17 02:57:47.781171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.455 [2024-11-17 02:57:47.781217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.455 qpair failed and we were unable to recover it.
00:37:39.455 [2024-11-17 02:57:47.781377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.455 [2024-11-17 02:57:47.781433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.455 qpair failed and we were unable to recover it.
00:37:39.455 [2024-11-17 02:57:47.781564] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2780 is same with the state(6) to be set
00:37:39.455 [2024-11-17 02:57:47.781778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.455 [2024-11-17 02:57:47.781828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.455 qpair failed and we were unable to recover it.
00:37:39.455 [2024-11-17 02:57:47.781994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.455 [2024-11-17 02:57:47.782030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.455 qpair failed and we were unable to recover it.
00:37:39.455 [2024-11-17 02:57:47.782218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.456 [2024-11-17 02:57:47.782253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.456 qpair failed and we were unable to recover it.
00:37:39.456 [2024-11-17 02:57:47.782435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.456 [2024-11-17 02:57:47.782473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.456 qpair failed and we were unable to recover it.
00:37:39.456 [2024-11-17 02:57:47.782596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.456 [2024-11-17 02:57:47.782632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.456 qpair failed and we were unable to recover it.
00:37:39.456 [2024-11-17 02:57:47.782746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.456 [2024-11-17 02:57:47.782784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.456 qpair failed and we were unable to recover it.
00:37:39.456 [2024-11-17 02:57:47.782954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.456 [2024-11-17 02:57:47.782991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.456 qpair failed and we were unable to recover it.
00:37:39.456 [2024-11-17 02:57:47.783163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.456 [2024-11-17 02:57:47.783200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.456 qpair failed and we were unable to recover it.
00:37:39.456 [2024-11-17 02:57:47.783320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.456 [2024-11-17 02:57:47.783358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.456 qpair failed and we were unable to recover it.
00:37:39.456 [2024-11-17 02:57:47.783506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.456 [2024-11-17 02:57:47.783558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.456 qpair failed and we were unable to recover it.
00:37:39.456 [2024-11-17 02:57:47.783792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.456 [2024-11-17 02:57:47.783862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.456 qpair failed and we were unable to recover it.
00:37:39.456 [2024-11-17 02:57:47.784001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.456 [2024-11-17 02:57:47.784038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.456 qpair failed and we were unable to recover it.
00:37:39.456 [2024-11-17 02:57:47.784155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.456 [2024-11-17 02:57:47.784193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.456 qpair failed and we were unable to recover it.
00:37:39.456 [2024-11-17 02:57:47.784311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.456 [2024-11-17 02:57:47.784350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.456 qpair failed and we were unable to recover it.
00:37:39.456 [2024-11-17 02:57:47.784514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.456 [2024-11-17 02:57:47.784553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.456 qpair failed and we were unable to recover it.
00:37:39.456 [2024-11-17 02:57:47.784705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.456 [2024-11-17 02:57:47.784757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.456 qpair failed and we were unable to recover it.
00:37:39.456 [2024-11-17 02:57:47.784960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.456 [2024-11-17 02:57:47.785011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.456 qpair failed and we were unable to recover it.
00:37:39.456 [2024-11-17 02:57:47.785169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.456 [2024-11-17 02:57:47.785204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.456 qpair failed and we were unable to recover it.
00:37:39.456 [2024-11-17 02:57:47.785353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.456 [2024-11-17 02:57:47.785418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.456 qpair failed and we were unable to recover it.
00:37:39.456 [2024-11-17 02:57:47.785605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.456 [2024-11-17 02:57:47.785646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.456 qpair failed and we were unable to recover it.
00:37:39.456 [2024-11-17 02:57:47.785792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.456 [2024-11-17 02:57:47.785851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.456 qpair failed and we were unable to recover it.
00:37:39.456 [2024-11-17 02:57:47.786040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.456 [2024-11-17 02:57:47.786075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.456 qpair failed and we were unable to recover it.
00:37:39.456 [2024-11-17 02:57:47.786250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.456 [2024-11-17 02:57:47.786284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.456 qpair failed and we were unable to recover it.
00:37:39.456 [2024-11-17 02:57:47.786429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.456 [2024-11-17 02:57:47.786478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.456 qpair failed and we were unable to recover it.
00:37:39.456 [2024-11-17 02:57:47.786660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.456 [2024-11-17 02:57:47.786721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.456 qpair failed and we were unable to recover it.
00:37:39.456 [2024-11-17 02:57:47.786916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.456 [2024-11-17 02:57:47.786973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.456 qpair failed and we were unable to recover it.
00:37:39.456 [2024-11-17 02:57:47.787110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.456 [2024-11-17 02:57:47.787151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.456 qpair failed and we were unable to recover it.
00:37:39.456 [2024-11-17 02:57:47.787338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.456 [2024-11-17 02:57:47.787391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.456 qpair failed and we were unable to recover it.
00:37:39.456 [2024-11-17 02:57:47.787575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.456 [2024-11-17 02:57:47.787628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.456 qpair failed and we were unable to recover it.
00:37:39.456 [2024-11-17 02:57:47.787847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.456 [2024-11-17 02:57:47.787881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.456 qpair failed and we were unable to recover it.
00:37:39.456 [2024-11-17 02:57:47.788016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.456 [2024-11-17 02:57:47.788051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.456 qpair failed and we were unable to recover it.
00:37:39.456 [2024-11-17 02:57:47.788205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.456 [2024-11-17 02:57:47.788259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.456 qpair failed and we were unable to recover it.
00:37:39.456 [2024-11-17 02:57:47.788426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.456 [2024-11-17 02:57:47.788466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.456 qpair failed and we were unable to recover it.
00:37:39.456 [2024-11-17 02:57:47.788620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.456 [2024-11-17 02:57:47.788658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.456 qpair failed and we were unable to recover it.
00:37:39.456 [2024-11-17 02:57:47.788850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.456 [2024-11-17 02:57:47.788900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.456 qpair failed and we were unable to recover it.
00:37:39.456 [2024-11-17 02:57:47.789029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.456 [2024-11-17 02:57:47.789063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.456 qpair failed and we were unable to recover it.
00:37:39.456 [2024-11-17 02:57:47.789217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.456 [2024-11-17 02:57:47.789252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.456 qpair failed and we were unable to recover it.
00:37:39.456 [2024-11-17 02:57:47.789406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.456 [2024-11-17 02:57:47.789443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.456 qpair failed and we were unable to recover it.
00:37:39.456 [2024-11-17 02:57:47.789628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.456 [2024-11-17 02:57:47.789670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.456 qpair failed and we were unable to recover it.
00:37:39.456 [2024-11-17 02:57:47.789783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.456 [2024-11-17 02:57:47.789822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.456 qpair failed and we were unable to recover it.
00:37:39.456 [2024-11-17 02:57:47.790002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.457 [2024-11-17 02:57:47.790039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.457 qpair failed and we were unable to recover it.
00:37:39.457 [2024-11-17 02:57:47.790214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.457 [2024-11-17 02:57:47.790249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.457 qpair failed and we were unable to recover it.
00:37:39.457 [2024-11-17 02:57:47.790426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.457 [2024-11-17 02:57:47.790481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.457 qpair failed and we were unable to recover it.
00:37:39.457 [2024-11-17 02:57:47.790640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.457 [2024-11-17 02:57:47.790682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.457 qpair failed and we were unable to recover it.
00:37:39.457 [2024-11-17 02:57:47.790871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.457 [2024-11-17 02:57:47.790933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.457 qpair failed and we were unable to recover it.
00:37:39.457 [2024-11-17 02:57:47.791119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.457 [2024-11-17 02:57:47.791155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.457 qpair failed and we were unable to recover it.
00:37:39.457 [2024-11-17 02:57:47.791261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.457 [2024-11-17 02:57:47.791296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.457 qpair failed and we were unable to recover it.
00:37:39.457 [2024-11-17 02:57:47.791433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.457 [2024-11-17 02:57:47.791468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.457 qpair failed and we were unable to recover it.
00:37:39.457 [2024-11-17 02:57:47.791631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.457 [2024-11-17 02:57:47.791670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.457 qpair failed and we were unable to recover it.
00:37:39.457 [2024-11-17 02:57:47.791810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.457 [2024-11-17 02:57:47.791862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.457 qpair failed and we were unable to recover it.
00:37:39.457 [2024-11-17 02:57:47.792022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.457 [2024-11-17 02:57:47.792063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.457 qpair failed and we were unable to recover it.
00:37:39.457 [2024-11-17 02:57:47.792270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.457 [2024-11-17 02:57:47.792319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.457 qpair failed and we were unable to recover it.
00:37:39.457 [2024-11-17 02:57:47.792460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.457 [2024-11-17 02:57:47.792517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.457 qpair failed and we were unable to recover it.
00:37:39.457 [2024-11-17 02:57:47.792707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.457 [2024-11-17 02:57:47.792764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.457 qpair failed and we were unable to recover it.
00:37:39.457 [2024-11-17 02:57:47.792909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.457 [2024-11-17 02:57:47.792944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.457 qpair failed and we were unable to recover it.
00:37:39.457 [2024-11-17 02:57:47.793115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.457 [2024-11-17 02:57:47.793151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.457 qpair failed and we were unable to recover it.
00:37:39.457 [2024-11-17 02:57:47.793332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.457 [2024-11-17 02:57:47.793387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.457 qpair failed and we were unable to recover it.
00:37:39.457 [2024-11-17 02:57:47.793507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.457 [2024-11-17 02:57:47.793543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.457 qpair failed and we were unable to recover it.
00:37:39.457 [2024-11-17 02:57:47.793683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.457 [2024-11-17 02:57:47.793723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.457 qpair failed and we were unable to recover it.
00:37:39.457 [2024-11-17 02:57:47.793913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.457 [2024-11-17 02:57:47.793948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.457 qpair failed and we were unable to recover it.
00:37:39.457 [2024-11-17 02:57:47.794110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.457 [2024-11-17 02:57:47.794146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.457 qpair failed and we were unable to recover it.
00:37:39.457 [2024-11-17 02:57:47.794299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.457 [2024-11-17 02:57:47.794338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.457 qpair failed and we were unable to recover it.
00:37:39.457 [2024-11-17 02:57:47.794532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.457 [2024-11-17 02:57:47.794586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.457 qpair failed and we were unable to recover it.
00:37:39.457 [2024-11-17 02:57:47.794773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.457 [2024-11-17 02:57:47.794833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.457 qpair failed and we were unable to recover it.
00:37:39.457 [2024-11-17 02:57:47.795003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.457 [2024-11-17 02:57:47.795040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.457 qpair failed and we were unable to recover it.
00:37:39.457 [2024-11-17 02:57:47.795169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.457 [2024-11-17 02:57:47.795205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.457 qpair failed and we were unable to recover it.
00:37:39.457 [2024-11-17 02:57:47.795344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.457 [2024-11-17 02:57:47.795402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.457 qpair failed and we were unable to recover it.
00:37:39.457 [2024-11-17 02:57:47.795570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.457 [2024-11-17 02:57:47.795630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.457 qpair failed and we were unable to recover it.
00:37:39.457 [2024-11-17 02:57:47.795850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.457 [2024-11-17 02:57:47.795908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.457 qpair failed and we were unable to recover it.
00:37:39.457 [2024-11-17 02:57:47.796107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.457 [2024-11-17 02:57:47.796174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.457 qpair failed and we were unable to recover it.
00:37:39.457 [2024-11-17 02:57:47.796321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.457 [2024-11-17 02:57:47.796358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.457 qpair failed and we were unable to recover it.
00:37:39.457 [2024-11-17 02:57:47.796517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.457 [2024-11-17 02:57:47.796591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.457 qpair failed and we were unable to recover it.
00:37:39.457 [2024-11-17 02:57:47.796789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.457 [2024-11-17 02:57:47.796827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.457 qpair failed and we were unable to recover it.
00:37:39.457 [2024-11-17 02:57:47.796983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.457 [2024-11-17 02:57:47.797020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.457 qpair failed and we were unable to recover it.
00:37:39.457 [2024-11-17 02:57:47.797187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.457 [2024-11-17 02:57:47.797237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.457 qpair failed and we were unable to recover it.
00:37:39.457 [2024-11-17 02:57:47.797410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.457 [2024-11-17 02:57:47.797447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.457 qpair failed and we were unable to recover it.
00:37:39.457 [2024-11-17 02:57:47.797606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.457 [2024-11-17 02:57:47.797659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.457 qpair failed and we were unable to recover it.
00:37:39.457 [2024-11-17 02:57:47.797815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.457 [2024-11-17 02:57:47.797867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.457 qpair failed and we were unable to recover it.
00:37:39.457 [2024-11-17 02:57:47.797986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.458 [2024-11-17 02:57:47.798035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.458 qpair failed and we were unable to recover it.
00:37:39.458 [2024-11-17 02:57:47.798172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.458 [2024-11-17 02:57:47.798223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.458 qpair failed and we were unable to recover it. 00:37:39.458 [2024-11-17 02:57:47.798395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.458 [2024-11-17 02:57:47.798434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.458 qpair failed and we were unable to recover it. 00:37:39.458 [2024-11-17 02:57:47.798639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.458 [2024-11-17 02:57:47.798677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.458 qpair failed and we were unable to recover it. 00:37:39.458 [2024-11-17 02:57:47.798887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.458 [2024-11-17 02:57:47.798924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.458 qpair failed and we were unable to recover it. 00:37:39.458 [2024-11-17 02:57:47.799109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.458 [2024-11-17 02:57:47.799160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.458 qpair failed and we were unable to recover it. 
00:37:39.458 [2024-11-17 02:57:47.799296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.458 [2024-11-17 02:57:47.799329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.458 qpair failed and we were unable to recover it. 00:37:39.458 [2024-11-17 02:57:47.799483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.458 [2024-11-17 02:57:47.799542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.458 qpair failed and we were unable to recover it. 00:37:39.458 [2024-11-17 02:57:47.799723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.458 [2024-11-17 02:57:47.799761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.458 qpair failed and we were unable to recover it. 00:37:39.458 [2024-11-17 02:57:47.799881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.458 [2024-11-17 02:57:47.799919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.458 qpair failed and we were unable to recover it. 00:37:39.458 [2024-11-17 02:57:47.800066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.458 [2024-11-17 02:57:47.800110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.458 qpair failed and we were unable to recover it. 
00:37:39.458 [2024-11-17 02:57:47.800229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.458 [2024-11-17 02:57:47.800262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.458 qpair failed and we were unable to recover it. 00:37:39.458 [2024-11-17 02:57:47.800419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.458 [2024-11-17 02:57:47.800456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.458 qpair failed and we were unable to recover it. 00:37:39.458 [2024-11-17 02:57:47.800585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.458 [2024-11-17 02:57:47.800634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.458 qpair failed and we were unable to recover it. 00:37:39.458 [2024-11-17 02:57:47.800809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.458 [2024-11-17 02:57:47.800846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.458 qpair failed and we were unable to recover it. 00:37:39.458 [2024-11-17 02:57:47.801010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.458 [2024-11-17 02:57:47.801048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.458 qpair failed and we were unable to recover it. 
00:37:39.458 [2024-11-17 02:57:47.801183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.458 [2024-11-17 02:57:47.801232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.458 qpair failed and we were unable to recover it. 00:37:39.458 [2024-11-17 02:57:47.801362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.458 [2024-11-17 02:57:47.801422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.458 qpair failed and we were unable to recover it. 00:37:39.458 [2024-11-17 02:57:47.801620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.458 [2024-11-17 02:57:47.801681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.458 qpair failed and we were unable to recover it. 00:37:39.458 [2024-11-17 02:57:47.801905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.458 [2024-11-17 02:57:47.801966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.458 qpair failed and we were unable to recover it. 00:37:39.458 [2024-11-17 02:57:47.802114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.458 [2024-11-17 02:57:47.802168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.458 qpair failed and we were unable to recover it. 
00:37:39.458 [2024-11-17 02:57:47.802290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.458 [2024-11-17 02:57:47.802339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.458 qpair failed and we were unable to recover it. 00:37:39.458 [2024-11-17 02:57:47.802536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.458 [2024-11-17 02:57:47.802598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.458 qpair failed and we were unable to recover it. 00:37:39.458 [2024-11-17 02:57:47.802826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.458 [2024-11-17 02:57:47.802883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.458 qpair failed and we were unable to recover it. 00:37:39.458 [2024-11-17 02:57:47.802989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.458 [2024-11-17 02:57:47.803027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.458 qpair failed and we were unable to recover it. 00:37:39.458 [2024-11-17 02:57:47.803170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.458 [2024-11-17 02:57:47.803207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.458 qpair failed and we were unable to recover it. 
00:37:39.458 [2024-11-17 02:57:47.803433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.458 [2024-11-17 02:57:47.803468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.458 qpair failed and we were unable to recover it. 00:37:39.458 [2024-11-17 02:57:47.803647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.458 [2024-11-17 02:57:47.803700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.458 qpair failed and we were unable to recover it. 00:37:39.458 [2024-11-17 02:57:47.803847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.458 [2024-11-17 02:57:47.803910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.458 qpair failed and we were unable to recover it. 00:37:39.458 [2024-11-17 02:57:47.804043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.458 [2024-11-17 02:57:47.804079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.458 qpair failed and we were unable to recover it. 00:37:39.458 [2024-11-17 02:57:47.804220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.458 [2024-11-17 02:57:47.804270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.458 qpair failed and we were unable to recover it. 
00:37:39.458 [2024-11-17 02:57:47.804409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.458 [2024-11-17 02:57:47.804446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.458 qpair failed and we were unable to recover it. 00:37:39.458 [2024-11-17 02:57:47.804608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.458 [2024-11-17 02:57:47.804647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.458 qpair failed and we were unable to recover it. 00:37:39.458 [2024-11-17 02:57:47.804839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.458 [2024-11-17 02:57:47.804884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.458 qpair failed and we were unable to recover it. 00:37:39.458 [2024-11-17 02:57:47.805061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.458 [2024-11-17 02:57:47.805113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.458 qpair failed and we were unable to recover it. 00:37:39.458 [2024-11-17 02:57:47.805245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.458 [2024-11-17 02:57:47.805279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.458 qpair failed and we were unable to recover it. 
00:37:39.458 [2024-11-17 02:57:47.805396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.458 [2024-11-17 02:57:47.805430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.458 qpair failed and we were unable to recover it. 00:37:39.458 [2024-11-17 02:57:47.805587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.458 [2024-11-17 02:57:47.805620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.458 qpair failed and we were unable to recover it. 00:37:39.458 [2024-11-17 02:57:47.805745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.459 [2024-11-17 02:57:47.805778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.459 qpair failed and we were unable to recover it. 00:37:39.459 [2024-11-17 02:57:47.805958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.459 [2024-11-17 02:57:47.806023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.459 qpair failed and we were unable to recover it. 00:37:39.459 [2024-11-17 02:57:47.806183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.459 [2024-11-17 02:57:47.806238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.459 qpair failed and we were unable to recover it. 
00:37:39.459 [2024-11-17 02:57:47.806416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.459 [2024-11-17 02:57:47.806455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.459 qpair failed and we were unable to recover it. 00:37:39.459 [2024-11-17 02:57:47.806691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.459 [2024-11-17 02:57:47.806730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.459 qpair failed and we were unable to recover it. 00:37:39.459 [2024-11-17 02:57:47.806907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.459 [2024-11-17 02:57:47.806945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.459 qpair failed and we were unable to recover it. 00:37:39.459 [2024-11-17 02:57:47.807112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.459 [2024-11-17 02:57:47.807167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.459 qpair failed and we were unable to recover it. 00:37:39.459 [2024-11-17 02:57:47.807322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.459 [2024-11-17 02:57:47.807371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.459 qpair failed and we were unable to recover it. 
00:37:39.459 [2024-11-17 02:57:47.807721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.459 [2024-11-17 02:57:47.807780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.459 qpair failed and we were unable to recover it. 00:37:39.459 [2024-11-17 02:57:47.807961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.459 [2024-11-17 02:57:47.807999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.459 qpair failed and we were unable to recover it. 00:37:39.459 [2024-11-17 02:57:47.808114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.459 [2024-11-17 02:57:47.808168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.459 qpair failed and we were unable to recover it. 00:37:39.459 [2024-11-17 02:57:47.808271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.459 [2024-11-17 02:57:47.808305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.459 qpair failed and we were unable to recover it. 00:37:39.459 [2024-11-17 02:57:47.808446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.459 [2024-11-17 02:57:47.808479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.459 qpair failed and we were unable to recover it. 
00:37:39.459 [2024-11-17 02:57:47.808631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.459 [2024-11-17 02:57:47.808678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.459 qpair failed and we were unable to recover it. 00:37:39.459 [2024-11-17 02:57:47.808800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.459 [2024-11-17 02:57:47.808852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.459 qpair failed and we were unable to recover it. 00:37:39.459 [2024-11-17 02:57:47.809003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.459 [2024-11-17 02:57:47.809040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.459 qpair failed and we were unable to recover it. 00:37:39.459 [2024-11-17 02:57:47.809206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.459 [2024-11-17 02:57:47.809241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.459 qpair failed and we were unable to recover it. 00:37:39.459 [2024-11-17 02:57:47.809350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.459 [2024-11-17 02:57:47.809403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.459 qpair failed and we were unable to recover it. 
00:37:39.459 [2024-11-17 02:57:47.809553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.459 [2024-11-17 02:57:47.809591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.459 qpair failed and we were unable to recover it. 00:37:39.459 [2024-11-17 02:57:47.809706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.459 [2024-11-17 02:57:47.809742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.459 qpair failed and we were unable to recover it. 00:37:39.459 [2024-11-17 02:57:47.809890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.459 [2024-11-17 02:57:47.809928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.459 qpair failed and we were unable to recover it. 00:37:39.459 [2024-11-17 02:57:47.810077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.459 [2024-11-17 02:57:47.810121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.459 qpair failed and we were unable to recover it. 00:37:39.459 [2024-11-17 02:57:47.810272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.459 [2024-11-17 02:57:47.810304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.459 qpair failed and we were unable to recover it. 
00:37:39.459 [2024-11-17 02:57:47.810441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.459 [2024-11-17 02:57:47.810474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.459 qpair failed and we were unable to recover it. 00:37:39.459 [2024-11-17 02:57:47.810583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.459 [2024-11-17 02:57:47.810634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.459 qpair failed and we were unable to recover it. 00:37:39.459 [2024-11-17 02:57:47.810781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.459 [2024-11-17 02:57:47.810817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.459 qpair failed and we were unable to recover it. 00:37:39.459 [2024-11-17 02:57:47.811021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.459 [2024-11-17 02:57:47.811071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.459 qpair failed and we were unable to recover it. 00:37:39.459 [2024-11-17 02:57:47.811261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.459 [2024-11-17 02:57:47.811310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.459 qpair failed and we were unable to recover it. 
00:37:39.459 [2024-11-17 02:57:47.811537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.459 [2024-11-17 02:57:47.811575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.459 qpair failed and we were unable to recover it. 00:37:39.459 [2024-11-17 02:57:47.811826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.459 [2024-11-17 02:57:47.811866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.459 qpair failed and we were unable to recover it. 00:37:39.459 [2024-11-17 02:57:47.812015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.459 [2024-11-17 02:57:47.812059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.459 qpair failed and we were unable to recover it. 00:37:39.459 [2024-11-17 02:57:47.812254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.459 [2024-11-17 02:57:47.812304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.459 qpair failed and we were unable to recover it. 00:37:39.459 [2024-11-17 02:57:47.812531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.459 [2024-11-17 02:57:47.812601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.459 qpair failed and we were unable to recover it. 
00:37:39.459 [2024-11-17 02:57:47.812780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.459 [2024-11-17 02:57:47.812839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.460 qpair failed and we were unable to recover it. 00:37:39.460 [2024-11-17 02:57:47.812968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.460 [2024-11-17 02:57:47.813003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.460 qpair failed and we were unable to recover it. 00:37:39.460 [2024-11-17 02:57:47.813172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.460 [2024-11-17 02:57:47.813209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.460 qpair failed and we were unable to recover it. 00:37:39.460 [2024-11-17 02:57:47.813365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.460 [2024-11-17 02:57:47.813405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.460 qpair failed and we were unable to recover it. 00:37:39.460 [2024-11-17 02:57:47.813620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.460 [2024-11-17 02:57:47.813694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.460 qpair failed and we were unable to recover it. 
00:37:39.460 [2024-11-17 02:57:47.813905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.460 [2024-11-17 02:57:47.813943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.460 qpair failed and we were unable to recover it. 00:37:39.460 [2024-11-17 02:57:47.814109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.460 [2024-11-17 02:57:47.814145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.460 qpair failed and we were unable to recover it. 00:37:39.460 [2024-11-17 02:57:47.814259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.460 [2024-11-17 02:57:47.814295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.460 qpair failed and we were unable to recover it. 00:37:39.460 [2024-11-17 02:57:47.814435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.460 [2024-11-17 02:57:47.814489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.460 qpair failed and we were unable to recover it. 00:37:39.460 [2024-11-17 02:57:47.814724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.460 [2024-11-17 02:57:47.814764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.460 qpair failed and we were unable to recover it. 
00:37:39.460 [2024-11-17 02:57:47.814920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.460 [2024-11-17 02:57:47.814954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.460 qpair failed and we were unable to recover it. 00:37:39.460 [2024-11-17 02:57:47.815123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.460 [2024-11-17 02:57:47.815158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.460 qpair failed and we were unable to recover it. 00:37:39.460 [2024-11-17 02:57:47.815336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.460 [2024-11-17 02:57:47.815384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.460 qpair failed and we were unable to recover it. 00:37:39.460 [2024-11-17 02:57:47.815559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.460 [2024-11-17 02:57:47.815623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.460 qpair failed and we were unable to recover it. 00:37:39.460 [2024-11-17 02:57:47.815828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.460 [2024-11-17 02:57:47.815885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.460 qpair failed and we were unable to recover it. 
00:37:39.460 [2024-11-17 02:57:47.816030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.460 [2024-11-17 02:57:47.816067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.460 qpair failed and we were unable to recover it.
00:37:39.460 [2024-11-17 02:57:47.816219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.460 [2024-11-17 02:57:47.816273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.460 qpair failed and we were unable to recover it.
00:37:39.460 [2024-11-17 02:57:47.816491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.460 [2024-11-17 02:57:47.816543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.460 qpair failed and we were unable to recover it.
00:37:39.460 [2024-11-17 02:57:47.816700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.460 [2024-11-17 02:57:47.816756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.460 qpair failed and we were unable to recover it.
00:37:39.460 [2024-11-17 02:57:47.816900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.460 [2024-11-17 02:57:47.816934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.460 qpair failed and we were unable to recover it.
00:37:39.460 [2024-11-17 02:57:47.817069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.460 [2024-11-17 02:57:47.817113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.460 qpair failed and we were unable to recover it.
00:37:39.460 [2024-11-17 02:57:47.817228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.460 [2024-11-17 02:57:47.817264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.460 qpair failed and we were unable to recover it.
00:37:39.460 [2024-11-17 02:57:47.817450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.460 [2024-11-17 02:57:47.817485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.460 qpair failed and we were unable to recover it.
00:37:39.460 [2024-11-17 02:57:47.817620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.460 [2024-11-17 02:57:47.817655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.460 qpair failed and we were unable to recover it.
00:37:39.460 [2024-11-17 02:57:47.817828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.460 [2024-11-17 02:57:47.817863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.460 qpair failed and we were unable to recover it.
00:37:39.460 [2024-11-17 02:57:47.817986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.460 [2024-11-17 02:57:47.818035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.460 qpair failed and we were unable to recover it.
00:37:39.460 [2024-11-17 02:57:47.818235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.460 [2024-11-17 02:57:47.818289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.460 qpair failed and we were unable to recover it.
00:37:39.460 [2024-11-17 02:57:47.818488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.460 [2024-11-17 02:57:47.818544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.460 qpair failed and we were unable to recover it.
00:37:39.460 [2024-11-17 02:57:47.818682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.460 [2024-11-17 02:57:47.818743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.460 qpair failed and we were unable to recover it.
00:37:39.460 [2024-11-17 02:57:47.818884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.460 [2024-11-17 02:57:47.818919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.460 qpair failed and we were unable to recover it.
00:37:39.460 [2024-11-17 02:57:47.819085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.460 [2024-11-17 02:57:47.819127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.460 qpair failed and we were unable to recover it.
00:37:39.460 [2024-11-17 02:57:47.819254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.460 [2024-11-17 02:57:47.819308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.460 qpair failed and we were unable to recover it.
00:37:39.460 [2024-11-17 02:57:47.819450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.460 [2024-11-17 02:57:47.819485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.460 qpair failed and we were unable to recover it.
00:37:39.460 [2024-11-17 02:57:47.819604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.460 [2024-11-17 02:57:47.819640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.460 qpair failed and we were unable to recover it.
00:37:39.460 [2024-11-17 02:57:47.819770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.460 [2024-11-17 02:57:47.819819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.460 qpair failed and we were unable to recover it.
00:37:39.460 [2024-11-17 02:57:47.819964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.460 [2024-11-17 02:57:47.820002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.460 qpair failed and we were unable to recover it.
00:37:39.461 [2024-11-17 02:57:47.820144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.461 [2024-11-17 02:57:47.820181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.461 qpair failed and we were unable to recover it.
00:37:39.461 [2024-11-17 02:57:47.820283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.461 [2024-11-17 02:57:47.820324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.461 qpair failed and we were unable to recover it.
00:37:39.461 [2024-11-17 02:57:47.820496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.461 [2024-11-17 02:57:47.820530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.461 qpair failed and we were unable to recover it.
00:37:39.461 [2024-11-17 02:57:47.820713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.461 [2024-11-17 02:57:47.820751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.461 qpair failed and we were unable to recover it.
00:37:39.461 [2024-11-17 02:57:47.820907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.461 [2024-11-17 02:57:47.820944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.461 qpair failed and we were unable to recover it.
00:37:39.461 [2024-11-17 02:57:47.821113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.461 [2024-11-17 02:57:47.821148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.461 qpair failed and we were unable to recover it.
00:37:39.461 [2024-11-17 02:57:47.821293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.461 [2024-11-17 02:57:47.821348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.461 qpair failed and we were unable to recover it.
00:37:39.461 [2024-11-17 02:57:47.821616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.461 [2024-11-17 02:57:47.821659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.461 qpair failed and we were unable to recover it.
00:37:39.461 [2024-11-17 02:57:47.821834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.461 [2024-11-17 02:57:47.821895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.461 qpair failed and we were unable to recover it.
00:37:39.461 [2024-11-17 02:57:47.822091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.461 [2024-11-17 02:57:47.822136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.461 qpair failed and we were unable to recover it.
00:37:39.461 [2024-11-17 02:57:47.822253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.461 [2024-11-17 02:57:47.822287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.461 qpair failed and we were unable to recover it.
00:37:39.461 [2024-11-17 02:57:47.822432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.461 [2024-11-17 02:57:47.822466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.461 qpair failed and we were unable to recover it.
00:37:39.461 [2024-11-17 02:57:47.822664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.461 [2024-11-17 02:57:47.822737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.461 qpair failed and we were unable to recover it.
00:37:39.461 [2024-11-17 02:57:47.822919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.461 [2024-11-17 02:57:47.822957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.461 qpair failed and we were unable to recover it.
00:37:39.461 [2024-11-17 02:57:47.823122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.461 [2024-11-17 02:57:47.823176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.461 qpair failed and we were unable to recover it.
00:37:39.461 [2024-11-17 02:57:47.823325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.461 [2024-11-17 02:57:47.823360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.461 qpair failed and we were unable to recover it.
00:37:39.461 [2024-11-17 02:57:47.823492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.461 [2024-11-17 02:57:47.823526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.461 qpair failed and we were unable to recover it.
00:37:39.461 [2024-11-17 02:57:47.823710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.461 [2024-11-17 02:57:47.823749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.461 qpair failed and we were unable to recover it.
00:37:39.461 [2024-11-17 02:57:47.823975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.461 [2024-11-17 02:57:47.824012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.461 qpair failed and we were unable to recover it.
00:37:39.461 [2024-11-17 02:57:47.824201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.461 [2024-11-17 02:57:47.824251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.461 qpair failed and we were unable to recover it.
00:37:39.461 [2024-11-17 02:57:47.824387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.461 [2024-11-17 02:57:47.824436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.461 qpair failed and we were unable to recover it.
00:37:39.461 [2024-11-17 02:57:47.824593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.461 [2024-11-17 02:57:47.824645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.461 qpair failed and we were unable to recover it.
00:37:39.461 [2024-11-17 02:57:47.824921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.461 [2024-11-17 02:57:47.824960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.461 qpair failed and we were unable to recover it.
00:37:39.461 [2024-11-17 02:57:47.825116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.461 [2024-11-17 02:57:47.825168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.461 qpair failed and we were unable to recover it.
00:37:39.461 [2024-11-17 02:57:47.825326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.461 [2024-11-17 02:57:47.825362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.461 qpair failed and we were unable to recover it.
00:37:39.461 [2024-11-17 02:57:47.825604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.461 [2024-11-17 02:57:47.825662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.461 qpair failed and we were unable to recover it.
00:37:39.461 [2024-11-17 02:57:47.825891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.461 [2024-11-17 02:57:47.825951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.461 qpair failed and we were unable to recover it.
00:37:39.461 [2024-11-17 02:57:47.826115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.461 [2024-11-17 02:57:47.826149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.461 qpair failed and we were unable to recover it.
00:37:39.461 [2024-11-17 02:57:47.826308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.461 [2024-11-17 02:57:47.826357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.461 qpair failed and we were unable to recover it.
00:37:39.461 [2024-11-17 02:57:47.826530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.461 [2024-11-17 02:57:47.826571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.461 qpair failed and we were unable to recover it.
00:37:39.461 [2024-11-17 02:57:47.826855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.461 [2024-11-17 02:57:47.826922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.461 qpair failed and we were unable to recover it.
00:37:39.461 [2024-11-17 02:57:47.827121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.461 [2024-11-17 02:57:47.827174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.461 qpair failed and we were unable to recover it.
00:37:39.461 [2024-11-17 02:57:47.827320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.461 [2024-11-17 02:57:47.827355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.461 qpair failed and we were unable to recover it.
00:37:39.461 [2024-11-17 02:57:47.827490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.461 [2024-11-17 02:57:47.827557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.461 qpair failed and we were unable to recover it.
00:37:39.461 [2024-11-17 02:57:47.827677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.461 [2024-11-17 02:57:47.827714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.461 qpair failed and we were unable to recover it.
00:37:39.461 [2024-11-17 02:57:47.827876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.461 [2024-11-17 02:57:47.827929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.461 qpair failed and we were unable to recover it.
00:37:39.461 [2024-11-17 02:57:47.828043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.461 [2024-11-17 02:57:47.828089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.461 qpair failed and we were unable to recover it.
00:37:39.461 [2024-11-17 02:57:47.828248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.461 [2024-11-17 02:57:47.828297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.461 qpair failed and we were unable to recover it.
00:37:39.462 [2024-11-17 02:57:47.828473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.462 [2024-11-17 02:57:47.828527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.462 qpair failed and we were unable to recover it.
00:37:39.462 [2024-11-17 02:57:47.828755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.462 [2024-11-17 02:57:47.828817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.462 qpair failed and we were unable to recover it.
00:37:39.462 [2024-11-17 02:57:47.828978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.462 [2024-11-17 02:57:47.829018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.462 qpair failed and we were unable to recover it.
00:37:39.462 [2024-11-17 02:57:47.829225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.462 [2024-11-17 02:57:47.829266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.462 qpair failed and we were unable to recover it.
00:37:39.462 [2024-11-17 02:57:47.829420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.462 [2024-11-17 02:57:47.829458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.462 qpair failed and we were unable to recover it.
00:37:39.462 [2024-11-17 02:57:47.829663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.462 [2024-11-17 02:57:47.829701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.462 qpair failed and we were unable to recover it.
00:37:39.462 [2024-11-17 02:57:47.829903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.462 [2024-11-17 02:57:47.829946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.462 qpair failed and we were unable to recover it.
00:37:39.462 [2024-11-17 02:57:47.830067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.462 [2024-11-17 02:57:47.830123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.462 qpair failed and we were unable to recover it.
00:37:39.462 [2024-11-17 02:57:47.830296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.462 [2024-11-17 02:57:47.830344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.462 qpair failed and we were unable to recover it.
00:37:39.462 [2024-11-17 02:57:47.830487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.462 [2024-11-17 02:57:47.830527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.462 qpair failed and we were unable to recover it.
00:37:39.462 [2024-11-17 02:57:47.830653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.462 [2024-11-17 02:57:47.830718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.462 qpair failed and we were unable to recover it.
00:37:39.462 [2024-11-17 02:57:47.830870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.462 [2024-11-17 02:57:47.830930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.462 qpair failed and we were unable to recover it.
00:37:39.462 [2024-11-17 02:57:47.831065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.462 [2024-11-17 02:57:47.831116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.462 qpair failed and we were unable to recover it.
00:37:39.462 [2024-11-17 02:57:47.831231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.462 [2024-11-17 02:57:47.831266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.462 qpair failed and we were unable to recover it.
00:37:39.462 [2024-11-17 02:57:47.831375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.462 [2024-11-17 02:57:47.831415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.462 qpair failed and we were unable to recover it.
00:37:39.462 [2024-11-17 02:57:47.831580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.462 [2024-11-17 02:57:47.831615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.462 qpair failed and we were unable to recover it.
00:37:39.462 [2024-11-17 02:57:47.831828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.462 [2024-11-17 02:57:47.831863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.462 qpair failed and we were unable to recover it.
00:37:39.462 [2024-11-17 02:57:47.832089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.462 [2024-11-17 02:57:47.832130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.462 qpair failed and we were unable to recover it.
00:37:39.462 [2024-11-17 02:57:47.832238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.462 [2024-11-17 02:57:47.832273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.462 qpair failed and we were unable to recover it.
00:37:39.462 [2024-11-17 02:57:47.832409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.462 [2024-11-17 02:57:47.832443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.462 qpair failed and we were unable to recover it.
00:37:39.462 [2024-11-17 02:57:47.832572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.462 [2024-11-17 02:57:47.832607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.462 qpair failed and we were unable to recover it.
00:37:39.462 [2024-11-17 02:57:47.832781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.462 [2024-11-17 02:57:47.832819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.462 qpair failed and we were unable to recover it.
00:37:39.462 [2024-11-17 02:57:47.832956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.462 [2024-11-17 02:57:47.832991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.462 qpair failed and we were unable to recover it.
00:37:39.462 [2024-11-17 02:57:47.833160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.462 [2024-11-17 02:57:47.833209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.462 qpair failed and we were unable to recover it.
00:37:39.462 [2024-11-17 02:57:47.833345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.462 [2024-11-17 02:57:47.833386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.462 qpair failed and we were unable to recover it.
00:37:39.462 [2024-11-17 02:57:47.833518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.462 [2024-11-17 02:57:47.833556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.462 qpair failed and we were unable to recover it.
00:37:39.462 [2024-11-17 02:57:47.833715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.462 [2024-11-17 02:57:47.833774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.462 qpair failed and we were unable to recover it.
00:37:39.462 [2024-11-17 02:57:47.833964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.462 [2024-11-17 02:57:47.834022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.462 qpair failed and we were unable to recover it.
00:37:39.462 [2024-11-17 02:57:47.834167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.462 [2024-11-17 02:57:47.834202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.462 qpair failed and we were unable to recover it.
00:37:39.462 [2024-11-17 02:57:47.834358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.462 [2024-11-17 02:57:47.834398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.462 qpair failed and we were unable to recover it.
00:37:39.462 [2024-11-17 02:57:47.834603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.462 [2024-11-17 02:57:47.834642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.462 qpair failed and we were unable to recover it.
00:37:39.462 [2024-11-17 02:57:47.834780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.462 [2024-11-17 02:57:47.834818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.462 qpair failed and we were unable to recover it.
00:37:39.462 [2024-11-17 02:57:47.834965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.462 [2024-11-17 02:57:47.834998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.462 qpair failed and we were unable to recover it.
00:37:39.462 [2024-11-17 02:57:47.835166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.462 [2024-11-17 02:57:47.835200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.462 qpair failed and we were unable to recover it.
00:37:39.462 [2024-11-17 02:57:47.835352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.462 [2024-11-17 02:57:47.835401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.462 qpair failed and we were unable to recover it.
00:37:39.462 [2024-11-17 02:57:47.835596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.462 [2024-11-17 02:57:47.835652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.462 qpair failed and we were unable to recover it.
00:37:39.462 [2024-11-17 02:57:47.835871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.462 [2024-11-17 02:57:47.835926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.462 qpair failed and we were unable to recover it.
00:37:39.462 [2024-11-17 02:57:47.836065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.462 [2024-11-17 02:57:47.836107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.463 qpair failed and we were unable to recover it.
00:37:39.463 [2024-11-17 02:57:47.836245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.463 [2024-11-17 02:57:47.836307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.463 qpair failed and we were unable to recover it.
00:37:39.463 [2024-11-17 02:57:47.836465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.463 [2024-11-17 02:57:47.836503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.463 qpair failed and we were unable to recover it.
00:37:39.463 [2024-11-17 02:57:47.836618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.463 [2024-11-17 02:57:47.836667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.463 qpair failed and we were unable to recover it.
00:37:39.463 [2024-11-17 02:57:47.836848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.463 [2024-11-17 02:57:47.836885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.463 qpair failed and we were unable to recover it.
00:37:39.463 [2024-11-17 02:57:47.837002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.463 [2024-11-17 02:57:47.837040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.463 qpair failed and we were unable to recover it.
00:37:39.463 [2024-11-17 02:57:47.837216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.463 [2024-11-17 02:57:47.837270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.463 qpair failed and we were unable to recover it.
00:37:39.463 [2024-11-17 02:57:47.837435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.463 [2024-11-17 02:57:47.837485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.463 qpair failed and we were unable to recover it.
00:37:39.463 [2024-11-17 02:57:47.837635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.463 [2024-11-17 02:57:47.837673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.463 qpair failed and we were unable to recover it.
00:37:39.463 [2024-11-17 02:57:47.837820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.463 [2024-11-17 02:57:47.837858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.463 qpair failed and we were unable to recover it.
00:37:39.463 [2024-11-17 02:57:47.838017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.463 [2024-11-17 02:57:47.838056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.463 qpair failed and we were unable to recover it.
00:37:39.463 [2024-11-17 02:57:47.838251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.463 [2024-11-17 02:57:47.838300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.463 qpair failed and we were unable to recover it.
00:37:39.463 [2024-11-17 02:57:47.838427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.463 [2024-11-17 02:57:47.838467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.463 qpair failed and we were unable to recover it.
00:37:39.463 [2024-11-17 02:57:47.838641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.463 [2024-11-17 02:57:47.838680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.463 qpair failed and we were unable to recover it.
00:37:39.463 [2024-11-17 02:57:47.838856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.463 [2024-11-17 02:57:47.838893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.463 qpair failed and we were unable to recover it.
00:37:39.463 [2024-11-17 02:57:47.839016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.463 [2024-11-17 02:57:47.839053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.463 qpair failed and we were unable to recover it.
00:37:39.463 [2024-11-17 02:57:47.839219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.463 [2024-11-17 02:57:47.839254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.463 qpair failed and we were unable to recover it.
00:37:39.463 [2024-11-17 02:57:47.839363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.463 [2024-11-17 02:57:47.839396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.463 qpair failed and we were unable to recover it.
00:37:39.463 [2024-11-17 02:57:47.839513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.463 [2024-11-17 02:57:47.839550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.463 qpair failed and we were unable to recover it.
00:37:39.463 [2024-11-17 02:57:47.839676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.463 [2024-11-17 02:57:47.839710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.463 qpair failed and we were unable to recover it. 00:37:39.463 [2024-11-17 02:57:47.839887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.463 [2024-11-17 02:57:47.839928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.463 qpair failed and we were unable to recover it. 00:37:39.463 [2024-11-17 02:57:47.840068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.463 [2024-11-17 02:57:47.840131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.463 qpair failed and we were unable to recover it. 00:37:39.463 [2024-11-17 02:57:47.840270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.463 [2024-11-17 02:57:47.840309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.463 qpair failed and we were unable to recover it. 00:37:39.463 [2024-11-17 02:57:47.840460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.463 [2024-11-17 02:57:47.840499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.463 qpair failed and we were unable to recover it. 
00:37:39.463 [2024-11-17 02:57:47.840648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.463 [2024-11-17 02:57:47.840686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.463 qpair failed and we were unable to recover it. 00:37:39.463 [2024-11-17 02:57:47.840880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.463 [2024-11-17 02:57:47.840947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.463 qpair failed and we were unable to recover it. 00:37:39.463 [2024-11-17 02:57:47.841064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.463 [2024-11-17 02:57:47.841109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.463 qpair failed and we were unable to recover it. 00:37:39.463 [2024-11-17 02:57:47.841219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.463 [2024-11-17 02:57:47.841272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.463 qpair failed and we were unable to recover it. 00:37:39.463 [2024-11-17 02:57:47.841455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.463 [2024-11-17 02:57:47.841493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.463 qpair failed and we were unable to recover it. 
00:37:39.463 [2024-11-17 02:57:47.841608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.463 [2024-11-17 02:57:47.841647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.463 qpair failed and we were unable to recover it. 00:37:39.463 [2024-11-17 02:57:47.841811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.463 [2024-11-17 02:57:47.841869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.463 qpair failed and we were unable to recover it. 00:37:39.463 [2024-11-17 02:57:47.842058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.463 [2024-11-17 02:57:47.842110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.463 qpair failed and we were unable to recover it. 00:37:39.463 [2024-11-17 02:57:47.842233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.463 [2024-11-17 02:57:47.842277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.463 qpair failed and we were unable to recover it. 00:37:39.463 [2024-11-17 02:57:47.842451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.463 [2024-11-17 02:57:47.842500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.463 qpair failed and we were unable to recover it. 
00:37:39.463 [2024-11-17 02:57:47.842629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.463 [2024-11-17 02:57:47.842667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.463 qpair failed and we were unable to recover it. 00:37:39.463 [2024-11-17 02:57:47.842835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.463 [2024-11-17 02:57:47.842903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.463 qpair failed and we were unable to recover it. 00:37:39.463 [2024-11-17 02:57:47.843009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.463 [2024-11-17 02:57:47.843044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.463 qpair failed and we were unable to recover it. 00:37:39.463 [2024-11-17 02:57:47.843217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.463 [2024-11-17 02:57:47.843266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.463 qpair failed and we were unable to recover it. 00:37:39.463 [2024-11-17 02:57:47.843532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.464 [2024-11-17 02:57:47.843596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.464 qpair failed and we were unable to recover it. 
00:37:39.464 [2024-11-17 02:57:47.843771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.464 [2024-11-17 02:57:47.843832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.464 qpair failed and we were unable to recover it. 00:37:39.464 [2024-11-17 02:57:47.844010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.464 [2024-11-17 02:57:47.844044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.464 qpair failed and we were unable to recover it. 00:37:39.464 [2024-11-17 02:57:47.844197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.464 [2024-11-17 02:57:47.844236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.464 qpair failed and we were unable to recover it. 00:37:39.464 [2024-11-17 02:57:47.844408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.464 [2024-11-17 02:57:47.844462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.464 qpair failed and we were unable to recover it. 00:37:39.464 [2024-11-17 02:57:47.844649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.464 [2024-11-17 02:57:47.844689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.464 qpair failed and we were unable to recover it. 
00:37:39.464 [2024-11-17 02:57:47.844904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.464 [2024-11-17 02:57:47.844961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.464 qpair failed and we were unable to recover it. 00:37:39.464 [2024-11-17 02:57:47.845122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.464 [2024-11-17 02:57:47.845174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.464 qpair failed and we were unable to recover it. 00:37:39.464 [2024-11-17 02:57:47.845332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.464 [2024-11-17 02:57:47.845386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.464 qpair failed and we were unable to recover it. 00:37:39.464 [2024-11-17 02:57:47.845597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.464 [2024-11-17 02:57:47.845652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.464 qpair failed and we were unable to recover it. 00:37:39.464 [2024-11-17 02:57:47.845767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.464 [2024-11-17 02:57:47.845802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.464 qpair failed and we were unable to recover it. 
00:37:39.464 [2024-11-17 02:57:47.845939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.464 [2024-11-17 02:57:47.845974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.464 qpair failed and we were unable to recover it. 00:37:39.464 [2024-11-17 02:57:47.846173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.464 [2024-11-17 02:57:47.846228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.464 qpair failed and we were unable to recover it. 00:37:39.464 [2024-11-17 02:57:47.846374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.464 [2024-11-17 02:57:47.846415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.464 qpair failed and we were unable to recover it. 00:37:39.464 [2024-11-17 02:57:47.846638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.464 [2024-11-17 02:57:47.846697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.464 qpair failed and we were unable to recover it. 00:37:39.464 [2024-11-17 02:57:47.846865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.464 [2024-11-17 02:57:47.846916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.464 qpair failed and we were unable to recover it. 
00:37:39.464 [2024-11-17 02:57:47.847044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.464 [2024-11-17 02:57:47.847078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.464 qpair failed and we were unable to recover it. 00:37:39.464 [2024-11-17 02:57:47.847193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.464 [2024-11-17 02:57:47.847230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.464 qpair failed and we were unable to recover it. 00:37:39.464 [2024-11-17 02:57:47.847377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.464 [2024-11-17 02:57:47.847415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.464 qpair failed and we were unable to recover it. 00:37:39.464 [2024-11-17 02:57:47.847572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.464 [2024-11-17 02:57:47.847611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.464 qpair failed and we were unable to recover it. 00:37:39.464 [2024-11-17 02:57:47.847783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.464 [2024-11-17 02:57:47.847840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.464 qpair failed and we were unable to recover it. 
00:37:39.464 [2024-11-17 02:57:47.848001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.464 [2024-11-17 02:57:47.848038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.464 qpair failed and we were unable to recover it. 00:37:39.464 [2024-11-17 02:57:47.848197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.464 [2024-11-17 02:57:47.848247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.464 qpair failed and we were unable to recover it. 00:37:39.464 [2024-11-17 02:57:47.848390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.464 [2024-11-17 02:57:47.848427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.464 qpair failed and we were unable to recover it. 00:37:39.464 [2024-11-17 02:57:47.848568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.464 [2024-11-17 02:57:47.848604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.464 qpair failed and we were unable to recover it. 00:37:39.464 [2024-11-17 02:57:47.848854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.464 [2024-11-17 02:57:47.848911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.464 qpair failed and we were unable to recover it. 
00:37:39.464 [2024-11-17 02:57:47.849035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.464 [2024-11-17 02:57:47.849087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.464 qpair failed and we were unable to recover it. 00:37:39.464 [2024-11-17 02:57:47.849264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.464 [2024-11-17 02:57:47.849299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.464 qpair failed and we were unable to recover it. 00:37:39.464 [2024-11-17 02:57:47.849461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.464 [2024-11-17 02:57:47.849515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.464 qpair failed and we were unable to recover it. 00:37:39.464 [2024-11-17 02:57:47.849685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.464 [2024-11-17 02:57:47.849748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.464 qpair failed and we were unable to recover it. 00:37:39.464 [2024-11-17 02:57:47.849873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.464 [2024-11-17 02:57:47.849907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.464 qpair failed and we were unable to recover it. 
00:37:39.464 [2024-11-17 02:57:47.850079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.464 [2024-11-17 02:57:47.850128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.464 qpair failed and we were unable to recover it. 00:37:39.464 [2024-11-17 02:57:47.850277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.464 [2024-11-17 02:57:47.850330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.464 qpair failed and we were unable to recover it. 00:37:39.464 [2024-11-17 02:57:47.850467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.464 [2024-11-17 02:57:47.850521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.464 qpair failed and we were unable to recover it. 00:37:39.464 [2024-11-17 02:57:47.850711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.465 [2024-11-17 02:57:47.850768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.465 qpair failed and we were unable to recover it. 00:37:39.465 [2024-11-17 02:57:47.850994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.465 [2024-11-17 02:57:47.851052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.465 qpair failed and we were unable to recover it. 
00:37:39.465 [2024-11-17 02:57:47.851218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.465 [2024-11-17 02:57:47.851254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.465 qpair failed and we were unable to recover it. 00:37:39.465 [2024-11-17 02:57:47.851377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.465 [2024-11-17 02:57:47.851415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.465 qpair failed and we were unable to recover it. 00:37:39.465 [2024-11-17 02:57:47.851569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.465 [2024-11-17 02:57:47.851606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.465 qpair failed and we were unable to recover it. 00:37:39.465 [2024-11-17 02:57:47.851739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.465 [2024-11-17 02:57:47.851792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.465 qpair failed and we were unable to recover it. 00:37:39.465 [2024-11-17 02:57:47.851957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.465 [2024-11-17 02:57:47.851992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.465 qpair failed and we were unable to recover it. 
00:37:39.465 [2024-11-17 02:57:47.852185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.465 [2024-11-17 02:57:47.852234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.465 qpair failed and we were unable to recover it. 00:37:39.465 [2024-11-17 02:57:47.852412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.465 [2024-11-17 02:57:47.852449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.465 qpair failed and we were unable to recover it. 00:37:39.465 [2024-11-17 02:57:47.852589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.465 [2024-11-17 02:57:47.852647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.465 qpair failed and we were unable to recover it. 00:37:39.465 [2024-11-17 02:57:47.852818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.465 [2024-11-17 02:57:47.852878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.465 qpair failed and we were unable to recover it. 00:37:39.465 [2024-11-17 02:57:47.853036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.465 [2024-11-17 02:57:47.853070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.465 qpair failed and we were unable to recover it. 
00:37:39.465 [2024-11-17 02:57:47.853273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.465 [2024-11-17 02:57:47.853323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.465 qpair failed and we were unable to recover it. 00:37:39.465 [2024-11-17 02:57:47.853511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.465 [2024-11-17 02:57:47.853573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.465 qpair failed and we were unable to recover it. 00:37:39.465 [2024-11-17 02:57:47.853792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.465 [2024-11-17 02:57:47.853854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.465 qpair failed and we were unable to recover it. 00:37:39.465 [2024-11-17 02:57:47.853983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.465 [2024-11-17 02:57:47.854023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.465 qpair failed and we were unable to recover it. 00:37:39.465 [2024-11-17 02:57:47.854167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.465 [2024-11-17 02:57:47.854203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.465 qpair failed and we were unable to recover it. 
00:37:39.465 [2024-11-17 02:57:47.854331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.465 [2024-11-17 02:57:47.854379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.465 qpair failed and we were unable to recover it. 00:37:39.465 [2024-11-17 02:57:47.854559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.465 [2024-11-17 02:57:47.854600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.465 qpair failed and we were unable to recover it. 00:37:39.465 [2024-11-17 02:57:47.854785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.465 [2024-11-17 02:57:47.854846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.465 qpair failed and we were unable to recover it. 00:37:39.465 [2024-11-17 02:57:47.855010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.465 [2024-11-17 02:57:47.855044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.465 qpair failed and we were unable to recover it. 00:37:39.465 [2024-11-17 02:57:47.855201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.465 [2024-11-17 02:57:47.855249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.465 qpair failed and we were unable to recover it. 
00:37:39.465 [2024-11-17 02:57:47.855383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.465 [2024-11-17 02:57:47.855431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.465 qpair failed and we were unable to recover it. 00:37:39.465 [2024-11-17 02:57:47.855571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.465 [2024-11-17 02:57:47.855609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.465 qpair failed and we were unable to recover it. 00:37:39.465 [2024-11-17 02:57:47.855796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.465 [2024-11-17 02:57:47.855855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.465 qpair failed and we were unable to recover it. 00:37:39.465 [2024-11-17 02:57:47.856039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.465 [2024-11-17 02:57:47.856078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.465 qpair failed and we were unable to recover it. 00:37:39.465 [2024-11-17 02:57:47.856276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.465 [2024-11-17 02:57:47.856313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.465 qpair failed and we were unable to recover it. 
00:37:39.465 [2024-11-17 02:57:47.856444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.465 [2024-11-17 02:57:47.856502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.465 qpair failed and we were unable to recover it. 00:37:39.465 [2024-11-17 02:57:47.856754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.465 [2024-11-17 02:57:47.856809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.465 qpair failed and we were unable to recover it. 00:37:39.465 [2024-11-17 02:57:47.856950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.465 [2024-11-17 02:57:47.856984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.465 qpair failed and we were unable to recover it. 00:37:39.465 [2024-11-17 02:57:47.857100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.465 [2024-11-17 02:57:47.857137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.465 qpair failed and we were unable to recover it. 00:37:39.465 [2024-11-17 02:57:47.857290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.465 [2024-11-17 02:57:47.857338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.465 qpair failed and we were unable to recover it. 
00:37:39.465 [2024-11-17 02:57:47.857478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.465 [2024-11-17 02:57:47.857533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.465 qpair failed and we were unable to recover it.
00:37:39.465 [2024-11-17 02:57:47.857710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.465 [2024-11-17 02:57:47.857766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.465 qpair failed and we were unable to recover it.
00:37:39.465 [2024-11-17 02:57:47.857958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.465 [2024-11-17 02:57:47.857991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.465 qpair failed and we were unable to recover it.
00:37:39.465 [2024-11-17 02:57:47.858148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.465 [2024-11-17 02:57:47.858183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.465 qpair failed and we were unable to recover it.
00:37:39.465 [2024-11-17 02:57:47.858316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.465 [2024-11-17 02:57:47.858351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.465 qpair failed and we were unable to recover it.
00:37:39.465 [2024-11-17 02:57:47.858509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.465 [2024-11-17 02:57:47.858564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.465 qpair failed and we were unable to recover it.
00:37:39.466 [2024-11-17 02:57:47.858716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.466 [2024-11-17 02:57:47.858770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.466 qpair failed and we were unable to recover it.
00:37:39.466 [2024-11-17 02:57:47.858919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.466 [2024-11-17 02:57:47.858969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.466 qpair failed and we were unable to recover it.
00:37:39.466 [2024-11-17 02:57:47.859144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.466 [2024-11-17 02:57:47.859180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.466 qpair failed and we were unable to recover it.
00:37:39.466 [2024-11-17 02:57:47.859325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.466 [2024-11-17 02:57:47.859375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.466 qpair failed and we were unable to recover it.
00:37:39.466 [2024-11-17 02:57:47.859514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.466 [2024-11-17 02:57:47.859568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.466 qpair failed and we were unable to recover it.
00:37:39.466 [2024-11-17 02:57:47.859749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.466 [2024-11-17 02:57:47.859801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.466 qpair failed and we were unable to recover it.
00:37:39.466 [2024-11-17 02:57:47.859938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.466 [2024-11-17 02:57:47.859973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.466 qpair failed and we were unable to recover it.
00:37:39.466 [2024-11-17 02:57:47.860105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.466 [2024-11-17 02:57:47.860140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.466 qpair failed and we were unable to recover it.
00:37:39.466 [2024-11-17 02:57:47.860297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.466 [2024-11-17 02:57:47.860352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.466 qpair failed and we were unable to recover it.
00:37:39.466 [2024-11-17 02:57:47.860506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.466 [2024-11-17 02:57:47.860546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.466 qpair failed and we were unable to recover it.
00:37:39.466 [2024-11-17 02:57:47.860685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.466 [2024-11-17 02:57:47.860758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.466 qpair failed and we were unable to recover it.
00:37:39.466 [2024-11-17 02:57:47.860902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.466 [2024-11-17 02:57:47.860959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.466 qpair failed and we were unable to recover it.
00:37:39.466 [2024-11-17 02:57:47.861117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.466 [2024-11-17 02:57:47.861168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.466 qpair failed and we were unable to recover it.
00:37:39.466 [2024-11-17 02:57:47.861273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.466 [2024-11-17 02:57:47.861307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.466 qpair failed and we were unable to recover it.
00:37:39.466 [2024-11-17 02:57:47.861475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.466 [2024-11-17 02:57:47.861513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.466 qpair failed and we were unable to recover it.
00:37:39.466 [2024-11-17 02:57:47.861662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.466 [2024-11-17 02:57:47.861698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.466 qpair failed and we were unable to recover it.
00:37:39.466 [2024-11-17 02:57:47.861850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.466 [2024-11-17 02:57:47.861894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.466 qpair failed and we were unable to recover it.
00:37:39.466 [2024-11-17 02:57:47.862022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.466 [2024-11-17 02:57:47.862058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.466 qpair failed and we were unable to recover it.
00:37:39.466 [2024-11-17 02:57:47.862203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.466 [2024-11-17 02:57:47.862238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.466 qpair failed and we were unable to recover it.
00:37:39.466 [2024-11-17 02:57:47.862341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.466 [2024-11-17 02:57:47.862376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.466 qpair failed and we were unable to recover it.
00:37:39.466 [2024-11-17 02:57:47.862567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.466 [2024-11-17 02:57:47.862620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.466 qpair failed and we were unable to recover it.
00:37:39.466 [2024-11-17 02:57:47.862789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.466 [2024-11-17 02:57:47.862849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.466 qpair failed and we were unable to recover it.
00:37:39.466 [2024-11-17 02:57:47.863015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.466 [2024-11-17 02:57:47.863050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.466 qpair failed and we were unable to recover it.
00:37:39.466 [2024-11-17 02:57:47.863187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.466 [2024-11-17 02:57:47.863227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.466 qpair failed and we were unable to recover it.
00:37:39.466 [2024-11-17 02:57:47.863376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.466 [2024-11-17 02:57:47.863441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.466 qpair failed and we were unable to recover it.
00:37:39.466 [2024-11-17 02:57:47.863655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.466 [2024-11-17 02:57:47.863700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.466 qpair failed and we were unable to recover it.
00:37:39.466 [2024-11-17 02:57:47.864012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.466 [2024-11-17 02:57:47.864064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.466 qpair failed and we were unable to recover it.
00:37:39.466 [2024-11-17 02:57:47.864239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.466 [2024-11-17 02:57:47.864294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.466 qpair failed and we were unable to recover it.
00:37:39.466 [2024-11-17 02:57:47.864432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.466 [2024-11-17 02:57:47.864519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.466 qpair failed and we were unable to recover it.
00:37:39.466 [2024-11-17 02:57:47.864751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.466 [2024-11-17 02:57:47.864789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.466 qpair failed and we were unable to recover it.
00:37:39.466 [2024-11-17 02:57:47.864944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.466 [2024-11-17 02:57:47.864981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.466 qpair failed and we were unable to recover it.
00:37:39.466 [2024-11-17 02:57:47.865150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.466 [2024-11-17 02:57:47.865184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.466 qpair failed and we were unable to recover it.
00:37:39.466 [2024-11-17 02:57:47.865341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.466 [2024-11-17 02:57:47.865374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.466 qpair failed and we were unable to recover it.
00:37:39.466 [2024-11-17 02:57:47.865586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.466 [2024-11-17 02:57:47.865664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.466 qpair failed and we were unable to recover it.
00:37:39.466 [2024-11-17 02:57:47.865839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.466 [2024-11-17 02:57:47.865905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.466 qpair failed and we were unable to recover it.
00:37:39.466 [2024-11-17 02:57:47.866067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.466 [2024-11-17 02:57:47.866113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.466 qpair failed and we were unable to recover it.
00:37:39.466 [2024-11-17 02:57:47.866235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.466 [2024-11-17 02:57:47.866270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.466 qpair failed and we were unable to recover it.
00:37:39.466 [2024-11-17 02:57:47.866410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.467 [2024-11-17 02:57:47.866446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.467 qpair failed and we were unable to recover it.
00:37:39.467 [2024-11-17 02:57:47.866563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.467 [2024-11-17 02:57:47.866601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.467 qpair failed and we were unable to recover it.
00:37:39.467 [2024-11-17 02:57:47.866753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.467 [2024-11-17 02:57:47.866792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.467 qpair failed and we were unable to recover it.
00:37:39.467 [2024-11-17 02:57:47.866980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.467 [2024-11-17 02:57:47.867035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.467 qpair failed and we were unable to recover it.
00:37:39.467 [2024-11-17 02:57:47.867226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.467 [2024-11-17 02:57:47.867275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.467 qpair failed and we were unable to recover it.
00:37:39.467 [2024-11-17 02:57:47.867433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.467 [2024-11-17 02:57:47.867471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.467 qpair failed and we were unable to recover it.
00:37:39.467 [2024-11-17 02:57:47.867621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.467 [2024-11-17 02:57:47.867660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.467 qpair failed and we were unable to recover it.
00:37:39.467 [2024-11-17 02:57:47.867930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.467 [2024-11-17 02:57:47.867989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.467 qpair failed and we were unable to recover it.
00:37:39.743 [2024-11-17 02:57:47.868128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.743 [2024-11-17 02:57:47.868161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.743 qpair failed and we were unable to recover it.
00:37:39.743 [2024-11-17 02:57:47.868277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.743 [2024-11-17 02:57:47.868311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.743 qpair failed and we were unable to recover it.
00:37:39.743 [2024-11-17 02:57:47.868471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.743 [2024-11-17 02:57:47.868515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.743 qpair failed and we were unable to recover it.
00:37:39.743 [2024-11-17 02:57:47.868733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.743 [2024-11-17 02:57:47.868774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.743 qpair failed and we were unable to recover it.
00:37:39.743 [2024-11-17 02:57:47.868890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.743 [2024-11-17 02:57:47.868929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.743 qpair failed and we were unable to recover it.
00:37:39.743 [2024-11-17 02:57:47.869091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.743 [2024-11-17 02:57:47.869151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.743 qpair failed and we were unable to recover it.
00:37:39.743 [2024-11-17 02:57:47.869289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.743 [2024-11-17 02:57:47.869324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.743 qpair failed and we were unable to recover it.
00:37:39.743 [2024-11-17 02:57:47.869488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.743 [2024-11-17 02:57:47.869536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.743 qpair failed and we were unable to recover it.
00:37:39.743 [2024-11-17 02:57:47.869685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.743 [2024-11-17 02:57:47.869724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.743 qpair failed and we were unable to recover it.
00:37:39.743 [2024-11-17 02:57:47.869843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.743 [2024-11-17 02:57:47.869881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.743 qpair failed and we were unable to recover it.
00:37:39.743 [2024-11-17 02:57:47.870032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.743 [2024-11-17 02:57:47.870066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.743 qpair failed and we were unable to recover it.
00:37:39.743 [2024-11-17 02:57:47.870216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.743 [2024-11-17 02:57:47.870251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.743 qpair failed and we were unable to recover it.
00:37:39.743 [2024-11-17 02:57:47.870394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.743 [2024-11-17 02:57:47.870428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.743 qpair failed and we were unable to recover it.
00:37:39.743 [2024-11-17 02:57:47.870604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.743 [2024-11-17 02:57:47.870641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.743 qpair failed and we were unable to recover it.
00:37:39.743 [2024-11-17 02:57:47.870794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.743 [2024-11-17 02:57:47.870832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.743 qpair failed and we were unable to recover it.
00:37:39.743 [2024-11-17 02:57:47.870949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.743 [2024-11-17 02:57:47.870988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.743 qpair failed and we were unable to recover it.
00:37:39.743 [2024-11-17 02:57:47.871117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.743 [2024-11-17 02:57:47.871152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.743 qpair failed and we were unable to recover it.
00:37:39.743 [2024-11-17 02:57:47.871287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.743 [2024-11-17 02:57:47.871321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.743 qpair failed and we were unable to recover it.
00:37:39.743 [2024-11-17 02:57:47.871435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.744 [2024-11-17 02:57:47.871469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.744 qpair failed and we were unable to recover it.
00:37:39.744 [2024-11-17 02:57:47.871658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.744 [2024-11-17 02:57:47.871696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.744 qpair failed and we were unable to recover it.
00:37:39.744 [2024-11-17 02:57:47.871843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.744 [2024-11-17 02:57:47.871885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.744 qpair failed and we were unable to recover it.
00:37:39.744 [2024-11-17 02:57:47.872050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.744 [2024-11-17 02:57:47.872090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.744 qpair failed and we were unable to recover it.
00:37:39.744 [2024-11-17 02:57:47.872256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.744 [2024-11-17 02:57:47.872296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.744 qpair failed and we were unable to recover it.
00:37:39.744 [2024-11-17 02:57:47.872484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.744 [2024-11-17 02:57:47.872523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.744 qpair failed and we were unable to recover it.
00:37:39.744 [2024-11-17 02:57:47.872669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.744 [2024-11-17 02:57:47.872706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.744 qpair failed and we were unable to recover it.
00:37:39.744 [2024-11-17 02:57:47.872858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.744 [2024-11-17 02:57:47.872896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.744 qpair failed and we were unable to recover it.
00:37:39.744 [2024-11-17 02:57:47.873044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.744 [2024-11-17 02:57:47.873089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.744 qpair failed and we were unable to recover it.
00:37:39.744 [2024-11-17 02:57:47.873259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.744 [2024-11-17 02:57:47.873292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.744 qpair failed and we were unable to recover it.
00:37:39.744 [2024-11-17 02:57:47.873460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.744 [2024-11-17 02:57:47.873511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.744 qpair failed and we were unable to recover it.
00:37:39.744 [2024-11-17 02:57:47.873657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.744 [2024-11-17 02:57:47.873696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.744 qpair failed and we were unable to recover it.
00:37:39.744 [2024-11-17 02:57:47.873824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.744 [2024-11-17 02:57:47.873862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.744 qpair failed and we were unable to recover it.
00:37:39.744 [2024-11-17 02:57:47.873975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.744 [2024-11-17 02:57:47.874025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.744 qpair failed and we were unable to recover it.
00:37:39.744 [2024-11-17 02:57:47.874152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.744 [2024-11-17 02:57:47.874187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.744 qpair failed and we were unable to recover it.
00:37:39.744 [2024-11-17 02:57:47.874287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.744 [2024-11-17 02:57:47.874321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.744 qpair failed and we were unable to recover it.
00:37:39.744 [2024-11-17 02:57:47.874518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.744 [2024-11-17 02:57:47.874555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.744 qpair failed and we were unable to recover it.
00:37:39.744 [2024-11-17 02:57:47.874784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.744 [2024-11-17 02:57:47.874821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.744 qpair failed and we were unable to recover it.
00:37:39.744 [2024-11-17 02:57:47.874942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.744 [2024-11-17 02:57:47.874980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.744 qpair failed and we were unable to recover it.
00:37:39.744 [2024-11-17 02:57:47.875080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.744 [2024-11-17 02:57:47.875126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.744 qpair failed and we were unable to recover it.
00:37:39.744 [2024-11-17 02:57:47.875271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.744 [2024-11-17 02:57:47.875310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.744 qpair failed and we were unable to recover it.
00:37:39.744 [2024-11-17 02:57:47.875461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.744 [2024-11-17 02:57:47.875498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.744 qpair failed and we were unable to recover it.
00:37:39.744 [2024-11-17 02:57:47.875670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.744 [2024-11-17 02:57:47.875707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.744 qpair failed and we were unable to recover it.
00:37:39.744 [2024-11-17 02:57:47.875812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.744 [2024-11-17 02:57:47.875849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.744 qpair failed and we were unable to recover it.
00:37:39.744 [2024-11-17 02:57:47.876017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.744 [2024-11-17 02:57:47.876072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.744 qpair failed and we were unable to recover it.
00:37:39.744 [2024-11-17 02:57:47.876249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.744 [2024-11-17 02:57:47.876298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.744 qpair failed and we were unable to recover it.
00:37:39.744 [2024-11-17 02:57:47.876506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.744 [2024-11-17 02:57:47.876554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.744 qpair failed and we were unable to recover it. 00:37:39.744 [2024-11-17 02:57:47.876767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.744 [2024-11-17 02:57:47.876830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.744 qpair failed and we were unable to recover it. 00:37:39.744 [2024-11-17 02:57:47.877006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.744 [2024-11-17 02:57:47.877044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.744 qpair failed and we were unable to recover it. 00:37:39.744 [2024-11-17 02:57:47.877194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.744 [2024-11-17 02:57:47.877228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.744 qpair failed and we were unable to recover it. 00:37:39.744 [2024-11-17 02:57:47.877386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.744 [2024-11-17 02:57:47.877432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.744 qpair failed and we were unable to recover it. 
00:37:39.744 [2024-11-17 02:57:47.877696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.744 [2024-11-17 02:57:47.877753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.744 qpair failed and we were unable to recover it. 00:37:39.744 [2024-11-17 02:57:47.877872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.744 [2024-11-17 02:57:47.877909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.744 qpair failed and we were unable to recover it. 00:37:39.744 [2024-11-17 02:57:47.878064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.744 [2024-11-17 02:57:47.878104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.744 qpair failed and we were unable to recover it. 00:37:39.744 [2024-11-17 02:57:47.878208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.744 [2024-11-17 02:57:47.878243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.744 qpair failed and we were unable to recover it. 00:37:39.745 [2024-11-17 02:57:47.878426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.745 [2024-11-17 02:57:47.878494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.745 qpair failed and we were unable to recover it. 
00:37:39.745 [2024-11-17 02:57:47.878758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.745 [2024-11-17 02:57:47.878820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.745 qpair failed and we were unable to recover it. 00:37:39.745 [2024-11-17 02:57:47.878991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.745 [2024-11-17 02:57:47.879030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.745 qpair failed and we were unable to recover it. 00:37:39.745 [2024-11-17 02:57:47.879198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.745 [2024-11-17 02:57:47.879244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.745 qpair failed and we were unable to recover it. 00:37:39.745 [2024-11-17 02:57:47.879353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.745 [2024-11-17 02:57:47.879407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.745 qpair failed and we were unable to recover it. 00:37:39.745 [2024-11-17 02:57:47.879578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.745 [2024-11-17 02:57:47.879616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.745 qpair failed and we were unable to recover it. 
00:37:39.745 [2024-11-17 02:57:47.879818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.745 [2024-11-17 02:57:47.879879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.745 qpair failed and we were unable to recover it. 00:37:39.745 [2024-11-17 02:57:47.880037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.745 [2024-11-17 02:57:47.880072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.745 qpair failed and we were unable to recover it. 00:37:39.745 [2024-11-17 02:57:47.880231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.745 [2024-11-17 02:57:47.880280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.745 qpair failed and we were unable to recover it. 00:37:39.745 [2024-11-17 02:57:47.880446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.745 [2024-11-17 02:57:47.880487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.745 qpair failed and we were unable to recover it. 00:37:39.745 [2024-11-17 02:57:47.880683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.745 [2024-11-17 02:57:47.880781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.745 qpair failed and we were unable to recover it. 
00:37:39.745 [2024-11-17 02:57:47.880920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.745 [2024-11-17 02:57:47.880959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.745 qpair failed and we were unable to recover it. 00:37:39.745 [2024-11-17 02:57:47.881088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.745 [2024-11-17 02:57:47.881151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.745 qpair failed and we were unable to recover it. 00:37:39.745 [2024-11-17 02:57:47.881322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.745 [2024-11-17 02:57:47.881370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.745 qpair failed and we were unable to recover it. 00:37:39.745 [2024-11-17 02:57:47.881605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.745 [2024-11-17 02:57:47.881642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.745 qpair failed and we were unable to recover it. 00:37:39.745 [2024-11-17 02:57:47.881872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.745 [2024-11-17 02:57:47.881908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.745 qpair failed and we were unable to recover it. 
00:37:39.745 [2024-11-17 02:57:47.882055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.745 [2024-11-17 02:57:47.882090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.745 qpair failed and we were unable to recover it. 00:37:39.745 [2024-11-17 02:57:47.882212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.745 [2024-11-17 02:57:47.882247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.745 qpair failed and we were unable to recover it. 00:37:39.745 [2024-11-17 02:57:47.882353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.745 [2024-11-17 02:57:47.882393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.745 qpair failed and we were unable to recover it. 00:37:39.745 [2024-11-17 02:57:47.882524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.745 [2024-11-17 02:57:47.882560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.745 qpair failed and we were unable to recover it. 00:37:39.745 [2024-11-17 02:57:47.882679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.745 [2024-11-17 02:57:47.882715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.745 qpair failed and we were unable to recover it. 
00:37:39.745 [2024-11-17 02:57:47.882850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.745 [2024-11-17 02:57:47.882885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.745 qpair failed and we were unable to recover it. 00:37:39.745 [2024-11-17 02:57:47.883021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.745 [2024-11-17 02:57:47.883056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.745 qpair failed and we were unable to recover it. 00:37:39.745 [2024-11-17 02:57:47.883199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.745 [2024-11-17 02:57:47.883247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.745 qpair failed and we were unable to recover it. 00:37:39.745 [2024-11-17 02:57:47.883386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.745 [2024-11-17 02:57:47.883423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.745 qpair failed and we were unable to recover it. 00:37:39.745 [2024-11-17 02:57:47.883575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.745 [2024-11-17 02:57:47.883640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.745 qpair failed and we were unable to recover it. 
00:37:39.745 [2024-11-17 02:57:47.883802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.745 [2024-11-17 02:57:47.883843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.745 qpair failed and we were unable to recover it. 00:37:39.745 [2024-11-17 02:57:47.883986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.745 [2024-11-17 02:57:47.884025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.745 qpair failed and we were unable to recover it. 00:37:39.745 [2024-11-17 02:57:47.884189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.745 [2024-11-17 02:57:47.884225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.745 qpair failed and we were unable to recover it. 00:37:39.745 [2024-11-17 02:57:47.884408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.745 [2024-11-17 02:57:47.884446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.745 qpair failed and we were unable to recover it. 00:37:39.745 [2024-11-17 02:57:47.884706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.745 [2024-11-17 02:57:47.884748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.745 qpair failed and we were unable to recover it. 
00:37:39.745 [2024-11-17 02:57:47.884984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.745 [2024-11-17 02:57:47.885024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.745 qpair failed and we were unable to recover it. 00:37:39.746 [2024-11-17 02:57:47.885195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.746 [2024-11-17 02:57:47.885231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.746 qpair failed and we were unable to recover it. 00:37:39.746 [2024-11-17 02:57:47.885368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.746 [2024-11-17 02:57:47.885427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.746 qpair failed and we were unable to recover it. 00:37:39.746 [2024-11-17 02:57:47.885674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.746 [2024-11-17 02:57:47.885736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.746 qpair failed and we were unable to recover it. 00:37:39.746 [2024-11-17 02:57:47.885924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.746 [2024-11-17 02:57:47.885988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.746 qpair failed and we were unable to recover it. 
00:37:39.746 [2024-11-17 02:57:47.886138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.746 [2024-11-17 02:57:47.886192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.746 qpair failed and we were unable to recover it. 00:37:39.746 [2024-11-17 02:57:47.886333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.746 [2024-11-17 02:57:47.886368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.746 qpair failed and we were unable to recover it. 00:37:39.746 [2024-11-17 02:57:47.886495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.746 [2024-11-17 02:57:47.886545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.746 qpair failed and we were unable to recover it. 00:37:39.746 [2024-11-17 02:57:47.886761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.746 [2024-11-17 02:57:47.886827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.746 qpair failed and we were unable to recover it. 00:37:39.746 [2024-11-17 02:57:47.886970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.746 [2024-11-17 02:57:47.887009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.746 qpair failed and we were unable to recover it. 
00:37:39.746 [2024-11-17 02:57:47.887165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.746 [2024-11-17 02:57:47.887202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.746 qpair failed and we were unable to recover it. 00:37:39.746 [2024-11-17 02:57:47.887359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.746 [2024-11-17 02:57:47.887393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.746 qpair failed and we were unable to recover it. 00:37:39.746 [2024-11-17 02:57:47.887610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.746 [2024-11-17 02:57:47.887692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.746 qpair failed and we were unable to recover it. 00:37:39.746 [2024-11-17 02:57:47.887851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.746 [2024-11-17 02:57:47.887951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.746 qpair failed and we were unable to recover it. 00:37:39.746 [2024-11-17 02:57:47.888116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.746 [2024-11-17 02:57:47.888153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.746 qpair failed and we were unable to recover it. 
00:37:39.746 [2024-11-17 02:57:47.888335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.746 [2024-11-17 02:57:47.888387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.746 qpair failed and we were unable to recover it. 00:37:39.746 [2024-11-17 02:57:47.888648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.746 [2024-11-17 02:57:47.888706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.746 qpair failed and we were unable to recover it. 00:37:39.746 [2024-11-17 02:57:47.888957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.746 [2024-11-17 02:57:47.889037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.746 qpair failed and we were unable to recover it. 00:37:39.746 [2024-11-17 02:57:47.889232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.746 [2024-11-17 02:57:47.889274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.746 qpair failed and we were unable to recover it. 00:37:39.746 [2024-11-17 02:57:47.889430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.746 [2024-11-17 02:57:47.889496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.746 qpair failed and we were unable to recover it. 
00:37:39.746 [2024-11-17 02:57:47.889779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.746 [2024-11-17 02:57:47.889841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.746 qpair failed and we were unable to recover it. 00:37:39.746 [2024-11-17 02:57:47.889980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.746 [2024-11-17 02:57:47.890013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.746 qpair failed and we were unable to recover it. 00:37:39.746 [2024-11-17 02:57:47.890142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.746 [2024-11-17 02:57:47.890177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.746 qpair failed and we were unable to recover it. 00:37:39.746 [2024-11-17 02:57:47.890316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.746 [2024-11-17 02:57:47.890351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.746 qpair failed and we were unable to recover it. 00:37:39.746 [2024-11-17 02:57:47.890633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.746 [2024-11-17 02:57:47.890687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.746 qpair failed and we were unable to recover it. 
00:37:39.746 [2024-11-17 02:57:47.890950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.746 [2024-11-17 02:57:47.891012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.746 qpair failed and we were unable to recover it. 00:37:39.746 [2024-11-17 02:57:47.891190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.746 [2024-11-17 02:57:47.891225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.746 qpair failed and we were unable to recover it. 00:37:39.746 [2024-11-17 02:57:47.891387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.746 [2024-11-17 02:57:47.891423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.746 qpair failed and we were unable to recover it. 00:37:39.746 [2024-11-17 02:57:47.891637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.746 [2024-11-17 02:57:47.891706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.746 qpair failed and we were unable to recover it. 00:37:39.746 [2024-11-17 02:57:47.891847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.746 [2024-11-17 02:57:47.891886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.746 qpair failed and we were unable to recover it. 
00:37:39.746 [2024-11-17 02:57:47.892022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.746 [2024-11-17 02:57:47.892072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.746 qpair failed and we were unable to recover it. 00:37:39.746 [2024-11-17 02:57:47.892223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.746 [2024-11-17 02:57:47.892284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.746 qpair failed and we were unable to recover it. 00:37:39.746 [2024-11-17 02:57:47.892445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.746 [2024-11-17 02:57:47.892486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.746 qpair failed and we were unable to recover it. 00:37:39.746 [2024-11-17 02:57:47.892606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.746 [2024-11-17 02:57:47.892658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.746 qpair failed and we were unable to recover it. 00:37:39.746 [2024-11-17 02:57:47.892815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.746 [2024-11-17 02:57:47.892859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.746 qpair failed and we were unable to recover it. 
00:37:39.746 [2024-11-17 02:57:47.892998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.746 [2024-11-17 02:57:47.893051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.746 qpair failed and we were unable to recover it. 00:37:39.747 [2024-11-17 02:57:47.893222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.747 [2024-11-17 02:57:47.893259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.747 qpair failed and we were unable to recover it. 00:37:39.747 [2024-11-17 02:57:47.893407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.747 [2024-11-17 02:57:47.893474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.747 qpair failed and we were unable to recover it. 00:37:39.747 [2024-11-17 02:57:47.893748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.747 [2024-11-17 02:57:47.893808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.747 qpair failed and we were unable to recover it. 00:37:39.747 [2024-11-17 02:57:47.893925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.747 [2024-11-17 02:57:47.893959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.747 qpair failed and we were unable to recover it. 
00:37:39.747 [2024-11-17 02:57:47.894103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.747 [2024-11-17 02:57:47.894159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.747 qpair failed and we were unable to recover it. 00:37:39.747 [2024-11-17 02:57:47.894306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.747 [2024-11-17 02:57:47.894346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.747 qpair failed and we were unable to recover it. 00:37:39.747 [2024-11-17 02:57:47.894498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.747 [2024-11-17 02:57:47.894537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.747 qpair failed and we were unable to recover it. 00:37:39.747 [2024-11-17 02:57:47.894676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.747 [2024-11-17 02:57:47.894714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.747 qpair failed and we were unable to recover it. 00:37:39.747 [2024-11-17 02:57:47.894844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.747 [2024-11-17 02:57:47.894884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.747 qpair failed and we were unable to recover it. 
00:37:39.747 [2024-11-17 02:57:47.895092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.747 [2024-11-17 02:57:47.895169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.747 qpair failed and we were unable to recover it. 00:37:39.747 [2024-11-17 02:57:47.895378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.747 [2024-11-17 02:57:47.895432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.747 qpair failed and we were unable to recover it. 00:37:39.747 [2024-11-17 02:57:47.895555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.747 [2024-11-17 02:57:47.895593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.747 qpair failed and we were unable to recover it. 00:37:39.747 [2024-11-17 02:57:47.895825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.747 [2024-11-17 02:57:47.895883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.747 qpair failed and we were unable to recover it. 00:37:39.747 [2024-11-17 02:57:47.895996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.747 [2024-11-17 02:57:47.896032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.747 qpair failed and we were unable to recover it. 
00:37:39.747 [2024-11-17 02:57:47.896186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.747 [2024-11-17 02:57:47.896240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.747 qpair failed and we were unable to recover it. 00:37:39.747 [2024-11-17 02:57:47.896359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.747 [2024-11-17 02:57:47.896399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.747 qpair failed and we were unable to recover it. 00:37:39.747 [2024-11-17 02:57:47.896543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.747 [2024-11-17 02:57:47.896597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.747 qpair failed and we were unable to recover it. 00:37:39.747 [2024-11-17 02:57:47.896732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.747 [2024-11-17 02:57:47.896773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.747 qpair failed and we were unable to recover it. 00:37:39.747 [2024-11-17 02:57:47.896928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.747 [2024-11-17 02:57:47.896963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.747 qpair failed and we were unable to recover it. 
00:37:39.747 [2024-11-17 02:57:47.897087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.747 [2024-11-17 02:57:47.897145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.747 qpair failed and we were unable to recover it. 00:37:39.747 [2024-11-17 02:57:47.897335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.747 [2024-11-17 02:57:47.897400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.747 qpair failed and we were unable to recover it. 00:37:39.747 [2024-11-17 02:57:47.897667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.747 [2024-11-17 02:57:47.897728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.747 qpair failed and we were unable to recover it. 00:37:39.747 [2024-11-17 02:57:47.897850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.747 [2024-11-17 02:57:47.897889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.747 qpair failed and we were unable to recover it. 00:37:39.747 [2024-11-17 02:57:47.898037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.747 [2024-11-17 02:57:47.898088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.747 qpair failed and we were unable to recover it. 
00:37:39.747 [2024-11-17 02:57:47.898259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.747 [2024-11-17 02:57:47.898313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.747 qpair failed and we were unable to recover it. 00:37:39.747 [2024-11-17 02:57:47.898521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.747 [2024-11-17 02:57:47.898575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.747 qpair failed and we were unable to recover it. 00:37:39.747 [2024-11-17 02:57:47.898821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.747 [2024-11-17 02:57:47.898862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.747 qpair failed and we were unable to recover it. 00:37:39.747 [2024-11-17 02:57:47.898992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.747 [2024-11-17 02:57:47.899033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.747 qpair failed and we were unable to recover it. 00:37:39.747 [2024-11-17 02:57:47.899167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.747 [2024-11-17 02:57:47.899202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.747 qpair failed and we were unable to recover it. 
00:37:39.747 [2024-11-17 02:57:47.899336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.747 [2024-11-17 02:57:47.899372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.747 qpair failed and we were unable to recover it. 00:37:39.747 [2024-11-17 02:57:47.899538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.747 [2024-11-17 02:57:47.899577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.747 qpair failed and we were unable to recover it. 00:37:39.747 [2024-11-17 02:57:47.899747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.747 [2024-11-17 02:57:47.899786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.747 qpair failed and we were unable to recover it. 00:37:39.748 [2024-11-17 02:57:47.899953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.748 [2024-11-17 02:57:47.900007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.748 qpair failed and we were unable to recover it. 00:37:39.748 [2024-11-17 02:57:47.900190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.748 [2024-11-17 02:57:47.900241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.748 qpair failed and we were unable to recover it. 
00:37:39.748 [2024-11-17 02:57:47.900387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.748 [2024-11-17 02:57:47.900436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.748 qpair failed and we were unable to recover it. 00:37:39.748 [2024-11-17 02:57:47.900697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.748 [2024-11-17 02:57:47.900767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.748 qpair failed and we were unable to recover it. 00:37:39.748 [2024-11-17 02:57:47.900920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.748 [2024-11-17 02:57:47.900959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.748 qpair failed and we were unable to recover it. 00:37:39.748 [2024-11-17 02:57:47.901152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.748 [2024-11-17 02:57:47.901187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.748 qpair failed and we were unable to recover it. 00:37:39.748 [2024-11-17 02:57:47.901332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.748 [2024-11-17 02:57:47.901372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.748 qpair failed and we were unable to recover it. 
00:37:39.748 [2024-11-17 02:57:47.901584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.748 [2024-11-17 02:57:47.901638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.748 qpair failed and we were unable to recover it. 00:37:39.748 [2024-11-17 02:57:47.901902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.748 [2024-11-17 02:57:47.901955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.748 qpair failed and we were unable to recover it. 00:37:39.748 [2024-11-17 02:57:47.902123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.748 [2024-11-17 02:57:47.902159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.748 qpair failed and we were unable to recover it. 00:37:39.748 [2024-11-17 02:57:47.902324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.748 [2024-11-17 02:57:47.902375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.748 qpair failed and we were unable to recover it. 00:37:39.748 [2024-11-17 02:57:47.902577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.748 [2024-11-17 02:57:47.902645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.748 qpair failed and we were unable to recover it. 
00:37:39.748 [2024-11-17 02:57:47.902840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.748 [2024-11-17 02:57:47.902903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.748 qpair failed and we were unable to recover it. 00:37:39.748 [2024-11-17 02:57:47.903063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.748 [2024-11-17 02:57:47.903109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.748 qpair failed and we were unable to recover it. 00:37:39.748 [2024-11-17 02:57:47.903244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.748 [2024-11-17 02:57:47.903279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.748 qpair failed and we were unable to recover it. 00:37:39.748 [2024-11-17 02:57:47.903424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.748 [2024-11-17 02:57:47.903458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.748 qpair failed and we were unable to recover it. 00:37:39.748 [2024-11-17 02:57:47.903606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.748 [2024-11-17 02:57:47.903644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.748 qpair failed and we were unable to recover it. 
00:37:39.748 [2024-11-17 02:57:47.903835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.748 [2024-11-17 02:57:47.903889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.748 qpair failed and we were unable to recover it. 00:37:39.748 [2024-11-17 02:57:47.904059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.748 [2024-11-17 02:57:47.904104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.748 qpair failed and we were unable to recover it. 00:37:39.748 [2024-11-17 02:57:47.904245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.748 [2024-11-17 02:57:47.904280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.748 qpair failed and we were unable to recover it. 00:37:39.748 [2024-11-17 02:57:47.904478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.748 [2024-11-17 02:57:47.904515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.748 qpair failed and we were unable to recover it. 00:37:39.748 [2024-11-17 02:57:47.904716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.748 [2024-11-17 02:57:47.904755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.748 qpair failed and we were unable to recover it. 
00:37:39.748 [2024-11-17 02:57:47.904926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.748 [2024-11-17 02:57:47.904964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.748 qpair failed and we were unable to recover it. 00:37:39.748 [2024-11-17 02:57:47.905109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.748 [2024-11-17 02:57:47.905165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.748 qpair failed and we were unable to recover it. 00:37:39.748 [2024-11-17 02:57:47.905289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.748 [2024-11-17 02:57:47.905338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.748 qpair failed and we were unable to recover it. 00:37:39.748 [2024-11-17 02:57:47.905552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.748 [2024-11-17 02:57:47.905589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.748 qpair failed and we were unable to recover it. 00:37:39.748 [2024-11-17 02:57:47.905714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.748 [2024-11-17 02:57:47.905773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.748 qpair failed and we were unable to recover it. 
00:37:39.748 [2024-11-17 02:57:47.905945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.749 [2024-11-17 02:57:47.905981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.749 qpair failed and we were unable to recover it. 00:37:39.749 [2024-11-17 02:57:47.906123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.749 [2024-11-17 02:57:47.906158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.749 qpair failed and we were unable to recover it. 00:37:39.749 [2024-11-17 02:57:47.906283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.749 [2024-11-17 02:57:47.906332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.749 qpair failed and we were unable to recover it. 00:37:39.749 [2024-11-17 02:57:47.906499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.749 [2024-11-17 02:57:47.906535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.749 qpair failed and we were unable to recover it. 00:37:39.749 [2024-11-17 02:57:47.906653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.749 [2024-11-17 02:57:47.906687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.749 qpair failed and we were unable to recover it. 
00:37:39.749 [2024-11-17 02:57:47.906829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.749 [2024-11-17 02:57:47.906863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.749 qpair failed and we were unable to recover it. 00:37:39.749 [2024-11-17 02:57:47.907014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.749 [2024-11-17 02:57:47.907049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.749 qpair failed and we were unable to recover it. 00:37:39.749 [2024-11-17 02:57:47.907187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.749 [2024-11-17 02:57:47.907238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.749 qpair failed and we were unable to recover it. 00:37:39.749 [2024-11-17 02:57:47.907428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.749 [2024-11-17 02:57:47.907467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.749 qpair failed and we were unable to recover it. 00:37:39.749 [2024-11-17 02:57:47.907606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.749 [2024-11-17 02:57:47.907654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.749 qpair failed and we were unable to recover it. 
00:37:39.749 [2024-11-17 02:57:47.907782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.749 [2024-11-17 02:57:47.907818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.749 qpair failed and we were unable to recover it. 00:37:39.749 [2024-11-17 02:57:47.907968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.749 [2024-11-17 02:57:47.908023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.749 qpair failed and we were unable to recover it. 00:37:39.749 [2024-11-17 02:57:47.908208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.749 [2024-11-17 02:57:47.908258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.749 qpair failed and we were unable to recover it. 00:37:39.749 [2024-11-17 02:57:47.908391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.749 [2024-11-17 02:57:47.908431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.749 qpair failed and we were unable to recover it. 00:37:39.749 [2024-11-17 02:57:47.908601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.749 [2024-11-17 02:57:47.908657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.749 qpair failed and we were unable to recover it. 
00:37:39.749 [2024-11-17 02:57:47.908815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.749 [2024-11-17 02:57:47.908870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.749 qpair failed and we were unable to recover it. 00:37:39.749 [2024-11-17 02:57:47.908991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.749 [2024-11-17 02:57:47.909028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.749 qpair failed and we were unable to recover it. 00:37:39.749 [2024-11-17 02:57:47.909216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.749 [2024-11-17 02:57:47.909252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.749 qpair failed and we were unable to recover it. 00:37:39.749 [2024-11-17 02:57:47.909370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.749 [2024-11-17 02:57:47.909416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.749 qpair failed and we were unable to recover it. 00:37:39.749 [2024-11-17 02:57:47.909554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.749 [2024-11-17 02:57:47.909596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.749 qpair failed and we were unable to recover it. 
00:37:39.749 [2024-11-17 02:57:47.909774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.749 [2024-11-17 02:57:47.909812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.749 qpair failed and we were unable to recover it. 00:37:39.749 [2024-11-17 02:57:47.909997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.749 [2024-11-17 02:57:47.910033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.749 qpair failed and we were unable to recover it. 00:37:39.749 [2024-11-17 02:57:47.910160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.749 [2024-11-17 02:57:47.910196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.749 qpair failed and we were unable to recover it. 00:37:39.749 [2024-11-17 02:57:47.910372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.749 [2024-11-17 02:57:47.910432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.749 qpair failed and we were unable to recover it. 00:37:39.749 [2024-11-17 02:57:47.910698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.749 [2024-11-17 02:57:47.910755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.749 qpair failed and we were unable to recover it. 
00:37:39.749 [2024-11-17 02:57:47.910927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.749 [2024-11-17 02:57:47.910965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.749 qpair failed and we were unable to recover it. 00:37:39.749 [2024-11-17 02:57:47.911133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.749 [2024-11-17 02:57:47.911168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.749 qpair failed and we were unable to recover it. 00:37:39.749 [2024-11-17 02:57:47.911270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.749 [2024-11-17 02:57:47.911303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.749 qpair failed and we were unable to recover it. 00:37:39.749 [2024-11-17 02:57:47.911457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.749 [2024-11-17 02:57:47.911495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.749 qpair failed and we were unable to recover it. 00:37:39.749 [2024-11-17 02:57:47.911743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.749 [2024-11-17 02:57:47.911802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.749 qpair failed and we were unable to recover it. 
00:37:39.749 [2024-11-17 02:57:47.911973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.749 [2024-11-17 02:57:47.912011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.749 qpair failed and we were unable to recover it. 00:37:39.749 [2024-11-17 02:57:47.912139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.749 [2024-11-17 02:57:47.912192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.749 qpair failed and we were unable to recover it. 00:37:39.749 [2024-11-17 02:57:47.912348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.749 [2024-11-17 02:57:47.912397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.749 qpair failed and we were unable to recover it. 00:37:39.749 [2024-11-17 02:57:47.912574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.750 [2024-11-17 02:57:47.912628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.750 qpair failed and we were unable to recover it. 00:37:39.750 [2024-11-17 02:57:47.912786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.750 [2024-11-17 02:57:47.912843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.750 qpair failed and we were unable to recover it. 
00:37:39.750 [2024-11-17 02:57:47.912958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.750 [2024-11-17 02:57:47.912993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.750 qpair failed and we were unable to recover it. 00:37:39.750 [2024-11-17 02:57:47.913154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.750 [2024-11-17 02:57:47.913204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.750 qpair failed and we were unable to recover it. 00:37:39.750 [2024-11-17 02:57:47.913387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.750 [2024-11-17 02:57:47.913435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.750 qpair failed and we were unable to recover it. 00:37:39.750 [2024-11-17 02:57:47.913578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.750 [2024-11-17 02:57:47.913614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.750 qpair failed and we were unable to recover it. 00:37:39.750 [2024-11-17 02:57:47.913744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.750 [2024-11-17 02:57:47.913779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.750 qpair failed and we were unable to recover it. 
00:37:39.750 [2024-11-17 02:57:47.913915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.750 [2024-11-17 02:57:47.913949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.750 qpair failed and we were unable to recover it. 00:37:39.750 [2024-11-17 02:57:47.914074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.750 [2024-11-17 02:57:47.914123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.750 qpair failed and we were unable to recover it. 00:37:39.750 [2024-11-17 02:57:47.914236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.750 [2024-11-17 02:57:47.914273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.750 qpair failed and we were unable to recover it. 00:37:39.750 [2024-11-17 02:57:47.914403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.750 [2024-11-17 02:57:47.914452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.750 qpair failed and we were unable to recover it. 00:37:39.750 [2024-11-17 02:57:47.914638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.750 [2024-11-17 02:57:47.914677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.750 qpair failed and we were unable to recover it. 
00:37:39.750 [2024-11-17 02:57:47.914782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.750 [2024-11-17 02:57:47.914837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.750 qpair failed and we were unable to recover it. 00:37:39.750 [2024-11-17 02:57:47.915012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.750 [2024-11-17 02:57:47.915046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.750 qpair failed and we were unable to recover it. 00:37:39.750 [2024-11-17 02:57:47.915166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.750 [2024-11-17 02:57:47.915200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.750 qpair failed and we were unable to recover it. 00:37:39.750 [2024-11-17 02:57:47.915317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.750 [2024-11-17 02:57:47.915352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.750 qpair failed and we were unable to recover it. 00:37:39.750 [2024-11-17 02:57:47.915489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.750 [2024-11-17 02:57:47.915540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.750 qpair failed and we were unable to recover it. 
00:37:39.750 [2024-11-17 02:57:47.915657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.750 [2024-11-17 02:57:47.915694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.750 qpair failed and we were unable to recover it. 00:37:39.750 [2024-11-17 02:57:47.915864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.750 [2024-11-17 02:57:47.915901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.750 qpair failed and we were unable to recover it. 00:37:39.750 [2024-11-17 02:57:47.916052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.750 [2024-11-17 02:57:47.916086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.750 qpair failed and we were unable to recover it. 00:37:39.750 [2024-11-17 02:57:47.916256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.750 [2024-11-17 02:57:47.916290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.750 qpair failed and we were unable to recover it. 00:37:39.750 [2024-11-17 02:57:47.916450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.750 [2024-11-17 02:57:47.916491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.750 qpair failed and we were unable to recover it. 
00:37:39.750 [2024-11-17 02:57:47.916669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.750 [2024-11-17 02:57:47.916708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.750 qpair failed and we were unable to recover it.
00:37:39.750 [2024-11-17 02:57:47.916858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.750 [2024-11-17 02:57:47.916896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.750 qpair failed and we were unable to recover it.
00:37:39.750 [2024-11-17 02:57:47.917037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.750 [2024-11-17 02:57:47.917072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.750 qpair failed and we were unable to recover it.
00:37:39.750 [2024-11-17 02:57:47.917280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.750 [2024-11-17 02:57:47.917315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.750 qpair failed and we were unable to recover it.
00:37:39.750 [2024-11-17 02:57:47.917462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.750 [2024-11-17 02:57:47.917505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.750 qpair failed and we were unable to recover it.
00:37:39.750 [2024-11-17 02:57:47.917648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.750 [2024-11-17 02:57:47.917686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.750 qpair failed and we were unable to recover it.
00:37:39.750 [2024-11-17 02:57:47.917893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.751 [2024-11-17 02:57:47.917932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.751 qpair failed and we were unable to recover it.
00:37:39.751 [2024-11-17 02:57:47.918087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.751 [2024-11-17 02:57:47.918149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.751 qpair failed and we were unable to recover it.
00:37:39.751 [2024-11-17 02:57:47.918246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.751 [2024-11-17 02:57:47.918281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.751 qpair failed and we were unable to recover it.
00:37:39.751 [2024-11-17 02:57:47.918459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.751 [2024-11-17 02:57:47.918508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.751 qpair failed and we were unable to recover it.
00:37:39.751 [2024-11-17 02:57:47.918677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.751 [2024-11-17 02:57:47.918730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.751 qpair failed and we were unable to recover it.
00:37:39.751 [2024-11-17 02:57:47.918893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.751 [2024-11-17 02:57:47.918946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.751 qpair failed and we were unable to recover it.
00:37:39.751 [2024-11-17 02:57:47.919119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.751 [2024-11-17 02:57:47.919155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.751 qpair failed and we were unable to recover it.
00:37:39.751 [2024-11-17 02:57:47.919283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.751 [2024-11-17 02:57:47.919338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.751 qpair failed and we were unable to recover it.
00:37:39.751 [2024-11-17 02:57:47.919513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.751 [2024-11-17 02:57:47.919594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.751 qpair failed and we were unable to recover it.
00:37:39.751 [2024-11-17 02:57:47.919754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.751 [2024-11-17 02:57:47.919806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.751 qpair failed and we were unable to recover it.
00:37:39.751 [2024-11-17 02:57:47.919940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.751 [2024-11-17 02:57:47.919975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.751 qpair failed and we were unable to recover it.
00:37:39.751 [2024-11-17 02:57:47.920123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.751 [2024-11-17 02:57:47.920194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.751 qpair failed and we were unable to recover it.
00:37:39.751 [2024-11-17 02:57:47.920369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.751 [2024-11-17 02:57:47.920409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.751 qpair failed and we were unable to recover it.
00:37:39.751 [2024-11-17 02:57:47.920541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.751 [2024-11-17 02:57:47.920576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.751 qpair failed and we were unable to recover it.
00:37:39.751 [2024-11-17 02:57:47.920748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.751 [2024-11-17 02:57:47.920786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.751 qpair failed and we were unable to recover it.
00:37:39.751 [2024-11-17 02:57:47.920932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.751 [2024-11-17 02:57:47.920971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.751 qpair failed and we were unable to recover it.
00:37:39.751 [2024-11-17 02:57:47.921153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.751 [2024-11-17 02:57:47.921204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.751 qpair failed and we were unable to recover it.
00:37:39.751 [2024-11-17 02:57:47.921352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.751 [2024-11-17 02:57:47.921391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.751 qpair failed and we were unable to recover it.
00:37:39.751 [2024-11-17 02:57:47.921566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.751 [2024-11-17 02:57:47.921601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.751 qpair failed and we were unable to recover it.
00:37:39.751 [2024-11-17 02:57:47.921708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.751 [2024-11-17 02:57:47.921744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.751 qpair failed and we were unable to recover it.
00:37:39.751 [2024-11-17 02:57:47.921897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.751 [2024-11-17 02:57:47.921951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.751 qpair failed and we were unable to recover it.
00:37:39.751 [2024-11-17 02:57:47.922127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.751 [2024-11-17 02:57:47.922176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.751 qpair failed and we were unable to recover it.
00:37:39.751 [2024-11-17 02:57:47.922362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.751 [2024-11-17 02:57:47.922415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.751 qpair failed and we were unable to recover it.
00:37:39.751 [2024-11-17 02:57:47.922682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.751 [2024-11-17 02:57:47.922745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.751 qpair failed and we were unable to recover it.
00:37:39.751 [2024-11-17 02:57:47.922952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.751 [2024-11-17 02:57:47.923012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.751 qpair failed and we were unable to recover it.
00:37:39.751 [2024-11-17 02:57:47.923175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.751 [2024-11-17 02:57:47.923211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.751 qpair failed and we were unable to recover it.
00:37:39.751 [2024-11-17 02:57:47.923403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.751 [2024-11-17 02:57:47.923458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.751 qpair failed and we were unable to recover it.
00:37:39.751 [2024-11-17 02:57:47.923607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.751 [2024-11-17 02:57:47.923648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.751 qpair failed and we were unable to recover it.
00:37:39.751 [2024-11-17 02:57:47.923865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.751 [2024-11-17 02:57:47.923930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.751 qpair failed and we were unable to recover it.
00:37:39.751 [2024-11-17 02:57:47.924113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.751 [2024-11-17 02:57:47.924149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.751 qpair failed and we were unable to recover it.
00:37:39.751 [2024-11-17 02:57:47.924307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.751 [2024-11-17 02:57:47.924346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.751 qpair failed and we were unable to recover it.
00:37:39.751 [2024-11-17 02:57:47.924582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.751 [2024-11-17 02:57:47.924620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.751 qpair failed and we were unable to recover it.
00:37:39.751 [2024-11-17 02:57:47.924831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.751 [2024-11-17 02:57:47.924887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.751 qpair failed and we were unable to recover it.
00:37:39.752 [2024-11-17 02:57:47.925036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.752 [2024-11-17 02:57:47.925070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.752 qpair failed and we were unable to recover it.
00:37:39.752 [2024-11-17 02:57:47.925225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.752 [2024-11-17 02:57:47.925274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.752 qpair failed and we were unable to recover it.
00:37:39.752 [2024-11-17 02:57:47.925445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.752 [2024-11-17 02:57:47.925500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.752 qpair failed and we were unable to recover it.
00:37:39.752 [2024-11-17 02:57:47.925832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.752 [2024-11-17 02:57:47.925891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.752 qpair failed and we were unable to recover it.
00:37:39.752 [2024-11-17 02:57:47.926055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.752 [2024-11-17 02:57:47.926116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.752 qpair failed and we were unable to recover it.
00:37:39.752 [2024-11-17 02:57:47.926275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.752 [2024-11-17 02:57:47.926315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.752 qpair failed and we were unable to recover it.
00:37:39.752 [2024-11-17 02:57:47.926463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.752 [2024-11-17 02:57:47.926531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.752 qpair failed and we were unable to recover it.
00:37:39.752 [2024-11-17 02:57:47.926739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.752 [2024-11-17 02:57:47.926796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.752 qpair failed and we were unable to recover it.
00:37:39.752 [2024-11-17 02:57:47.926983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.752 [2024-11-17 02:57:47.927022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.752 qpair failed and we were unable to recover it.
00:37:39.752 [2024-11-17 02:57:47.927155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.752 [2024-11-17 02:57:47.927201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.752 qpair failed and we were unable to recover it.
00:37:39.752 [2024-11-17 02:57:47.927315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.752 [2024-11-17 02:57:47.927351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.752 qpair failed and we were unable to recover it.
00:37:39.752 [2024-11-17 02:57:47.927495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.752 [2024-11-17 02:57:47.927547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.752 qpair failed and we were unable to recover it.
00:37:39.752 [2024-11-17 02:57:47.927667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.752 [2024-11-17 02:57:47.927707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.752 qpair failed and we were unable to recover it.
00:37:39.752 [2024-11-17 02:57:47.927865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.752 [2024-11-17 02:57:47.927904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.752 qpair failed and we were unable to recover it.
00:37:39.752 [2024-11-17 02:57:47.928063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.752 [2024-11-17 02:57:47.928105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.752 qpair failed and we were unable to recover it.
00:37:39.752 [2024-11-17 02:57:47.928216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.752 [2024-11-17 02:57:47.928251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.752 qpair failed and we were unable to recover it.
00:37:39.752 [2024-11-17 02:57:47.928410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.752 [2024-11-17 02:57:47.928464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.752 qpair failed and we were unable to recover it.
00:37:39.752 [2024-11-17 02:57:47.928689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.752 [2024-11-17 02:57:47.928729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.752 qpair failed and we were unable to recover it.
00:37:39.752 [2024-11-17 02:57:47.928869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.752 [2024-11-17 02:57:47.928920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.752 qpair failed and we were unable to recover it.
00:37:39.752 [2024-11-17 02:57:47.929068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.752 [2024-11-17 02:57:47.929113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.752 qpair failed and we were unable to recover it.
00:37:39.752 [2024-11-17 02:57:47.929268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.752 [2024-11-17 02:57:47.929308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.752 qpair failed and we were unable to recover it.
00:37:39.752 [2024-11-17 02:57:47.929492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.752 [2024-11-17 02:57:47.929530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.752 qpair failed and we were unable to recover it.
00:37:39.752 [2024-11-17 02:57:47.929805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.752 [2024-11-17 02:57:47.929868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.752 qpair failed and we were unable to recover it.
00:37:39.752 [2024-11-17 02:57:47.930092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.752 [2024-11-17 02:57:47.930164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.752 qpair failed and we were unable to recover it.
00:37:39.752 [2024-11-17 02:57:47.930300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.752 [2024-11-17 02:57:47.930334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.752 qpair failed and we were unable to recover it.
00:37:39.752 [2024-11-17 02:57:47.930520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.752 [2024-11-17 02:57:47.930573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.752 qpair failed and we were unable to recover it.
00:37:39.752 [2024-11-17 02:57:47.930894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.752 [2024-11-17 02:57:47.930951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.752 qpair failed and we were unable to recover it.
00:37:39.752 [2024-11-17 02:57:47.931108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.752 [2024-11-17 02:57:47.931162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.752 qpair failed and we were unable to recover it.
00:37:39.752 [2024-11-17 02:57:47.931321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.752 [2024-11-17 02:57:47.931355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.752 qpair failed and we were unable to recover it.
00:37:39.752 [2024-11-17 02:57:47.931501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.752 [2024-11-17 02:57:47.931553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.752 qpair failed and we were unable to recover it.
00:37:39.752 [2024-11-17 02:57:47.931783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.752 [2024-11-17 02:57:47.931821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.752 qpair failed and we were unable to recover it.
00:37:39.752 [2024-11-17 02:57:47.931987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.752 [2024-11-17 02:57:47.932023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.752 qpair failed and we were unable to recover it.
00:37:39.752 [2024-11-17 02:57:47.932216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.752 [2024-11-17 02:57:47.932266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.752 qpair failed and we were unable to recover it.
00:37:39.752 [2024-11-17 02:57:47.932478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.753 [2024-11-17 02:57:47.932554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.753 qpair failed and we were unable to recover it.
00:37:39.753 [2024-11-17 02:57:47.932797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.753 [2024-11-17 02:57:47.932857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.753 qpair failed and we were unable to recover it.
00:37:39.753 [2024-11-17 02:57:47.933021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.753 [2024-11-17 02:57:47.933056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.753 qpair failed and we were unable to recover it.
00:37:39.753 [2024-11-17 02:57:47.933224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.753 [2024-11-17 02:57:47.933260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.753 qpair failed and we were unable to recover it.
00:37:39.753 [2024-11-17 02:57:47.933404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.753 [2024-11-17 02:57:47.933454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.753 qpair failed and we were unable to recover it.
00:37:39.753 [2024-11-17 02:57:47.933583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.753 [2024-11-17 02:57:47.933625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.753 qpair failed and we were unable to recover it.
00:37:39.753 [2024-11-17 02:57:47.933776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.753 [2024-11-17 02:57:47.933814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.753 qpair failed and we were unable to recover it.
00:37:39.753 [2024-11-17 02:57:47.933977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.753 [2024-11-17 02:57:47.934032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.753 qpair failed and we were unable to recover it.
00:37:39.753 [2024-11-17 02:57:47.934182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.753 [2024-11-17 02:57:47.934217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.753 qpair failed and we were unable to recover it.
00:37:39.753 [2024-11-17 02:57:47.934357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.753 [2024-11-17 02:57:47.934390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.753 qpair failed and we were unable to recover it.
00:37:39.753 [2024-11-17 02:57:47.934555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.753 [2024-11-17 02:57:47.934589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.753 qpair failed and we were unable to recover it.
00:37:39.753 [2024-11-17 02:57:47.934734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.753 [2024-11-17 02:57:47.934773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.753 qpair failed and we were unable to recover it.
00:37:39.753 [2024-11-17 02:57:47.934951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.753 [2024-11-17 02:57:47.934996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.753 qpair failed and we were unable to recover it.
00:37:39.753 [2024-11-17 02:57:47.935131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.753 [2024-11-17 02:57:47.935186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.753 qpair failed and we were unable to recover it.
00:37:39.753 [2024-11-17 02:57:47.935310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.753 [2024-11-17 02:57:47.935345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.753 qpair failed and we were unable to recover it.
00:37:39.753 [2024-11-17 02:57:47.935482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.753 [2024-11-17 02:57:47.935514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.753 qpair failed and we were unable to recover it.
00:37:39.753 [2024-11-17 02:57:47.935670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.753 [2024-11-17 02:57:47.935708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.753 qpair failed and we were unable to recover it.
00:37:39.753 [2024-11-17 02:57:47.935865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.753 [2024-11-17 02:57:47.935899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.753 qpair failed and we were unable to recover it.
00:37:39.753 [2024-11-17 02:57:47.936074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.753 [2024-11-17 02:57:47.936115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.753 qpair failed and we were unable to recover it.
00:37:39.753 [2024-11-17 02:57:47.936246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.753 [2024-11-17 02:57:47.936279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.753 qpair failed and we were unable to recover it.
00:37:39.753 [2024-11-17 02:57:47.936474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.753 [2024-11-17 02:57:47.936508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.753 qpair failed and we were unable to recover it.
00:37:39.753 [2024-11-17 02:57:47.936620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.753 [2024-11-17 02:57:47.936657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.753 qpair failed and we were unable to recover it.
00:37:39.753 [2024-11-17 02:57:47.936842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.753 [2024-11-17 02:57:47.936884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.753 qpair failed and we were unable to recover it.
00:37:39.753 [2024-11-17 02:57:47.937071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.753 [2024-11-17 02:57:47.937133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.753 qpair failed and we were unable to recover it. 00:37:39.753 [2024-11-17 02:57:47.937265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.753 [2024-11-17 02:57:47.937300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.753 qpair failed and we were unable to recover it. 00:37:39.753 [2024-11-17 02:57:47.937416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.753 [2024-11-17 02:57:47.937450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.753 qpair failed and we were unable to recover it. 00:37:39.753 [2024-11-17 02:57:47.937604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.753 [2024-11-17 02:57:47.937688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.753 qpair failed and we were unable to recover it. 00:37:39.753 [2024-11-17 02:57:47.937870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.753 [2024-11-17 02:57:47.937909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.753 qpair failed and we were unable to recover it. 
00:37:39.753 [2024-11-17 02:57:47.938018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.753 [2024-11-17 02:57:47.938057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.753 qpair failed and we were unable to recover it. 00:37:39.753 [2024-11-17 02:57:47.938257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.753 [2024-11-17 02:57:47.938307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.753 qpair failed and we were unable to recover it. 00:37:39.753 [2024-11-17 02:57:47.938451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.753 [2024-11-17 02:57:47.938488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.753 qpair failed and we were unable to recover it. 00:37:39.753 [2024-11-17 02:57:47.938696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.753 [2024-11-17 02:57:47.938737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.754 qpair failed and we were unable to recover it. 00:37:39.754 [2024-11-17 02:57:47.938963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.754 [2024-11-17 02:57:47.939002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.754 qpair failed and we were unable to recover it. 
00:37:39.754 [2024-11-17 02:57:47.939183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.754 [2024-11-17 02:57:47.939232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.754 qpair failed and we were unable to recover it. 00:37:39.754 [2024-11-17 02:57:47.939357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.754 [2024-11-17 02:57:47.939395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.754 qpair failed and we were unable to recover it. 00:37:39.754 [2024-11-17 02:57:47.939552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.754 [2024-11-17 02:57:47.939604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.754 qpair failed and we were unable to recover it. 00:37:39.754 [2024-11-17 02:57:47.939861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.754 [2024-11-17 02:57:47.939918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.754 qpair failed and we were unable to recover it. 00:37:39.754 [2024-11-17 02:57:47.940079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.754 [2024-11-17 02:57:47.940121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.754 qpair failed and we were unable to recover it. 
00:37:39.754 [2024-11-17 02:57:47.940316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.754 [2024-11-17 02:57:47.940366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.754 qpair failed and we were unable to recover it.
00:37:39.754 [2024-11-17 02:57:47.940564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.754 [2024-11-17 02:57:47.940627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.754 qpair failed and we were unable to recover it.
00:37:39.754 [2024-11-17 02:57:47.940844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.754 [2024-11-17 02:57:47.940880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.754 qpair failed and we were unable to recover it.
00:37:39.754 [2024-11-17 02:57:47.941028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.754 [2024-11-17 02:57:47.941064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.754 qpair failed and we were unable to recover it.
00:37:39.754 [2024-11-17 02:57:47.941238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.754 [2024-11-17 02:57:47.941274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.754 qpair failed and we were unable to recover it.
00:37:39.754 [2024-11-17 02:57:47.941399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.754 [2024-11-17 02:57:47.941469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.754 qpair failed and we were unable to recover it.
00:37:39.754 [2024-11-17 02:57:47.941700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.754 [2024-11-17 02:57:47.941756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.754 qpair failed and we were unable to recover it.
00:37:39.754 [2024-11-17 02:57:47.941863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.754 [2024-11-17 02:57:47.941899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.754 qpair failed and we were unable to recover it.
00:37:39.754 [2024-11-17 02:57:47.942040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.754 [2024-11-17 02:57:47.942085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.754 qpair failed and we were unable to recover it.
00:37:39.754 [2024-11-17 02:57:47.942244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.754 [2024-11-17 02:57:47.942296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.754 qpair failed and we were unable to recover it.
00:37:39.754 [2024-11-17 02:57:47.942484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.754 [2024-11-17 02:57:47.942540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.754 qpair failed and we were unable to recover it.
00:37:39.754 [2024-11-17 02:57:47.942768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.754 [2024-11-17 02:57:47.942834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.754 qpair failed and we were unable to recover it.
00:37:39.754 [2024-11-17 02:57:47.943061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.754 [2024-11-17 02:57:47.943101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.754 qpair failed and we were unable to recover it.
00:37:39.754 [2024-11-17 02:57:47.943325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.754 [2024-11-17 02:57:47.943378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.754 qpair failed and we were unable to recover it.
00:37:39.754 [2024-11-17 02:57:47.943536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.754 [2024-11-17 02:57:47.943593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.754 qpair failed and we were unable to recover it.
00:37:39.754 [2024-11-17 02:57:47.943839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.754 [2024-11-17 02:57:47.943899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.754 qpair failed and we were unable to recover it.
00:37:39.754 [2024-11-17 02:57:47.944066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.754 [2024-11-17 02:57:47.944107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.754 qpair failed and we were unable to recover it.
00:37:39.754 [2024-11-17 02:57:47.944267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.754 [2024-11-17 02:57:47.944317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.754 qpair failed and we were unable to recover it.
00:37:39.754 [2024-11-17 02:57:47.944488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.754 [2024-11-17 02:57:47.944542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.754 qpair failed and we were unable to recover it.
00:37:39.755 [2024-11-17 02:57:47.944819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.755 [2024-11-17 02:57:47.944878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.755 qpair failed and we were unable to recover it.
00:37:39.755 [2024-11-17 02:57:47.945041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.755 [2024-11-17 02:57:47.945076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.755 qpair failed and we were unable to recover it.
00:37:39.755 [2024-11-17 02:57:47.945214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.755 [2024-11-17 02:57:47.945249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.755 qpair failed and we were unable to recover it.
00:37:39.755 [2024-11-17 02:57:47.945427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.755 [2024-11-17 02:57:47.945465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.755 qpair failed and we were unable to recover it.
00:37:39.755 [2024-11-17 02:57:47.945641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.755 [2024-11-17 02:57:47.945679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.755 qpair failed and we were unable to recover it.
00:37:39.755 [2024-11-17 02:57:47.945797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.755 [2024-11-17 02:57:47.945836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.755 qpair failed and we were unable to recover it.
00:37:39.755 [2024-11-17 02:57:47.945999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.755 [2024-11-17 02:57:47.946052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.755 qpair failed and we were unable to recover it.
00:37:39.755 [2024-11-17 02:57:47.946267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.755 [2024-11-17 02:57:47.946316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.755 qpair failed and we were unable to recover it.
00:37:39.755 [2024-11-17 02:57:47.946469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.755 [2024-11-17 02:57:47.946524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.755 qpair failed and we were unable to recover it.
00:37:39.755 [2024-11-17 02:57:47.946642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.755 [2024-11-17 02:57:47.946678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.755 qpair failed and we were unable to recover it.
00:37:39.755 [2024-11-17 02:57:47.946896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.755 [2024-11-17 02:57:47.946950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.755 qpair failed and we were unable to recover it.
00:37:39.755 [2024-11-17 02:57:47.947089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.755 [2024-11-17 02:57:47.947131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.755 qpair failed and we were unable to recover it.
00:37:39.755 [2024-11-17 02:57:47.947296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.755 [2024-11-17 02:57:47.947331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.755 qpair failed and we were unable to recover it.
00:37:39.755 [2024-11-17 02:57:47.947509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.755 [2024-11-17 02:57:47.947563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.755 qpair failed and we were unable to recover it.
00:37:39.755 [2024-11-17 02:57:47.947802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.755 [2024-11-17 02:57:47.947844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.755 qpair failed and we were unable to recover it.
00:37:39.755 [2024-11-17 02:57:47.947967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.755 [2024-11-17 02:57:47.948002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.755 qpair failed and we were unable to recover it.
00:37:39.755 [2024-11-17 02:57:47.948144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.755 [2024-11-17 02:57:47.948180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.755 qpair failed and we were unable to recover it.
00:37:39.755 [2024-11-17 02:57:47.948306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.755 [2024-11-17 02:57:47.948355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.755 qpair failed and we were unable to recover it.
00:37:39.755 [2024-11-17 02:57:47.948479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.755 [2024-11-17 02:57:47.948528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.755 qpair failed and we were unable to recover it.
00:37:39.755 [2024-11-17 02:57:47.948699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.755 [2024-11-17 02:57:47.948758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.755 qpair failed and we were unable to recover it.
00:37:39.755 [2024-11-17 02:57:47.948907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.755 [2024-11-17 02:57:47.948945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.755 qpair failed and we were unable to recover it.
00:37:39.755 [2024-11-17 02:57:47.949052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.755 [2024-11-17 02:57:47.949132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.755 qpair failed and we were unable to recover it.
00:37:39.755 [2024-11-17 02:57:47.949294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.755 [2024-11-17 02:57:47.949335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.755 qpair failed and we were unable to recover it.
00:37:39.755 [2024-11-17 02:57:47.949490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.755 [2024-11-17 02:57:47.949529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.755 qpair failed and we were unable to recover it.
00:37:39.755 [2024-11-17 02:57:47.949701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.755 [2024-11-17 02:57:47.949740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.755 qpair failed and we were unable to recover it.
00:37:39.755 [2024-11-17 02:57:47.949877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.755 [2024-11-17 02:57:47.949914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.755 qpair failed and we were unable to recover it.
00:37:39.755 [2024-11-17 02:57:47.950072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.755 [2024-11-17 02:57:47.950119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.755 qpair failed and we were unable to recover it.
00:37:39.755 [2024-11-17 02:57:47.950252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.755 [2024-11-17 02:57:47.950286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.755 qpair failed and we were unable to recover it.
00:37:39.755 [2024-11-17 02:57:47.950424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.755 [2024-11-17 02:57:47.950462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.755 qpair failed and we were unable to recover it.
00:37:39.755 [2024-11-17 02:57:47.950605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.755 [2024-11-17 02:57:47.950642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.755 qpair failed and we were unable to recover it.
00:37:39.755 [2024-11-17 02:57:47.950834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.755 [2024-11-17 02:57:47.950889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.755 qpair failed and we were unable to recover it.
00:37:39.755 [2024-11-17 02:57:47.951051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.755 [2024-11-17 02:57:47.951104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.755 qpair failed and we were unable to recover it.
00:37:39.755 [2024-11-17 02:57:47.951247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.756 [2024-11-17 02:57:47.951282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.756 qpair failed and we were unable to recover it.
00:37:39.756 [2024-11-17 02:57:47.951422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.756 [2024-11-17 02:57:47.951457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.756 qpair failed and we were unable to recover it.
00:37:39.756 [2024-11-17 02:57:47.951633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.756 [2024-11-17 02:57:47.951685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.756 qpair failed and we were unable to recover it.
00:37:39.756 [2024-11-17 02:57:47.951968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.756 [2024-11-17 02:57:47.952008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.756 qpair failed and we were unable to recover it.
00:37:39.756 [2024-11-17 02:57:47.952191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.756 [2024-11-17 02:57:47.952241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.756 qpair failed and we were unable to recover it.
00:37:39.756 [2024-11-17 02:57:47.952442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.756 [2024-11-17 02:57:47.952494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.756 qpair failed and we were unable to recover it.
00:37:39.756 [2024-11-17 02:57:47.952733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.756 [2024-11-17 02:57:47.952789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.756 qpair failed and we were unable to recover it.
00:37:39.756 [2024-11-17 02:57:47.952997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.756 [2024-11-17 02:57:47.953034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.756 qpair failed and we were unable to recover it.
00:37:39.756 [2024-11-17 02:57:47.953203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.756 [2024-11-17 02:57:47.953239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.756 qpair failed and we were unable to recover it.
00:37:39.756 [2024-11-17 02:57:47.953373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.756 [2024-11-17 02:57:47.953407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.756 qpair failed and we were unable to recover it.
00:37:39.756 [2024-11-17 02:57:47.953535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.756 [2024-11-17 02:57:47.953588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.756 qpair failed and we were unable to recover it.
00:37:39.756 [2024-11-17 02:57:47.953807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.756 [2024-11-17 02:57:47.953903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.756 qpair failed and we were unable to recover it.
00:37:39.756 [2024-11-17 02:57:47.954056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.756 [2024-11-17 02:57:47.954090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.756 qpair failed and we were unable to recover it.
00:37:39.756 [2024-11-17 02:57:47.954231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.756 [2024-11-17 02:57:47.954265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.756 qpair failed and we were unable to recover it.
00:37:39.756 [2024-11-17 02:57:47.954445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.756 [2024-11-17 02:57:47.954483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.756 qpair failed and we were unable to recover it.
00:37:39.756 [2024-11-17 02:57:47.954604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.756 [2024-11-17 02:57:47.954643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.756 qpair failed and we were unable to recover it.
00:37:39.756 [2024-11-17 02:57:47.954813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.756 [2024-11-17 02:57:47.954852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.756 qpair failed and we were unable to recover it.
00:37:39.756 [2024-11-17 02:57:47.955033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.756 [2024-11-17 02:57:47.955071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.756 qpair failed and we were unable to recover it.
00:37:39.756 [2024-11-17 02:57:47.955226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.756 [2024-11-17 02:57:47.955275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.756 qpair failed and we were unable to recover it.
00:37:39.756 [2024-11-17 02:57:47.955402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.756 [2024-11-17 02:57:47.955450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.756 qpair failed and we were unable to recover it.
00:37:39.756 [2024-11-17 02:57:47.955640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.756 [2024-11-17 02:57:47.955690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.756 qpair failed and we were unable to recover it.
00:37:39.756 [2024-11-17 02:57:47.955822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.756 [2024-11-17 02:57:47.955861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.756 qpair failed and we were unable to recover it.
00:37:39.756 [2024-11-17 02:57:47.955980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.756 [2024-11-17 02:57:47.956015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.756 qpair failed and we were unable to recover it.
00:37:39.756 [2024-11-17 02:57:47.956176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.756 [2024-11-17 02:57:47.956211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.756 qpair failed and we were unable to recover it.
00:37:39.756 [2024-11-17 02:57:47.956346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.756 [2024-11-17 02:57:47.956380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.756 qpair failed and we were unable to recover it.
00:37:39.756 [2024-11-17 02:57:47.956499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.756 [2024-11-17 02:57:47.956537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.756 qpair failed and we were unable to recover it.
00:37:39.756 [2024-11-17 02:57:47.956725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.756 [2024-11-17 02:57:47.956762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.756 qpair failed and we were unable to recover it.
00:37:39.756 [2024-11-17 02:57:47.956910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.756 [2024-11-17 02:57:47.956947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.756 qpair failed and we were unable to recover it.
00:37:39.756 [2024-11-17 02:57:47.957124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.756 [2024-11-17 02:57:47.957165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.756 qpair failed and we were unable to recover it.
00:37:39.756 [2024-11-17 02:57:47.957278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.756 [2024-11-17 02:57:47.957314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.756 qpair failed and we were unable to recover it.
00:37:39.756 [2024-11-17 02:57:47.957485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.756 [2024-11-17 02:57:47.957528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.756 qpair failed and we were unable to recover it.
00:37:39.756 [2024-11-17 02:57:47.957738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.756 [2024-11-17 02:57:47.957807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.756 qpair failed and we were unable to recover it.
00:37:39.756 [2024-11-17 02:57:47.957954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.756 [2024-11-17 02:57:47.958006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.756 qpair failed and we were unable to recover it.
00:37:39.756 [2024-11-17 02:57:47.958140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.756 [2024-11-17 02:57:47.958175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.756 qpair failed and we were unable to recover it.
00:37:39.757 [2024-11-17 02:57:47.958344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.757 [2024-11-17 02:57:47.958398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.757 qpair failed and we were unable to recover it.
00:37:39.757 [2024-11-17 02:57:47.958594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.757 [2024-11-17 02:57:47.958662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.757 qpair failed and we were unable to recover it.
00:37:39.757 [2024-11-17 02:57:47.958791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.757 [2024-11-17 02:57:47.958842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.757 qpair failed and we were unable to recover it.
00:37:39.757 [2024-11-17 02:57:47.959028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.757 [2024-11-17 02:57:47.959062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.757 qpair failed and we were unable to recover it.
00:37:39.757 [2024-11-17 02:57:47.959240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.757 [2024-11-17 02:57:47.959274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.757 qpair failed and we were unable to recover it.
00:37:39.757 [2024-11-17 02:57:47.959436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.757 [2024-11-17 02:57:47.959470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.757 qpair failed and we were unable to recover it. 00:37:39.757 [2024-11-17 02:57:47.959656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.757 [2024-11-17 02:57:47.959714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.757 qpair failed and we were unable to recover it. 00:37:39.757 [2024-11-17 02:57:47.959860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.757 [2024-11-17 02:57:47.959899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.757 qpair failed and we were unable to recover it. 00:37:39.757 [2024-11-17 02:57:47.960054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.757 [2024-11-17 02:57:47.960088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.757 qpair failed and we were unable to recover it. 00:37:39.757 [2024-11-17 02:57:47.960261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.757 [2024-11-17 02:57:47.960303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.757 qpair failed and we were unable to recover it. 
00:37:39.757 [2024-11-17 02:57:47.960424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.757 [2024-11-17 02:57:47.960462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.757 qpair failed and we were unable to recover it. 00:37:39.757 [2024-11-17 02:57:47.960637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.757 [2024-11-17 02:57:47.960675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.757 qpair failed and we were unable to recover it. 00:37:39.757 [2024-11-17 02:57:47.960792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.757 [2024-11-17 02:57:47.960831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.757 qpair failed and we were unable to recover it. 00:37:39.757 [2024-11-17 02:57:47.960979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.757 [2024-11-17 02:57:47.961029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.757 qpair failed and we were unable to recover it. 00:37:39.757 [2024-11-17 02:57:47.961174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.757 [2024-11-17 02:57:47.961210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.757 qpair failed and we were unable to recover it. 
00:37:39.757 [2024-11-17 02:57:47.961318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.757 [2024-11-17 02:57:47.961352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.757 qpair failed and we were unable to recover it. 00:37:39.757 [2024-11-17 02:57:47.961535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.757 [2024-11-17 02:57:47.961572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.757 qpair failed and we were unable to recover it. 00:37:39.757 [2024-11-17 02:57:47.961799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.757 [2024-11-17 02:57:47.961837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.757 qpair failed and we were unable to recover it. 00:37:39.757 [2024-11-17 02:57:47.961982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.757 [2024-11-17 02:57:47.962020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.757 qpair failed and we were unable to recover it. 00:37:39.757 [2024-11-17 02:57:47.962182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.757 [2024-11-17 02:57:47.962217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.757 qpair failed and we were unable to recover it. 
00:37:39.757 [2024-11-17 02:57:47.962348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.757 [2024-11-17 02:57:47.962382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.757 qpair failed and we were unable to recover it. 00:37:39.757 [2024-11-17 02:57:47.962528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.757 [2024-11-17 02:57:47.962596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.757 qpair failed and we were unable to recover it. 00:37:39.757 [2024-11-17 02:57:47.962862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.757 [2024-11-17 02:57:47.962920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.757 qpair failed and we were unable to recover it. 00:37:39.757 [2024-11-17 02:57:47.963101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.757 [2024-11-17 02:57:47.963139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.757 qpair failed and we were unable to recover it. 00:37:39.757 [2024-11-17 02:57:47.963310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.757 [2024-11-17 02:57:47.963346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.757 qpair failed and we were unable to recover it. 
00:37:39.757 [2024-11-17 02:57:47.963484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.757 [2024-11-17 02:57:47.963518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.757 qpair failed and we were unable to recover it. 00:37:39.757 [2024-11-17 02:57:47.963621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.757 [2024-11-17 02:57:47.963675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.757 qpair failed and we were unable to recover it. 00:37:39.757 [2024-11-17 02:57:47.963864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.757 [2024-11-17 02:57:47.963904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.757 qpair failed and we were unable to recover it. 00:37:39.757 [2024-11-17 02:57:47.964065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.757 [2024-11-17 02:57:47.964106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.757 qpair failed and we were unable to recover it. 00:37:39.757 [2024-11-17 02:57:47.964209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.757 [2024-11-17 02:57:47.964244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.757 qpair failed and we were unable to recover it. 
00:37:39.757 [2024-11-17 02:57:47.964377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.757 [2024-11-17 02:57:47.964412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.757 qpair failed and we were unable to recover it. 00:37:39.757 [2024-11-17 02:57:47.964624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.758 [2024-11-17 02:57:47.964661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.758 qpair failed and we were unable to recover it. 00:37:39.758 [2024-11-17 02:57:47.964839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.758 [2024-11-17 02:57:47.964877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.758 qpair failed and we were unable to recover it. 00:37:39.758 [2024-11-17 02:57:47.965065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.758 [2024-11-17 02:57:47.965113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.758 qpair failed and we were unable to recover it. 00:37:39.758 [2024-11-17 02:57:47.965269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.758 [2024-11-17 02:57:47.965304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.758 qpair failed and we were unable to recover it. 
00:37:39.758 [2024-11-17 02:57:47.965435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.758 [2024-11-17 02:57:47.965473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.758 qpair failed and we were unable to recover it. 00:37:39.758 [2024-11-17 02:57:47.965617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.758 [2024-11-17 02:57:47.965656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.758 qpair failed and we were unable to recover it. 00:37:39.758 [2024-11-17 02:57:47.965829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.758 [2024-11-17 02:57:47.965867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.758 qpair failed and we were unable to recover it. 00:37:39.758 [2024-11-17 02:57:47.966066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.758 [2024-11-17 02:57:47.966125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.758 qpair failed and we were unable to recover it. 00:37:39.758 [2024-11-17 02:57:47.966249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.758 [2024-11-17 02:57:47.966287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.758 qpair failed and we were unable to recover it. 
00:37:39.758 [2024-11-17 02:57:47.966427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.758 [2024-11-17 02:57:47.966482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.758 qpair failed and we were unable to recover it. 00:37:39.758 [2024-11-17 02:57:47.966662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.758 [2024-11-17 02:57:47.966715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.758 qpair failed and we were unable to recover it. 00:37:39.758 [2024-11-17 02:57:47.966828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.758 [2024-11-17 02:57:47.966864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.758 qpair failed and we were unable to recover it. 00:37:39.758 [2024-11-17 02:57:47.967027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.758 [2024-11-17 02:57:47.967062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.758 qpair failed and we were unable to recover it. 00:37:39.758 [2024-11-17 02:57:47.967268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.758 [2024-11-17 02:57:47.967322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.758 qpair failed and we were unable to recover it. 
00:37:39.758 [2024-11-17 02:57:47.967495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.758 [2024-11-17 02:57:47.967549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.758 qpair failed and we were unable to recover it. 00:37:39.758 [2024-11-17 02:57:47.967734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.758 [2024-11-17 02:57:47.967789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.758 qpair failed and we were unable to recover it. 00:37:39.758 [2024-11-17 02:57:47.967903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.758 [2024-11-17 02:57:47.967937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.758 qpair failed and we were unable to recover it. 00:37:39.758 [2024-11-17 02:57:47.968110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.758 [2024-11-17 02:57:47.968154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.758 qpair failed and we were unable to recover it. 00:37:39.758 [2024-11-17 02:57:47.968303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.758 [2024-11-17 02:57:47.968362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.758 qpair failed and we were unable to recover it. 
00:37:39.758 [2024-11-17 02:57:47.968509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.758 [2024-11-17 02:57:47.968561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.758 qpair failed and we were unable to recover it. 00:37:39.758 [2024-11-17 02:57:47.968719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.758 [2024-11-17 02:57:47.968760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.758 qpair failed and we were unable to recover it. 00:37:39.758 [2024-11-17 02:57:47.968865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.758 [2024-11-17 02:57:47.968904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.758 qpair failed and we were unable to recover it. 00:37:39.758 [2024-11-17 02:57:47.969058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.758 [2024-11-17 02:57:47.969092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.758 qpair failed and we were unable to recover it. 00:37:39.758 [2024-11-17 02:57:47.969235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.758 [2024-11-17 02:57:47.969270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.758 qpair failed and we were unable to recover it. 
00:37:39.758 [2024-11-17 02:57:47.969376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.758 [2024-11-17 02:57:47.969411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.758 qpair failed and we were unable to recover it. 00:37:39.758 [2024-11-17 02:57:47.969571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.758 [2024-11-17 02:57:47.969611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.758 qpair failed and we were unable to recover it. 00:37:39.758 [2024-11-17 02:57:47.969753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.758 [2024-11-17 02:57:47.969790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.758 qpair failed and we were unable to recover it. 00:37:39.758 [2024-11-17 02:57:47.969932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.758 [2024-11-17 02:57:47.969970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.758 qpair failed and we were unable to recover it. 00:37:39.758 [2024-11-17 02:57:47.970154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.758 [2024-11-17 02:57:47.970192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.758 qpair failed and we were unable to recover it. 
00:37:39.758 [2024-11-17 02:57:47.970364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.758 [2024-11-17 02:57:47.970418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.758 qpair failed and we were unable to recover it. 00:37:39.758 [2024-11-17 02:57:47.970610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.758 [2024-11-17 02:57:47.970652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.758 qpair failed and we were unable to recover it. 00:37:39.758 [2024-11-17 02:57:47.970782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.758 [2024-11-17 02:57:47.970823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.758 qpair failed and we were unable to recover it. 00:37:39.758 [2024-11-17 02:57:47.971017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.759 [2024-11-17 02:57:47.971057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.759 qpair failed and we were unable to recover it. 00:37:39.759 [2024-11-17 02:57:47.971223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.759 [2024-11-17 02:57:47.971265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.759 qpair failed and we were unable to recover it. 
00:37:39.759 [2024-11-17 02:57:47.971384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.759 [2024-11-17 02:57:47.971420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.759 qpair failed and we were unable to recover it. 00:37:39.759 [2024-11-17 02:57:47.971582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.759 [2024-11-17 02:57:47.971621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.759 qpair failed and we were unable to recover it. 00:37:39.759 [2024-11-17 02:57:47.971729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.759 [2024-11-17 02:57:47.971767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.759 qpair failed and we were unable to recover it. 00:37:39.759 [2024-11-17 02:57:47.971938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.759 [2024-11-17 02:57:47.971982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.759 qpair failed and we were unable to recover it. 00:37:39.759 [2024-11-17 02:57:47.972164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.759 [2024-11-17 02:57:47.972209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.759 qpair failed and we were unable to recover it. 
00:37:39.759 [2024-11-17 02:57:47.972385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.759 [2024-11-17 02:57:47.972423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.759 qpair failed and we were unable to recover it. 00:37:39.759 [2024-11-17 02:57:47.972556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.759 [2024-11-17 02:57:47.972613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.759 qpair failed and we were unable to recover it. 00:37:39.759 [2024-11-17 02:57:47.972925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.759 [2024-11-17 02:57:47.972999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.759 qpair failed and we were unable to recover it. 00:37:39.759 [2024-11-17 02:57:47.973152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.759 [2024-11-17 02:57:47.973205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.759 qpair failed and we were unable to recover it. 00:37:39.759 [2024-11-17 02:57:47.973321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.759 [2024-11-17 02:57:47.973355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.759 qpair failed and we were unable to recover it. 
00:37:39.759 [2024-11-17 02:57:47.973472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.759 [2024-11-17 02:57:47.973510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.759 qpair failed and we were unable to recover it. 00:37:39.759 [2024-11-17 02:57:47.973664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.759 [2024-11-17 02:57:47.973716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.759 qpair failed and we were unable to recover it. 00:37:39.759 [2024-11-17 02:57:47.973906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.759 [2024-11-17 02:57:47.973948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.759 qpair failed and we were unable to recover it. 00:37:39.759 [2024-11-17 02:57:47.974072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.759 [2024-11-17 02:57:47.974121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.759 qpair failed and we were unable to recover it. 00:37:39.759 [2024-11-17 02:57:47.974239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.759 [2024-11-17 02:57:47.974275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.759 qpair failed and we were unable to recover it. 
00:37:39.759 [2024-11-17 02:57:47.974395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.759 [2024-11-17 02:57:47.974435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.759 qpair failed and we were unable to recover it. 00:37:39.759 [2024-11-17 02:57:47.974581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.759 [2024-11-17 02:57:47.974616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.759 qpair failed and we were unable to recover it. 00:37:39.759 [2024-11-17 02:57:47.974786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.759 [2024-11-17 02:57:47.974852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.759 qpair failed and we were unable to recover it. 00:37:39.759 [2024-11-17 02:57:47.974978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.759 [2024-11-17 02:57:47.975021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.759 qpair failed and we were unable to recover it. 00:37:39.759 [2024-11-17 02:57:47.975185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.759 [2024-11-17 02:57:47.975221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.759 qpair failed and we were unable to recover it. 
00:37:39.759 [2024-11-17 02:57:47.975317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.759 [2024-11-17 02:57:47.975351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.759 qpair failed and we were unable to recover it. 00:37:39.759 [2024-11-17 02:57:47.975499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.759 [2024-11-17 02:57:47.975538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.759 qpair failed and we were unable to recover it. 00:37:39.759 [2024-11-17 02:57:47.975683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.759 [2024-11-17 02:57:47.975720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.759 qpair failed and we were unable to recover it. 00:37:39.759 [2024-11-17 02:57:47.975866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.759 [2024-11-17 02:57:47.975904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.759 qpair failed and we were unable to recover it. 00:37:39.759 [2024-11-17 02:57:47.976032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.759 [2024-11-17 02:57:47.976093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.759 qpair failed and we were unable to recover it. 
00:37:39.760 [2024-11-17 02:57:47.976267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.760 [2024-11-17 02:57:47.976301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.760 qpair failed and we were unable to recover it.
00:37:39.760 [2024-11-17 02:57:47.976517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.760 [2024-11-17 02:57:47.976553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.760 qpair failed and we were unable to recover it.
00:37:39.760 [2024-11-17 02:57:47.976712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.760 [2024-11-17 02:57:47.976762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.760 qpair failed and we were unable to recover it.
00:37:39.760 [2024-11-17 02:57:47.976952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.760 [2024-11-17 02:57:47.976987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.760 qpair failed and we were unable to recover it.
00:37:39.760 [2024-11-17 02:57:47.977131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.760 [2024-11-17 02:57:47.977167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.760 qpair failed and we were unable to recover it.
00:37:39.760 [2024-11-17 02:57:47.977279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.760 [2024-11-17 02:57:47.977314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.760 qpair failed and we were unable to recover it.
00:37:39.760 [2024-11-17 02:57:47.977570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.760 [2024-11-17 02:57:47.977608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.760 qpair failed and we were unable to recover it.
00:37:39.760 [2024-11-17 02:57:47.977754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.760 [2024-11-17 02:57:47.977799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.760 qpair failed and we were unable to recover it.
00:37:39.760 [2024-11-17 02:57:47.977955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.760 [2024-11-17 02:57:47.977992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.760 qpair failed and we were unable to recover it.
00:37:39.760 [2024-11-17 02:57:47.978152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.760 [2024-11-17 02:57:47.978187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.760 qpair failed and we were unable to recover it.
00:37:39.760 [2024-11-17 02:57:47.978329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.760 [2024-11-17 02:57:47.978365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.760 qpair failed and we were unable to recover it.
00:37:39.760 [2024-11-17 02:57:47.978557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.760 [2024-11-17 02:57:47.978595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.760 qpair failed and we were unable to recover it.
00:37:39.760 [2024-11-17 02:57:47.978763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.760 [2024-11-17 02:57:47.978809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.760 qpair failed and we were unable to recover it.
00:37:39.760 [2024-11-17 02:57:47.979006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.760 [2024-11-17 02:57:47.979044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.760 qpair failed and we were unable to recover it.
00:37:39.760 [2024-11-17 02:57:47.979201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.760 [2024-11-17 02:57:47.979236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.760 qpair failed and we were unable to recover it.
00:37:39.760 [2024-11-17 02:57:47.979343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.760 [2024-11-17 02:57:47.979401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.760 qpair failed and we were unable to recover it.
00:37:39.760 [2024-11-17 02:57:47.979586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.760 [2024-11-17 02:57:47.979620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.760 qpair failed and we were unable to recover it.
00:37:39.760 [2024-11-17 02:57:47.979731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.760 [2024-11-17 02:57:47.979785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.760 qpair failed and we were unable to recover it.
00:37:39.760 [2024-11-17 02:57:47.979926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.760 [2024-11-17 02:57:47.979964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.760 qpair failed and we were unable to recover it.
00:37:39.760 [2024-11-17 02:57:47.980104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.760 [2024-11-17 02:57:47.980140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.760 qpair failed and we were unable to recover it.
00:37:39.760 [2024-11-17 02:57:47.980244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.760 [2024-11-17 02:57:47.980278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.760 qpair failed and we were unable to recover it.
00:37:39.760 [2024-11-17 02:57:47.980417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.760 [2024-11-17 02:57:47.980452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.760 qpair failed and we were unable to recover it.
00:37:39.760 [2024-11-17 02:57:47.980591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.760 [2024-11-17 02:57:47.980629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.760 qpair failed and we were unable to recover it.
00:37:39.760 [2024-11-17 02:57:47.980748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.760 [2024-11-17 02:57:47.980794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.760 qpair failed and we were unable to recover it.
00:37:39.760 [2024-11-17 02:57:47.980980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.760 [2024-11-17 02:57:47.981018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.760 qpair failed and we were unable to recover it.
00:37:39.760 [2024-11-17 02:57:47.981164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.760 [2024-11-17 02:57:47.981200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.760 qpair failed and we were unable to recover it.
00:37:39.760 [2024-11-17 02:57:47.981302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.760 [2024-11-17 02:57:47.981342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.760 qpair failed and we were unable to recover it.
00:37:39.760 [2024-11-17 02:57:47.981478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.760 [2024-11-17 02:57:47.981515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.760 qpair failed and we were unable to recover it.
00:37:39.760 [2024-11-17 02:57:47.981697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.760 [2024-11-17 02:57:47.981735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.760 qpair failed and we were unable to recover it.
00:37:39.760 [2024-11-17 02:57:47.981879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.760 [2024-11-17 02:57:47.981916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.760 qpair failed and we were unable to recover it.
00:37:39.760 [2024-11-17 02:57:47.982067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.760 [2024-11-17 02:57:47.982118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.760 qpair failed and we were unable to recover it.
00:37:39.761 [2024-11-17 02:57:47.982265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.761 [2024-11-17 02:57:47.982315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.761 qpair failed and we were unable to recover it.
00:37:39.761 [2024-11-17 02:57:47.982519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.761 [2024-11-17 02:57:47.982573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.761 qpair failed and we were unable to recover it.
00:37:39.761 [2024-11-17 02:57:47.982770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.761 [2024-11-17 02:57:47.982812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.761 qpair failed and we were unable to recover it.
00:37:39.761 [2024-11-17 02:57:47.982934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.761 [2024-11-17 02:57:47.982978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.761 qpair failed and we were unable to recover it.
00:37:39.761 [2024-11-17 02:57:47.983111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.761 [2024-11-17 02:57:47.983148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.761 qpair failed and we were unable to recover it.
00:37:39.761 [2024-11-17 02:57:47.983314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.761 [2024-11-17 02:57:47.983352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.761 qpair failed and we were unable to recover it.
00:37:39.761 [2024-11-17 02:57:47.983498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.761 [2024-11-17 02:57:47.983543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.761 qpair failed and we were unable to recover it.
00:37:39.761 [2024-11-17 02:57:47.983825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.761 [2024-11-17 02:57:47.983881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.761 qpair failed and we were unable to recover it.
00:37:39.761 [2024-11-17 02:57:47.984044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.761 [2024-11-17 02:57:47.984091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.761 qpair failed and we were unable to recover it.
00:37:39.761 [2024-11-17 02:57:47.984249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.761 [2024-11-17 02:57:47.984283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.761 qpair failed and we were unable to recover it.
00:37:39.761 [2024-11-17 02:57:47.984426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.761 [2024-11-17 02:57:47.984463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.761 qpair failed and we were unable to recover it.
00:37:39.761 [2024-11-17 02:57:47.984574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.761 [2024-11-17 02:57:47.984627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.761 qpair failed and we were unable to recover it.
00:37:39.761 [2024-11-17 02:57:47.984783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.761 [2024-11-17 02:57:47.984838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.761 qpair failed and we were unable to recover it.
00:37:39.761 [2024-11-17 02:57:47.985073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.761 [2024-11-17 02:57:47.985123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.761 qpair failed and we were unable to recover it.
00:37:39.761 [2024-11-17 02:57:47.985278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.761 [2024-11-17 02:57:47.985327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.761 qpair failed and we were unable to recover it.
00:37:39.761 [2024-11-17 02:57:47.985526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.761 [2024-11-17 02:57:47.985594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.761 qpair failed and we were unable to recover it.
00:37:39.761 [2024-11-17 02:57:47.985822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.761 [2024-11-17 02:57:47.985880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.761 qpair failed and we were unable to recover it.
00:37:39.761 [2024-11-17 02:57:47.986027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.761 [2024-11-17 02:57:47.986067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.761 qpair failed and we were unable to recover it.
00:37:39.761 [2024-11-17 02:57:47.986257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.761 [2024-11-17 02:57:47.986292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.761 qpair failed and we were unable to recover it.
00:37:39.761 [2024-11-17 02:57:47.986449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.761 [2024-11-17 02:57:47.986492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.761 qpair failed and we were unable to recover it.
00:37:39.761 [2024-11-17 02:57:47.986625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.761 [2024-11-17 02:57:47.986698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.761 qpair failed and we were unable to recover it.
00:37:39.761 [2024-11-17 02:57:47.986876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.761 [2024-11-17 02:57:47.986915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.761 qpair failed and we were unable to recover it.
00:37:39.761 [2024-11-17 02:57:47.987060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.761 [2024-11-17 02:57:47.987116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.761 qpair failed and we were unable to recover it.
00:37:39.761 [2024-11-17 02:57:47.987223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.761 [2024-11-17 02:57:47.987258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.761 qpair failed and we were unable to recover it.
00:37:39.761 [2024-11-17 02:57:47.987440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.761 [2024-11-17 02:57:47.987503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.761 qpair failed and we were unable to recover it.
00:37:39.761 [2024-11-17 02:57:47.987726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.761 [2024-11-17 02:57:47.987786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.761 qpair failed and we were unable to recover it.
00:37:39.761 [2024-11-17 02:57:47.988005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.761 [2024-11-17 02:57:47.988051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.761 qpair failed and we were unable to recover it.
00:37:39.761 [2024-11-17 02:57:47.988244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.761 [2024-11-17 02:57:47.988294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.761 qpair failed and we were unable to recover it.
00:37:39.761 [2024-11-17 02:57:47.988417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.761 [2024-11-17 02:57:47.988488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.761 qpair failed and we were unable to recover it.
00:37:39.761 [2024-11-17 02:57:47.988768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.761 [2024-11-17 02:57:47.988829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.761 qpair failed and we were unable to recover it.
00:37:39.761 [2024-11-17 02:57:47.989014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.761 [2024-11-17 02:57:47.989053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.761 qpair failed and we were unable to recover it.
00:37:39.761 [2024-11-17 02:57:47.989223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.761 [2024-11-17 02:57:47.989260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.761 qpair failed and we were unable to recover it.
00:37:39.761 [2024-11-17 02:57:47.989425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.761 [2024-11-17 02:57:47.989460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.761 qpair failed and we were unable to recover it.
00:37:39.762 [2024-11-17 02:57:47.989655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.762 [2024-11-17 02:57:47.989713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.762 qpair failed and we were unable to recover it.
00:37:39.762 [2024-11-17 02:57:47.989969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.762 [2024-11-17 02:57:47.990029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.762 qpair failed and we were unable to recover it.
00:37:39.762 [2024-11-17 02:57:47.990220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.762 [2024-11-17 02:57:47.990260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.762 qpair failed and we were unable to recover it.
00:37:39.762 [2024-11-17 02:57:47.990365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.762 [2024-11-17 02:57:47.990405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.762 qpair failed and we were unable to recover it.
00:37:39.762 [2024-11-17 02:57:47.990574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.762 [2024-11-17 02:57:47.990609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.762 qpair failed and we were unable to recover it.
00:37:39.762 [2024-11-17 02:57:47.990788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.762 [2024-11-17 02:57:47.990827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.762 qpair failed and we were unable to recover it.
00:37:39.762 [2024-11-17 02:57:47.991021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.762 [2024-11-17 02:57:47.991077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.762 qpair failed and we were unable to recover it.
00:37:39.762 [2024-11-17 02:57:47.991261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.762 [2024-11-17 02:57:47.991298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.762 qpair failed and we were unable to recover it.
00:37:39.762 [2024-11-17 02:57:47.991405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.762 [2024-11-17 02:57:47.991440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.762 qpair failed and we were unable to recover it.
00:37:39.762 [2024-11-17 02:57:47.991573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.762 [2024-11-17 02:57:47.991633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.762 qpair failed and we were unable to recover it.
00:37:39.762 [2024-11-17 02:57:47.991906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.762 [2024-11-17 02:57:47.991965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.762 qpair failed and we were unable to recover it.
00:37:39.762 [2024-11-17 02:57:47.992159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.762 [2024-11-17 02:57:47.992194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.762 qpair failed and we were unable to recover it.
00:37:39.762 [2024-11-17 02:57:47.992317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.762 [2024-11-17 02:57:47.992353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.762 qpair failed and we were unable to recover it.
00:37:39.762 [2024-11-17 02:57:47.992526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.762 [2024-11-17 02:57:47.992564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.762 qpair failed and we were unable to recover it.
00:37:39.762 [2024-11-17 02:57:47.992770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.762 [2024-11-17 02:57:47.992808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.762 qpair failed and we were unable to recover it.
00:37:39.762 [2024-11-17 02:57:47.992957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.762 [2024-11-17 02:57:47.992996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.762 qpair failed and we were unable to recover it.
00:37:39.762 [2024-11-17 02:57:47.993168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.762 [2024-11-17 02:57:47.993205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.762 qpair failed and we were unable to recover it.
00:37:39.762 [2024-11-17 02:57:47.993364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.762 [2024-11-17 02:57:47.993398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.762 qpair failed and we were unable to recover it.
00:37:39.762 [2024-11-17 02:57:47.993522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.762 [2024-11-17 02:57:47.993561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.762 qpair failed and we were unable to recover it.
00:37:39.762 [2024-11-17 02:57:47.993716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.762 [2024-11-17 02:57:47.993754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.762 qpair failed and we were unable to recover it.
00:37:39.762 [2024-11-17 02:57:47.993904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.762 [2024-11-17 02:57:47.993943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.762 qpair failed and we were unable to recover it.
00:37:39.762 [2024-11-17 02:57:47.994076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.762 [2024-11-17 02:57:47.994163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.762 qpair failed and we were unable to recover it.
00:37:39.762 [2024-11-17 02:57:47.994308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.762 [2024-11-17 02:57:47.994342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.762 qpair failed and we were unable to recover it.
00:37:39.762 [2024-11-17 02:57:47.994544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.762 [2024-11-17 02:57:47.994593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.762 qpair failed and we were unable to recover it.
00:37:39.762 [2024-11-17 02:57:47.994852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.762 [2024-11-17 02:57:47.994911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.762 qpair failed and we were unable to recover it.
00:37:39.762 [2024-11-17 02:57:47.995026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.762 [2024-11-17 02:57:47.995063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.762 qpair failed and we were unable to recover it.
00:37:39.762 [2024-11-17 02:57:47.995222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.762 [2024-11-17 02:57:47.995258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.762 qpair failed and we were unable to recover it.
00:37:39.762 [2024-11-17 02:57:47.995359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.762 [2024-11-17 02:57:47.995395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.762 qpair failed and we were unable to recover it.
00:37:39.762 [2024-11-17 02:57:47.995595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.762 [2024-11-17 02:57:47.995633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.762 qpair failed and we were unable to recover it.
00:37:39.762 [2024-11-17 02:57:47.995776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.762 [2024-11-17 02:57:47.995841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.762 qpair failed and we were unable to recover it.
00:37:39.762 [2024-11-17 02:57:47.996016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.762 [2024-11-17 02:57:47.996054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.762 qpair failed and we were unable to recover it.
00:37:39.762 [2024-11-17 02:57:47.996206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.762 [2024-11-17 02:57:47.996241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.762 qpair failed and we were unable to recover it.
00:37:39.762 [2024-11-17 02:57:47.996401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.762 [2024-11-17 02:57:47.996440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.762 qpair failed and we were unable to recover it.
00:37:39.762 [2024-11-17 02:57:47.996700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.762 [2024-11-17 02:57:47.996737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.762 qpair failed and we were unable to recover it.
00:37:39.762 [2024-11-17 02:57:47.996892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.762 [2024-11-17 02:57:47.996944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.762 qpair failed and we were unable to recover it.
00:37:39.762 [2024-11-17 02:57:47.997077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.762 [2024-11-17 02:57:47.997120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.762 qpair failed and we were unable to recover it.
00:37:39.762 [2024-11-17 02:57:47.997239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.762 [2024-11-17 02:57:47.997277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.763 qpair failed and we were unable to recover it.
00:37:39.763 [2024-11-17 02:57:47.997386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.763 [2024-11-17 02:57:47.997420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.763 qpair failed and we were unable to recover it.
00:37:39.763 [2024-11-17 02:57:47.997564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.763 [2024-11-17 02:57:47.997602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.763 qpair failed and we were unable to recover it.
00:37:39.763 [2024-11-17 02:57:47.997754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.763 [2024-11-17 02:57:47.997792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.763 qpair failed and we were unable to recover it.
00:37:39.763 [2024-11-17 02:57:47.997936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.763 [2024-11-17 02:57:47.997974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.763 qpair failed and we were unable to recover it.
00:37:39.763 [2024-11-17 02:57:47.998163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.763 [2024-11-17 02:57:47.998212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.763 qpair failed and we were unable to recover it.
00:37:39.763 [2024-11-17 02:57:47.998358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.763 [2024-11-17 02:57:47.998411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.763 qpair failed and we were unable to recover it.
00:37:39.763 [2024-11-17 02:57:47.998562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.763 [2024-11-17 02:57:47.998611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.763 qpair failed and we were unable to recover it.
00:37:39.763 [2024-11-17 02:57:47.998751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.763 [2024-11-17 02:57:47.998788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.763 qpair failed and we were unable to recover it.
00:37:39.763 [2024-11-17 02:57:47.998957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.763 [2024-11-17 02:57:47.998997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.763 qpair failed and we were unable to recover it.
00:37:39.763 [2024-11-17 02:57:47.999155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.763 [2024-11-17 02:57:47.999191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.763 qpair failed and we were unable to recover it.
00:37:39.763 [2024-11-17 02:57:47.999302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.763 [2024-11-17 02:57:47.999338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.763 qpair failed and we were unable to recover it.
00:37:39.763 [2024-11-17 02:57:47.999497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.763 [2024-11-17 02:57:47.999535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.763 qpair failed and we were unable to recover it.
00:37:39.763 [2024-11-17 02:57:47.999682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.763 [2024-11-17 02:57:47.999720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.763 qpair failed and we were unable to recover it.
00:37:39.763 [2024-11-17 02:57:47.999901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.763 [2024-11-17 02:57:47.999939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.763 qpair failed and we were unable to recover it.
00:37:39.763 [2024-11-17 02:57:48.000089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.763 [2024-11-17 02:57:48.000133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.763 qpair failed and we were unable to recover it.
00:37:39.763 [2024-11-17 02:57:48.000291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.763 [2024-11-17 02:57:48.000328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.763 qpair failed and we were unable to recover it.
00:37:39.763 [2024-11-17 02:57:48.000484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.763 [2024-11-17 02:57:48.000522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.763 qpair failed and we were unable to recover it.
00:37:39.763 [2024-11-17 02:57:48.000674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.763 [2024-11-17 02:57:48.000712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.763 qpair failed and we were unable to recover it.
00:37:39.763 [2024-11-17 02:57:48.000874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.763 [2024-11-17 02:57:48.000916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.763 qpair failed and we were unable to recover it.
00:37:39.763 [2024-11-17 02:57:48.001045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.763 [2024-11-17 02:57:48.001086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.763 qpair failed and we were unable to recover it.
00:37:39.763 [2024-11-17 02:57:48.001262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.763 [2024-11-17 02:57:48.001298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.763 qpair failed and we were unable to recover it.
00:37:39.763 [2024-11-17 02:57:48.001471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.763 [2024-11-17 02:57:48.001507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.763 qpair failed and we were unable to recover it.
00:37:39.763 [2024-11-17 02:57:48.001641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.763 [2024-11-17 02:57:48.001676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.763 qpair failed and we were unable to recover it.
00:37:39.763 [2024-11-17 02:57:48.001855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.763 [2024-11-17 02:57:48.001915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.763 qpair failed and we were unable to recover it.
00:37:39.763 [2024-11-17 02:57:48.002106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.763 [2024-11-17 02:57:48.002143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.763 qpair failed and we were unable to recover it.
00:37:39.763 [2024-11-17 02:57:48.002250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.763 [2024-11-17 02:57:48.002287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.763 qpair failed and we were unable to recover it.
00:37:39.763 [2024-11-17 02:57:48.002430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.763 [2024-11-17 02:57:48.002464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.763 qpair failed and we were unable to recover it.
00:37:39.763 [2024-11-17 02:57:48.002674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.763 [2024-11-17 02:57:48.002709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.763 qpair failed and we were unable to recover it.
00:37:39.763 [2024-11-17 02:57:48.002977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.763 [2024-11-17 02:57:48.003015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.763 qpair failed and we were unable to recover it.
00:37:39.763 [2024-11-17 02:57:48.003147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.763 [2024-11-17 02:57:48.003185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.763 qpair failed and we were unable to recover it.
00:37:39.763 [2024-11-17 02:57:48.003300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.763 [2024-11-17 02:57:48.003339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.763 qpair failed and we were unable to recover it.
00:37:39.763 [2024-11-17 02:57:48.003505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.763 [2024-11-17 02:57:48.003545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.764 [2024-11-17 02:57:48.003822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.764 [2024-11-17 02:57:48.003880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.764 qpair failed and we were unable to recover it.
00:37:39.764 [2024-11-17 02:57:48.004002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.764 [2024-11-17 02:57:48.004044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.764 qpair failed and we were unable to recover it.
00:37:39.764 [2024-11-17 02:57:48.004187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.764 [2024-11-17 02:57:48.004222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.764 qpair failed and we were unable to recover it.
00:37:39.764 [2024-11-17 02:57:48.004339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.764 [2024-11-17 02:57:48.004377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.764 qpair failed and we were unable to recover it.
00:37:39.764 [2024-11-17 02:57:48.004490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.764 [2024-11-17 02:57:48.004527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.764 qpair failed and we were unable to recover it.
00:37:39.764 [2024-11-17 02:57:48.004680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.764 [2024-11-17 02:57:48.004756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.764 qpair failed and we were unable to recover it.
00:37:39.764 [2024-11-17 02:57:48.004895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.764 [2024-11-17 02:57:48.004932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.764 qpair failed and we were unable to recover it.
00:37:39.764 [2024-11-17 02:57:48.005077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.764 [2024-11-17 02:57:48.005135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.764 qpair failed and we were unable to recover it.
00:37:39.764 [2024-11-17 02:57:48.005322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.764 [2024-11-17 02:57:48.005361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.764 qpair failed and we were unable to recover it.
00:37:39.764 [2024-11-17 02:57:48.005491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.764 [2024-11-17 02:57:48.005538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.764 qpair failed and we were unable to recover it.
00:37:39.764 [2024-11-17 02:57:48.005701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.764 [2024-11-17 02:57:48.005737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.764 qpair failed and we were unable to recover it.
00:37:39.764 [2024-11-17 02:57:48.005874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.764 [2024-11-17 02:57:48.005936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.764 qpair failed and we were unable to recover it.
00:37:39.764 [2024-11-17 02:57:48.006050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.764 [2024-11-17 02:57:48.006091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.764 qpair failed and we were unable to recover it.
00:37:39.764 [2024-11-17 02:57:48.006274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.764 [2024-11-17 02:57:48.006315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.764 qpair failed and we were unable to recover it.
00:37:39.764 [2024-11-17 02:57:48.006555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.764 [2024-11-17 02:57:48.006617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.764 qpair failed and we were unable to recover it.
00:37:39.764 [2024-11-17 02:57:48.006829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.764 [2024-11-17 02:57:48.006887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.764 qpair failed and we were unable to recover it.
00:37:39.764 [2024-11-17 02:57:48.007072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.764 [2024-11-17 02:57:48.007119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.764 qpair failed and we were unable to recover it.
00:37:39.764 [2024-11-17 02:57:48.007250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.764 [2024-11-17 02:57:48.007291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.764 qpair failed and we were unable to recover it.
00:37:39.764 [2024-11-17 02:57:48.007393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.764 [2024-11-17 02:57:48.007444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.764 qpair failed and we were unable to recover it.
00:37:39.764 [2024-11-17 02:57:48.007639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.764 [2024-11-17 02:57:48.007708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.764 qpair failed and we were unable to recover it.
00:37:39.764 [2024-11-17 02:57:48.007813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.764 [2024-11-17 02:57:48.007854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.764 qpair failed and we were unable to recover it.
00:37:39.764 [2024-11-17 02:57:48.008010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.764 [2024-11-17 02:57:48.008048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.764 qpair failed and we were unable to recover it.
00:37:39.764 [2024-11-17 02:57:48.008235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.764 [2024-11-17 02:57:48.008270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.764 qpair failed and we were unable to recover it.
00:37:39.764 [2024-11-17 02:57:48.008384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.764 [2024-11-17 02:57:48.008422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.764 qpair failed and we were unable to recover it.
00:37:39.764 [2024-11-17 02:57:48.008566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.764 [2024-11-17 02:57:48.008603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.764 qpair failed and we were unable to recover it.
00:37:39.764 [2024-11-17 02:57:48.008794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.764 [2024-11-17 02:57:48.008834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.764 qpair failed and we were unable to recover it.
00:37:39.764 [2024-11-17 02:57:48.008982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.764 [2024-11-17 02:57:48.009021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.764 qpair failed and we were unable to recover it.
00:37:39.764 [2024-11-17 02:57:48.009206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.764 [2024-11-17 02:57:48.009241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.765 qpair failed and we were unable to recover it.
00:37:39.765 [2024-11-17 02:57:48.009356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.765 [2024-11-17 02:57:48.009392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.765 qpair failed and we were unable to recover it.
00:37:39.765 [2024-11-17 02:57:48.009566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.765 [2024-11-17 02:57:48.009603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.765 qpair failed and we were unable to recover it.
00:37:39.765 [2024-11-17 02:57:48.009807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.765 [2024-11-17 02:57:48.009866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.765 qpair failed and we were unable to recover it.
00:37:39.765 [2024-11-17 02:57:48.010046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.765 [2024-11-17 02:57:48.010088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.765 qpair failed and we were unable to recover it.
00:37:39.765 [2024-11-17 02:57:48.010252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.765 [2024-11-17 02:57:48.010302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.765 qpair failed and we were unable to recover it.
00:37:39.765 [2024-11-17 02:57:48.010461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.765 [2024-11-17 02:57:48.010500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.765 qpair failed and we were unable to recover it.
00:37:39.765 [2024-11-17 02:57:48.010634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.765 [2024-11-17 02:57:48.010675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.765 qpair failed and we were unable to recover it.
00:37:39.765 [2024-11-17 02:57:48.010877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.765 [2024-11-17 02:57:48.010946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.765 qpair failed and we were unable to recover it.
00:37:39.765 [2024-11-17 02:57:48.011133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.765 [2024-11-17 02:57:48.011186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.765 qpair failed and we were unable to recover it.
00:37:39.765 [2024-11-17 02:57:48.011320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.765 [2024-11-17 02:57:48.011354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.765 qpair failed and we were unable to recover it.
00:37:39.765 [2024-11-17 02:57:48.011491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.765 [2024-11-17 02:57:48.011526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.765 qpair failed and we were unable to recover it.
00:37:39.765 [2024-11-17 02:57:48.011693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.765 [2024-11-17 02:57:48.011767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.765 qpair failed and we were unable to recover it.
00:37:39.765 [2024-11-17 02:57:48.011908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.765 [2024-11-17 02:57:48.011964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.765 qpair failed and we were unable to recover it.
00:37:39.765 [2024-11-17 02:57:48.012160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.765 [2024-11-17 02:57:48.012210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.765 qpair failed and we were unable to recover it.
00:37:39.765 [2024-11-17 02:57:48.012396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.765 [2024-11-17 02:57:48.012445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.765 qpair failed and we were unable to recover it.
00:37:39.765 [2024-11-17 02:57:48.012558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.765 [2024-11-17 02:57:48.012595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.765 qpair failed and we were unable to recover it.
00:37:39.765 [2024-11-17 02:57:48.012735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.765 [2024-11-17 02:57:48.012789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.765 qpair failed and we were unable to recover it.
00:37:39.765 [2024-11-17 02:57:48.012997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.765 [2024-11-17 02:57:48.013032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.765 qpair failed and we were unable to recover it.
00:37:39.765 [2024-11-17 02:57:48.013174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.765 [2024-11-17 02:57:48.013210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.765 qpair failed and we were unable to recover it.
00:37:39.765 [2024-11-17 02:57:48.013312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.765 [2024-11-17 02:57:48.013347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.765 qpair failed and we were unable to recover it.
00:37:39.765 [2024-11-17 02:57:48.013482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.765 [2024-11-17 02:57:48.013516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.765 qpair failed and we were unable to recover it.
00:37:39.765 [2024-11-17 02:57:48.013691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.765 [2024-11-17 02:57:48.013744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.765 qpair failed and we were unable to recover it.
00:37:39.765 [2024-11-17 02:57:48.013897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.765 [2024-11-17 02:57:48.013937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.765 qpair failed and we were unable to recover it.
00:37:39.765 [2024-11-17 02:57:48.014060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.765 [2024-11-17 02:57:48.014137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.765 qpair failed and we were unable to recover it.
00:37:39.765 [2024-11-17 02:57:48.014251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.765 [2024-11-17 02:57:48.014287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.765 qpair failed and we were unable to recover it.
00:37:39.765 [2024-11-17 02:57:48.014472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.765 [2024-11-17 02:57:48.014513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.765 qpair failed and we were unable to recover it.
00:37:39.765 [2024-11-17 02:57:48.014653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.765 [2024-11-17 02:57:48.014688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.765 qpair failed and we were unable to recover it.
00:37:39.765 [2024-11-17 02:57:48.014859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.765 [2024-11-17 02:57:48.014916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.765 qpair failed and we were unable to recover it.
00:37:39.765 [2024-11-17 02:57:48.015058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.765 [2024-11-17 02:57:48.015112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.765 qpair failed and we were unable to recover it.
00:37:39.765 [2024-11-17 02:57:48.015271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.765 [2024-11-17 02:57:48.015306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.765 qpair failed and we were unable to recover it.
00:37:39.765 [2024-11-17 02:57:48.015470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.765 [2024-11-17 02:57:48.015504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.765 qpair failed and we were unable to recover it.
00:37:39.765 [2024-11-17 02:57:48.015646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.765 [2024-11-17 02:57:48.015680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.765 qpair failed and we were unable to recover it.
00:37:39.765 [2024-11-17 02:57:48.015834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.765 [2024-11-17 02:57:48.015900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.765 qpair failed and we were unable to recover it.
00:37:39.765 [2024-11-17 02:57:48.016075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.765 [2024-11-17 02:57:48.016132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.765 qpair failed and we were unable to recover it.
00:37:39.766 [2024-11-17 02:57:48.016294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.766 [2024-11-17 02:57:48.016344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.766 qpair failed and we were unable to recover it.
00:37:39.766 [2024-11-17 02:57:48.016456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.766 [2024-11-17 02:57:48.016491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.766 qpair failed and we were unable to recover it.
00:37:39.766 [2024-11-17 02:57:48.016627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.766 [2024-11-17 02:57:48.016661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.766 qpair failed and we were unable to recover it.
00:37:39.766 [2024-11-17 02:57:48.016763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.766 [2024-11-17 02:57:48.016814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.766 qpair failed and we were unable to recover it.
00:37:39.766 [2024-11-17 02:57:48.016937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.766 [2024-11-17 02:57:48.016976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.766 qpair failed and we were unable to recover it.
00:37:39.766 [2024-11-17 02:57:48.017165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.766 [2024-11-17 02:57:48.017204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.766 qpair failed and we were unable to recover it.
00:37:39.766 [2024-11-17 02:57:48.017323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.766 [2024-11-17 02:57:48.017360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.766 qpair failed and we were unable to recover it.
00:37:39.766 [2024-11-17 02:57:48.017529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.766 [2024-11-17 02:57:48.017567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.766 qpair failed and we were unable to recover it.
00:37:39.766 [2024-11-17 02:57:48.017724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.766 [2024-11-17 02:57:48.017760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.766 qpair failed and we were unable to recover it.
00:37:39.766 [2024-11-17 02:57:48.017862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.766 [2024-11-17 02:57:48.017916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.766 qpair failed and we were unable to recover it.
00:37:39.766 [2024-11-17 02:57:48.018091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.766 [2024-11-17 02:57:48.018174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.766 qpair failed and we were unable to recover it.
00:37:39.766 [2024-11-17 02:57:48.018320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.766 [2024-11-17 02:57:48.018358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.766 qpair failed and we were unable to recover it.
00:37:39.766 [2024-11-17 02:57:48.018522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.766 [2024-11-17 02:57:48.018558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.766 qpair failed and we were unable to recover it.
00:37:39.766 [2024-11-17 02:57:48.018697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.766 [2024-11-17 02:57:48.018731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.766 qpair failed and we were unable to recover it.
00:37:39.766 [2024-11-17 02:57:48.018834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.766 [2024-11-17 02:57:48.018869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.766 qpair failed and we were unable to recover it.
00:37:39.766 [2024-11-17 02:57:48.019005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.766 [2024-11-17 02:57:48.019039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.766 qpair failed and we were unable to recover it.
00:37:39.766 [2024-11-17 02:57:48.019164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.766 [2024-11-17 02:57:48.019201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.766 qpair failed and we were unable to recover it.
00:37:39.766 [2024-11-17 02:57:48.019326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.766 [2024-11-17 02:57:48.019375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.766 qpair failed and we were unable to recover it.
00:37:39.766 [2024-11-17 02:57:48.019529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.766 [2024-11-17 02:57:48.019568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.766 qpair failed and we were unable to recover it.
00:37:39.766 [2024-11-17 02:57:48.019758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.766 [2024-11-17 02:57:48.019812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.766 qpair failed and we were unable to recover it.
00:37:39.766 [2024-11-17 02:57:48.020011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.766 [2024-11-17 02:57:48.020050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.766 qpair failed and we were unable to recover it.
00:37:39.766 [2024-11-17 02:57:48.020187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.766 [2024-11-17 02:57:48.020226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.766 qpair failed and we were unable to recover it.
00:37:39.766 [2024-11-17 02:57:48.020344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.766 [2024-11-17 02:57:48.020381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.766 qpair failed and we were unable to recover it.
00:37:39.766 [2024-11-17 02:57:48.020604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.766 [2024-11-17 02:57:48.020655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.766 qpair failed and we were unable to recover it.
00:37:39.766 [2024-11-17 02:57:48.020964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.766 [2024-11-17 02:57:48.021020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.766 qpair failed and we were unable to recover it.
00:37:39.766 [2024-11-17 02:57:48.021181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.766 [2024-11-17 02:57:48.021216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.766 qpair failed and we were unable to recover it.
00:37:39.766 [2024-11-17 02:57:48.021323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.766 [2024-11-17 02:57:48.021358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.766 qpair failed and we were unable to recover it.
00:37:39.766 [2024-11-17 02:57:48.021492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.766 [2024-11-17 02:57:48.021531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.766 qpair failed and we were unable to recover it.
00:37:39.766 [2024-11-17 02:57:48.021679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.766 [2024-11-17 02:57:48.021718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.766 qpair failed and we were unable to recover it.
00:37:39.766 [2024-11-17 02:57:48.021819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.766 [2024-11-17 02:57:48.021856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.766 qpair failed and we were unable to recover it.
00:37:39.766 [2024-11-17 02:57:48.021987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.766 [2024-11-17 02:57:48.022022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.766 qpair failed and we were unable to recover it.
00:37:39.766 [2024-11-17 02:57:48.022181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.767 [2024-11-17 02:57:48.022238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.767 qpair failed and we were unable to recover it.
00:37:39.767 [2024-11-17 02:57:48.022361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.767 [2024-11-17 02:57:48.022421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.767 qpair failed and we were unable to recover it.
00:37:39.767 [2024-11-17 02:57:48.022600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.767 [2024-11-17 02:57:48.022638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.767 qpair failed and we were unable to recover it.
00:37:39.767 [2024-11-17 02:57:48.022808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.767 [2024-11-17 02:57:48.022847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.767 qpair failed and we were unable to recover it.
00:37:39.767 [2024-11-17 02:57:48.022993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.767 [2024-11-17 02:57:48.023031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.767 qpair failed and we were unable to recover it.
00:37:39.767 [2024-11-17 02:57:48.023215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.767 [2024-11-17 02:57:48.023250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.767 qpair failed and we were unable to recover it.
00:37:39.767 [2024-11-17 02:57:48.023526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.767 [2024-11-17 02:57:48.023577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.767 qpair failed and we were unable to recover it.
00:37:39.767 [2024-11-17 02:57:48.023798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.767 [2024-11-17 02:57:48.023855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.767 qpair failed and we were unable to recover it.
00:37:39.767 [2024-11-17 02:57:48.024004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.767 [2024-11-17 02:57:48.024042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.767 qpair failed and we were unable to recover it.
00:37:39.767 [2024-11-17 02:57:48.024181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.767 [2024-11-17 02:57:48.024216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.767 qpair failed and we were unable to recover it.
00:37:39.767 [2024-11-17 02:57:48.024354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.767 [2024-11-17 02:57:48.024389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.767 qpair failed and we were unable to recover it.
00:37:39.767 [2024-11-17 02:57:48.024564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.767 [2024-11-17 02:57:48.024603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.767 qpair failed and we were unable to recover it.
00:37:39.767 [2024-11-17 02:57:48.024890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.767 [2024-11-17 02:57:48.024942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.767 qpair failed and we were unable to recover it.
00:37:39.767 [2024-11-17 02:57:48.025066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.767 [2024-11-17 02:57:48.025113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.767 qpair failed and we were unable to recover it.
00:37:39.767 [2024-11-17 02:57:48.025239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.767 [2024-11-17 02:57:48.025274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.767 qpair failed and we were unable to recover it.
00:37:39.767 [2024-11-17 02:57:48.026208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.767 [2024-11-17 02:57:48.026249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.767 qpair failed and we were unable to recover it.
00:37:39.767 [2024-11-17 02:57:48.026387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.767 [2024-11-17 02:57:48.026422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.767 qpair failed and we were unable to recover it.
00:37:39.767 [2024-11-17 02:57:48.026609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.767 [2024-11-17 02:57:48.026651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.767 qpair failed and we were unable to recover it.
00:37:39.767 [2024-11-17 02:57:48.026801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.767 [2024-11-17 02:57:48.026839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.767 qpair failed and we were unable to recover it.
00:37:39.767 [2024-11-17 02:57:48.026990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.767 [2024-11-17 02:57:48.027040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.767 qpair failed and we were unable to recover it.
00:37:39.767 [2024-11-17 02:57:48.027189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.767 [2024-11-17 02:57:48.027225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.767 qpair failed and we were unable to recover it.
00:37:39.767 [2024-11-17 02:57:48.027361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.767 [2024-11-17 02:57:48.027407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.767 qpair failed and we were unable to recover it.
00:37:39.767 [2024-11-17 02:57:48.027563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.767 [2024-11-17 02:57:48.027602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.767 qpair failed and we were unable to recover it.
00:37:39.767 [2024-11-17 02:57:48.027789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.767 [2024-11-17 02:57:48.027838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.767 qpair failed and we were unable to recover it.
00:37:39.767 [2024-11-17 02:57:48.027961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.767 [2024-11-17 02:57:48.028002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.767 qpair failed and we were unable to recover it.
00:37:39.767 [2024-11-17 02:57:48.028136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.767 [2024-11-17 02:57:48.028195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.767 qpair failed and we were unable to recover it.
00:37:39.767 [2024-11-17 02:57:48.028354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.767 [2024-11-17 02:57:48.028400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.767 qpair failed and we were unable to recover it.
00:37:39.767 [2024-11-17 02:57:48.028513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.767 [2024-11-17 02:57:48.028547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.767 qpair failed and we were unable to recover it.
00:37:39.767 [2024-11-17 02:57:48.028680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.767 [2024-11-17 02:57:48.028723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.767 qpair failed and we were unable to recover it.
00:37:39.767 [2024-11-17 02:57:48.028830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.767 [2024-11-17 02:57:48.028864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.767 qpair failed and we were unable to recover it.
00:37:39.767 [2024-11-17 02:57:48.028998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.767 [2024-11-17 02:57:48.029034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.767 qpair failed and we were unable to recover it.
00:37:39.767 [2024-11-17 02:57:48.029214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.767 [2024-11-17 02:57:48.029264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.767 qpair failed and we were unable to recover it.
00:37:39.767 [2024-11-17 02:57:48.029398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.767 [2024-11-17 02:57:48.029439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.767 qpair failed and we were unable to recover it.
00:37:39.767 [2024-11-17 02:57:48.029605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.767 [2024-11-17 02:57:48.029641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.767 qpair failed and we were unable to recover it.
00:37:39.767 [2024-11-17 02:57:48.029781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.767 [2024-11-17 02:57:48.029816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.767 qpair failed and we were unable to recover it.
00:37:39.767 [2024-11-17 02:57:48.029959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.768 [2024-11-17 02:57:48.029996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.768 qpair failed and we were unable to recover it.
00:37:39.768 [2024-11-17 02:57:48.030114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.768 [2024-11-17 02:57:48.030153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.768 qpair failed and we were unable to recover it.
00:37:39.768 [2024-11-17 02:57:48.030267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.768 [2024-11-17 02:57:48.030304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.768 qpair failed and we were unable to recover it.
00:37:39.768 [2024-11-17 02:57:48.030488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.768 [2024-11-17 02:57:48.030532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.768 qpair failed and we were unable to recover it.
00:37:39.768 [2024-11-17 02:57:48.030777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.768 [2024-11-17 02:57:48.030835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.768 qpair failed and we were unable to recover it.
00:37:39.768 [2024-11-17 02:57:48.031003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.768 [2024-11-17 02:57:48.031049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.768 qpair failed and we were unable to recover it.
00:37:39.768 [2024-11-17 02:57:48.032027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.768 [2024-11-17 02:57:48.032073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.768 qpair failed and we were unable to recover it.
00:37:39.768 [2024-11-17 02:57:48.032266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.768 [2024-11-17 02:57:48.032302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.768 qpair failed and we were unable to recover it.
00:37:39.768 [2024-11-17 02:57:48.032445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.768 [2024-11-17 02:57:48.032481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.768 qpair failed and we were unable to recover it.
00:37:39.768 [2024-11-17 02:57:48.032625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.768 [2024-11-17 02:57:48.032660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.768 qpair failed and we were unable to recover it.
00:37:39.768 [2024-11-17 02:57:48.032803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.768 [2024-11-17 02:57:48.032839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.768 qpair failed and we were unable to recover it.
00:37:39.768 [2024-11-17 02:57:48.033005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.768 [2024-11-17 02:57:48.033041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.768 qpair failed and we were unable to recover it.
00:37:39.768 [2024-11-17 02:57:48.033193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.768 [2024-11-17 02:57:48.033230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.768 qpair failed and we were unable to recover it.
00:37:39.768 [2024-11-17 02:57:48.033334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.768 [2024-11-17 02:57:48.033370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.768 qpair failed and we were unable to recover it.
00:37:39.768 [2024-11-17 02:57:48.033481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.768 [2024-11-17 02:57:48.033516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.768 qpair failed and we were unable to recover it.
00:37:39.768 [2024-11-17 02:57:48.033651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.768 [2024-11-17 02:57:48.033686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.768 qpair failed and we were unable to recover it.
00:37:39.768 [2024-11-17 02:57:48.033805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.768 [2024-11-17 02:57:48.033841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.768 qpair failed and we were unable to recover it.
00:37:39.768 [2024-11-17 02:57:48.033983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.768 [2024-11-17 02:57:48.034018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.768 qpair failed and we were unable to recover it.
00:37:39.768 [2024-11-17 02:57:48.034165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.768 [2024-11-17 02:57:48.034201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.768 qpair failed and we were unable to recover it.
00:37:39.768 [2024-11-17 02:57:48.034375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.768 [2024-11-17 02:57:48.034416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.768 qpair failed and we were unable to recover it.
00:37:39.768 [2024-11-17 02:57:48.034564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.768 [2024-11-17 02:57:48.034600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.768 qpair failed and we were unable to recover it.
00:37:39.768 [2024-11-17 02:57:48.034742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.768 [2024-11-17 02:57:48.034779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.768 qpair failed and we were unable to recover it.
00:37:39.768 [2024-11-17 02:57:48.034912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.768 [2024-11-17 02:57:48.034966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.768 qpair failed and we were unable to recover it.
00:37:39.768 [2024-11-17 02:57:48.035155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.768 [2024-11-17 02:57:48.035190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.768 qpair failed and we were unable to recover it.
00:37:39.768 [2024-11-17 02:57:48.035330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.768 [2024-11-17 02:57:48.035366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.768 qpair failed and we were unable to recover it. 00:37:39.768 [2024-11-17 02:57:48.035504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.768 [2024-11-17 02:57:48.035540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.768 qpair failed and we were unable to recover it. 00:37:39.768 [2024-11-17 02:57:48.035644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.768 [2024-11-17 02:57:48.035679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.768 qpair failed and we were unable to recover it. 00:37:39.768 [2024-11-17 02:57:48.035796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.768 [2024-11-17 02:57:48.035831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.768 qpair failed and we were unable to recover it. 00:37:39.768 [2024-11-17 02:57:48.036007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.768 [2024-11-17 02:57:48.036073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.768 qpair failed and we were unable to recover it. 
00:37:39.768 [2024-11-17 02:57:48.036239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.768 [2024-11-17 02:57:48.036294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.768 qpair failed and we were unable to recover it. 00:37:39.768 [2024-11-17 02:57:48.036466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.768 [2024-11-17 02:57:48.036503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.768 qpair failed and we were unable to recover it. 00:37:39.768 [2024-11-17 02:57:48.036677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.768 [2024-11-17 02:57:48.036712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.768 qpair failed and we were unable to recover it. 00:37:39.768 [2024-11-17 02:57:48.036866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.768 [2024-11-17 02:57:48.036906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.768 qpair failed and we were unable to recover it. 00:37:39.768 [2024-11-17 02:57:48.037043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.769 [2024-11-17 02:57:48.037089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.769 qpair failed and we were unable to recover it. 
00:37:39.769 [2024-11-17 02:57:48.037236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.769 [2024-11-17 02:57:48.037273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.769 qpair failed and we were unable to recover it. 00:37:39.769 [2024-11-17 02:57:48.037427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.769 [2024-11-17 02:57:48.037463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.769 qpair failed and we were unable to recover it. 00:37:39.769 [2024-11-17 02:57:48.037622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.769 [2024-11-17 02:57:48.037657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.769 qpair failed and we were unable to recover it. 00:37:39.769 [2024-11-17 02:57:48.037823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.769 [2024-11-17 02:57:48.037858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.769 qpair failed and we were unable to recover it. 00:37:39.769 [2024-11-17 02:57:48.038017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.769 [2024-11-17 02:57:48.038053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.769 qpair failed and we were unable to recover it. 
00:37:39.769 [2024-11-17 02:57:48.038184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.769 [2024-11-17 02:57:48.038219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.769 qpair failed and we were unable to recover it. 00:37:39.769 [2024-11-17 02:57:48.038350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.769 [2024-11-17 02:57:48.038395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.769 qpair failed and we were unable to recover it. 00:37:39.769 [2024-11-17 02:57:48.038507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.769 [2024-11-17 02:57:48.038542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.769 qpair failed and we were unable to recover it. 00:37:39.769 [2024-11-17 02:57:48.038699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.769 [2024-11-17 02:57:48.038750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.769 qpair failed and we were unable to recover it. 00:37:39.769 [2024-11-17 02:57:48.038899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.769 [2024-11-17 02:57:48.038941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.769 qpair failed and we were unable to recover it. 
00:37:39.769 [2024-11-17 02:57:48.039143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.769 [2024-11-17 02:57:48.039185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.769 qpair failed and we were unable to recover it. 00:37:39.769 [2024-11-17 02:57:48.039325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.769 [2024-11-17 02:57:48.039365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.769 qpair failed and we were unable to recover it. 00:37:39.769 [2024-11-17 02:57:48.039513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.769 [2024-11-17 02:57:48.039552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.769 qpair failed and we were unable to recover it. 00:37:39.769 [2024-11-17 02:57:48.039669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.769 [2024-11-17 02:57:48.039709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.769 qpair failed and we were unable to recover it. 00:37:39.769 [2024-11-17 02:57:48.039862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.769 [2024-11-17 02:57:48.039901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.769 qpair failed and we were unable to recover it. 
00:37:39.769 [2024-11-17 02:57:48.040021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.769 [2024-11-17 02:57:48.040056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.769 qpair failed and we were unable to recover it. 00:37:39.769 [2024-11-17 02:57:48.040223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.769 [2024-11-17 02:57:48.040271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.769 qpair failed and we were unable to recover it. 00:37:39.769 [2024-11-17 02:57:48.040399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.769 [2024-11-17 02:57:48.040449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.769 qpair failed and we were unable to recover it. 00:37:39.769 [2024-11-17 02:57:48.040610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.769 [2024-11-17 02:57:48.040667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.769 qpair failed and we were unable to recover it. 00:37:39.769 [2024-11-17 02:57:48.040825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.769 [2024-11-17 02:57:48.040879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.769 qpair failed and we were unable to recover it. 
00:37:39.769 [2024-11-17 02:57:48.041009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.769 [2024-11-17 02:57:48.041045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.769 qpair failed and we were unable to recover it. 00:37:39.769 [2024-11-17 02:57:48.041260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.769 [2024-11-17 02:57:48.041303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.769 qpair failed and we were unable to recover it. 00:37:39.769 [2024-11-17 02:57:48.041445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.769 [2024-11-17 02:57:48.041481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.769 qpair failed and we were unable to recover it. 00:37:39.769 [2024-11-17 02:57:48.041743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.769 [2024-11-17 02:57:48.041803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.769 qpair failed and we were unable to recover it. 00:37:39.769 [2024-11-17 02:57:48.041964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.769 [2024-11-17 02:57:48.041999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.769 qpair failed and we were unable to recover it. 
00:37:39.769 [2024-11-17 02:57:48.042144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.769 [2024-11-17 02:57:48.042181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.769 qpair failed and we were unable to recover it. 00:37:39.769 [2024-11-17 02:57:48.042321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.769 [2024-11-17 02:57:48.042393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.769 qpair failed and we were unable to recover it. 00:37:39.769 [2024-11-17 02:57:48.042555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.769 [2024-11-17 02:57:48.042594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.769 qpair failed and we were unable to recover it. 00:37:39.769 [2024-11-17 02:57:48.042823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.769 [2024-11-17 02:57:48.042897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.769 qpair failed and we were unable to recover it. 00:37:39.769 [2024-11-17 02:57:48.043052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.769 [2024-11-17 02:57:48.043112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.769 qpair failed and we were unable to recover it. 
00:37:39.769 [2024-11-17 02:57:48.043266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.769 [2024-11-17 02:57:48.043306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.769 qpair failed and we were unable to recover it. 00:37:39.770 [2024-11-17 02:57:48.043499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.770 [2024-11-17 02:57:48.043555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.770 qpair failed and we were unable to recover it. 00:37:39.770 [2024-11-17 02:57:48.043806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.770 [2024-11-17 02:57:48.043866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.770 qpair failed and we were unable to recover it. 00:37:39.770 [2024-11-17 02:57:48.044049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.770 [2024-11-17 02:57:48.044091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.770 qpair failed and we were unable to recover it. 00:37:39.770 [2024-11-17 02:57:48.044223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.770 [2024-11-17 02:57:48.044259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.770 qpair failed and we were unable to recover it. 
00:37:39.770 [2024-11-17 02:57:48.044418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.770 [2024-11-17 02:57:48.044456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.770 qpair failed and we were unable to recover it. 00:37:39.770 [2024-11-17 02:57:48.044771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.770 [2024-11-17 02:57:48.044834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.770 qpair failed and we were unable to recover it. 00:37:39.770 [2024-11-17 02:57:48.044984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.770 [2024-11-17 02:57:48.045023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.770 qpair failed and we were unable to recover it. 00:37:39.770 [2024-11-17 02:57:48.045213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.770 [2024-11-17 02:57:48.045263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.770 qpair failed and we were unable to recover it. 00:37:39.770 [2024-11-17 02:57:48.045399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.770 [2024-11-17 02:57:48.045438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.770 qpair failed and we were unable to recover it. 
00:37:39.770 [2024-11-17 02:57:48.045724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.770 [2024-11-17 02:57:48.045782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.770 qpair failed and we were unable to recover it. 00:37:39.770 [2024-11-17 02:57:48.045990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.770 [2024-11-17 02:57:48.046052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.770 qpair failed and we were unable to recover it. 00:37:39.770 [2024-11-17 02:57:48.046234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.770 [2024-11-17 02:57:48.046269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.770 qpair failed and we were unable to recover it. 00:37:39.770 [2024-11-17 02:57:48.046414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.770 [2024-11-17 02:57:48.046471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.770 qpair failed and we were unable to recover it. 00:37:39.770 [2024-11-17 02:57:48.046682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.770 [2024-11-17 02:57:48.046740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.770 qpair failed and we were unable to recover it. 
00:37:39.770 [2024-11-17 02:57:48.046891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.770 [2024-11-17 02:57:48.046932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.770 qpair failed and we were unable to recover it. 00:37:39.770 [2024-11-17 02:57:48.047139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.770 [2024-11-17 02:57:48.047176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.770 qpair failed and we were unable to recover it. 00:37:39.770 [2024-11-17 02:57:48.047338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.770 [2024-11-17 02:57:48.047373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.770 qpair failed and we were unable to recover it. 00:37:39.770 [2024-11-17 02:57:48.047519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.770 [2024-11-17 02:57:48.047557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.770 qpair failed and we were unable to recover it. 00:37:39.770 [2024-11-17 02:57:48.047705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.770 [2024-11-17 02:57:48.047757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.770 qpair failed and we were unable to recover it. 
00:37:39.770 [2024-11-17 02:57:48.047924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.770 [2024-11-17 02:57:48.047964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.770 qpair failed and we were unable to recover it. 00:37:39.770 [2024-11-17 02:57:48.048121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.770 [2024-11-17 02:57:48.048163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.770 qpair failed and we were unable to recover it. 00:37:39.770 [2024-11-17 02:57:48.048393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.770 [2024-11-17 02:57:48.048430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.770 qpair failed and we were unable to recover it. 00:37:39.770 [2024-11-17 02:57:48.048678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.770 [2024-11-17 02:57:48.048717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.770 qpair failed and we were unable to recover it. 00:37:39.770 [2024-11-17 02:57:48.048855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.770 [2024-11-17 02:57:48.048922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.770 qpair failed and we were unable to recover it. 
00:37:39.770 [2024-11-17 02:57:48.049090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.770 [2024-11-17 02:57:48.049139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.770 qpair failed and we were unable to recover it. 00:37:39.770 [2024-11-17 02:57:48.049258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.770 [2024-11-17 02:57:48.049292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.770 qpair failed and we were unable to recover it. 00:37:39.770 [2024-11-17 02:57:48.049472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.770 [2024-11-17 02:57:48.049508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.770 qpair failed and we were unable to recover it. 00:37:39.770 [2024-11-17 02:57:48.049616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.770 [2024-11-17 02:57:48.049678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.770 qpair failed and we were unable to recover it. 00:37:39.770 [2024-11-17 02:57:48.049885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.770 [2024-11-17 02:57:48.049949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.770 qpair failed and we were unable to recover it. 
00:37:39.770 [2024-11-17 02:57:48.050133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.770 [2024-11-17 02:57:48.050190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.770 qpair failed and we were unable to recover it. 00:37:39.770 [2024-11-17 02:57:48.050308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.770 [2024-11-17 02:57:48.050342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.770 qpair failed and we were unable to recover it. 00:37:39.770 [2024-11-17 02:57:48.050490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.770 [2024-11-17 02:57:48.050526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.770 qpair failed and we were unable to recover it. 00:37:39.770 [2024-11-17 02:57:48.050656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.770 [2024-11-17 02:57:48.050695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.770 qpair failed and we were unable to recover it. 00:37:39.770 [2024-11-17 02:57:48.050854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.770 [2024-11-17 02:57:48.050892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.770 qpair failed and we were unable to recover it. 
00:37:39.770 [2024-11-17 02:57:48.051070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.770 [2024-11-17 02:57:48.051123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.770 qpair failed and we were unable to recover it. 00:37:39.770 [2024-11-17 02:57:48.051254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.771 [2024-11-17 02:57:48.051288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.771 qpair failed and we were unable to recover it. 00:37:39.771 [2024-11-17 02:57:48.051427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.771 [2024-11-17 02:57:48.051466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.771 qpair failed and we were unable to recover it. 00:37:39.771 [2024-11-17 02:57:48.051618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.771 [2024-11-17 02:57:48.051656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.771 qpair failed and we were unable to recover it. 00:37:39.771 [2024-11-17 02:57:48.051825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.771 [2024-11-17 02:57:48.051862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.771 qpair failed and we were unable to recover it. 
00:37:39.771 [2024-11-17 02:57:48.052059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.771 [2024-11-17 02:57:48.052122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.771 qpair failed and we were unable to recover it. 00:37:39.771 [2024-11-17 02:57:48.052286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.771 [2024-11-17 02:57:48.052324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.771 qpair failed and we were unable to recover it. 00:37:39.771 [2024-11-17 02:57:48.052458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.771 [2024-11-17 02:57:48.052495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.771 qpair failed and we were unable to recover it. 00:37:39.771 [2024-11-17 02:57:48.052630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.771 [2024-11-17 02:57:48.052666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.771 qpair failed and we were unable to recover it. 00:37:39.771 [2024-11-17 02:57:48.052782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.771 [2024-11-17 02:57:48.052835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.771 qpair failed and we were unable to recover it. 
00:37:39.771 [2024-11-17 02:57:48.053006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.771 [2024-11-17 02:57:48.053058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.771 qpair failed and we were unable to recover it. 00:37:39.771 [2024-11-17 02:57:48.053207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.771 [2024-11-17 02:57:48.053243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.771 qpair failed and we were unable to recover it. 00:37:39.771 [2024-11-17 02:57:48.053373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.771 [2024-11-17 02:57:48.053444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.771 qpair failed and we were unable to recover it. 00:37:39.771 [2024-11-17 02:57:48.053685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.771 [2024-11-17 02:57:48.053752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.771 qpair failed and we were unable to recover it. 00:37:39.771 [2024-11-17 02:57:48.053876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.771 [2024-11-17 02:57:48.053913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.771 qpair failed and we were unable to recover it. 
00:37:39.771 [2024-11-17 02:57:48.054049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.771 [2024-11-17 02:57:48.054090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.771 qpair failed and we were unable to recover it. 00:37:39.771 [2024-11-17 02:57:48.054229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.771 [2024-11-17 02:57:48.054268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.771 qpair failed and we were unable to recover it. 00:37:39.771 [2024-11-17 02:57:48.054435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.771 [2024-11-17 02:57:48.054473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.771 qpair failed and we were unable to recover it. 00:37:39.771 [2024-11-17 02:57:48.054602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.771 [2024-11-17 02:57:48.054647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.771 qpair failed and we were unable to recover it. 00:37:39.771 [2024-11-17 02:57:48.054784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.771 [2024-11-17 02:57:48.054828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.771 qpair failed and we were unable to recover it. 
00:37:39.771 [2024-11-17 02:57:48.054972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.771 [2024-11-17 02:57:48.055014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.771 qpair failed and we were unable to recover it. 00:37:39.771 [2024-11-17 02:57:48.055194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.771 [2024-11-17 02:57:48.055231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.771 qpair failed and we were unable to recover it. 00:37:39.771 [2024-11-17 02:57:48.055371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.771 [2024-11-17 02:57:48.055409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.771 qpair failed and we were unable to recover it. 00:37:39.771 [2024-11-17 02:57:48.055537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.771 [2024-11-17 02:57:48.055585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.771 qpair failed and we were unable to recover it. 00:37:39.771 [2024-11-17 02:57:48.055731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.771 [2024-11-17 02:57:48.055790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.771 qpair failed and we were unable to recover it. 
00:37:39.771 [2024-11-17 02:57:48.055941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.771 [2024-11-17 02:57:48.055982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.771 qpair failed and we were unable to recover it. 00:37:39.771 [2024-11-17 02:57:48.056160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.771 [2024-11-17 02:57:48.056205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.771 qpair failed and we were unable to recover it. 00:37:39.771 [2024-11-17 02:57:48.056323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.771 [2024-11-17 02:57:48.056360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.771 qpair failed and we were unable to recover it. 00:37:39.771 [2024-11-17 02:57:48.056524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.771 [2024-11-17 02:57:48.056579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.771 qpair failed and we were unable to recover it. 00:37:39.771 [2024-11-17 02:57:48.056735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.771 [2024-11-17 02:57:48.056799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.771 qpair failed and we were unable to recover it. 
00:37:39.771 [2024-11-17 02:57:48.056953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.771 [2024-11-17 02:57:48.056990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.771 qpair failed and we were unable to recover it. 00:37:39.771 [2024-11-17 02:57:48.057111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.771 [2024-11-17 02:57:48.057147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.771 qpair failed and we were unable to recover it. 00:37:39.771 [2024-11-17 02:57:48.057285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.771 [2024-11-17 02:57:48.057320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.771 qpair failed and we were unable to recover it. 00:37:39.771 [2024-11-17 02:57:48.057453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.771 [2024-11-17 02:57:48.057496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.771 qpair failed and we were unable to recover it. 00:37:39.771 [2024-11-17 02:57:48.057675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.771 [2024-11-17 02:57:48.057714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.771 qpair failed and we were unable to recover it. 
00:37:39.771 [2024-11-17 02:57:48.057857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.771 [2024-11-17 02:57:48.057921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.772 qpair failed and we were unable to recover it. 00:37:39.772 [2024-11-17 02:57:48.058053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.772 [2024-11-17 02:57:48.058110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.772 qpair failed and we were unable to recover it. 00:37:39.772 [2024-11-17 02:57:48.058234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.772 [2024-11-17 02:57:48.058269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.772 qpair failed and we were unable to recover it. 00:37:39.772 [2024-11-17 02:57:48.058442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.772 [2024-11-17 02:57:48.058505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.772 qpair failed and we were unable to recover it. 00:37:39.772 [2024-11-17 02:57:48.058671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.772 [2024-11-17 02:57:48.058711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.772 qpair failed and we were unable to recover it. 
00:37:39.772 [2024-11-17 02:57:48.058875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.772 [2024-11-17 02:57:48.058915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.772 qpair failed and we were unable to recover it. 00:37:39.772 [2024-11-17 02:57:48.059060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.772 [2024-11-17 02:57:48.059122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.772 qpair failed and we were unable to recover it. 00:37:39.772 [2024-11-17 02:57:48.059254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.772 [2024-11-17 02:57:48.059300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.772 qpair failed and we were unable to recover it. 00:37:39.772 [2024-11-17 02:57:48.059439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.772 [2024-11-17 02:57:48.059475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.772 qpair failed and we were unable to recover it. 00:37:39.772 [2024-11-17 02:57:48.059641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.772 [2024-11-17 02:57:48.059675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.772 qpair failed and we were unable to recover it. 
00:37:39.772 [2024-11-17 02:57:48.059931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.772 [2024-11-17 02:57:48.059992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.772 qpair failed and we were unable to recover it. 00:37:39.772 [2024-11-17 02:57:48.060136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.772 [2024-11-17 02:57:48.060193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.772 qpair failed and we were unable to recover it. 00:37:39.772 [2024-11-17 02:57:48.060318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.772 [2024-11-17 02:57:48.060355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.772 qpair failed and we were unable to recover it. 00:37:39.772 [2024-11-17 02:57:48.060501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.772 [2024-11-17 02:57:48.060556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.772 qpair failed and we were unable to recover it. 00:37:39.772 [2024-11-17 02:57:48.060729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.772 [2024-11-17 02:57:48.060787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.772 qpair failed and we were unable to recover it. 
00:37:39.772 [2024-11-17 02:57:48.060917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.772 [2024-11-17 02:57:48.060956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.772 qpair failed and we were unable to recover it. 00:37:39.772 [2024-11-17 02:57:48.061127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.772 [2024-11-17 02:57:48.061162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.772 qpair failed and we were unable to recover it. 00:37:39.772 [2024-11-17 02:57:48.061277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.772 [2024-11-17 02:57:48.061312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.772 qpair failed and we were unable to recover it. 00:37:39.772 [2024-11-17 02:57:48.061475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.772 [2024-11-17 02:57:48.061515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.772 qpair failed and we were unable to recover it. 00:37:39.772 [2024-11-17 02:57:48.061719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.772 [2024-11-17 02:57:48.061763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.772 qpair failed and we were unable to recover it. 
00:37:39.772 [2024-11-17 02:57:48.061871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.772 [2024-11-17 02:57:48.061911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.772 qpair failed and we were unable to recover it. 00:37:39.772 [2024-11-17 02:57:48.062041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.772 [2024-11-17 02:57:48.062106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.772 qpair failed and we were unable to recover it. 00:37:39.772 [2024-11-17 02:57:48.062261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.772 [2024-11-17 02:57:48.062296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.772 qpair failed and we were unable to recover it. 00:37:39.772 [2024-11-17 02:57:48.062406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.772 [2024-11-17 02:57:48.062442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.772 qpair failed and we were unable to recover it. 00:37:39.772 [2024-11-17 02:57:48.062568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.772 [2024-11-17 02:57:48.062604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.772 qpair failed and we were unable to recover it. 
00:37:39.772 [2024-11-17 02:57:48.062763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.772 [2024-11-17 02:57:48.062800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.772 qpair failed and we were unable to recover it. 00:37:39.772 [2024-11-17 02:57:48.062945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.772 [2024-11-17 02:57:48.062983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.772 qpair failed and we were unable to recover it. 00:37:39.772 [2024-11-17 02:57:48.063117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.772 [2024-11-17 02:57:48.063150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.772 qpair failed and we were unable to recover it. 00:37:39.772 [2024-11-17 02:57:48.063268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.772 [2024-11-17 02:57:48.063300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.772 qpair failed and we were unable to recover it. 00:37:39.772 [2024-11-17 02:57:48.063467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.772 [2024-11-17 02:57:48.063506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.772 qpair failed and we were unable to recover it. 
00:37:39.772 [2024-11-17 02:57:48.063657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.772 [2024-11-17 02:57:48.063693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.772 qpair failed and we were unable to recover it. 00:37:39.772 [2024-11-17 02:57:48.063838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.772 [2024-11-17 02:57:48.063882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.772 qpair failed and we were unable to recover it. 00:37:39.772 [2024-11-17 02:57:48.064038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.772 [2024-11-17 02:57:48.064072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.772 qpair failed and we were unable to recover it. 00:37:39.772 [2024-11-17 02:57:48.064215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.772 [2024-11-17 02:57:48.064264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.772 qpair failed and we were unable to recover it. 00:37:39.772 [2024-11-17 02:57:48.064389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.772 [2024-11-17 02:57:48.064426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.773 qpair failed and we were unable to recover it. 
00:37:39.773 [2024-11-17 02:57:48.064636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.773 [2024-11-17 02:57:48.064703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.773 qpair failed and we were unable to recover it. 00:37:39.773 [2024-11-17 02:57:48.064868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.773 [2024-11-17 02:57:48.064907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.773 qpair failed and we were unable to recover it. 00:37:39.773 [2024-11-17 02:57:48.065092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.773 [2024-11-17 02:57:48.065139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.773 qpair failed and we were unable to recover it. 00:37:39.773 [2024-11-17 02:57:48.066042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.773 [2024-11-17 02:57:48.066085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.773 qpair failed and we were unable to recover it. 00:37:39.773 [2024-11-17 02:57:48.066247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.773 [2024-11-17 02:57:48.066282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.773 qpair failed and we were unable to recover it. 
00:37:39.773 [2024-11-17 02:57:48.066386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.773 [2024-11-17 02:57:48.066420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.773 qpair failed and we were unable to recover it. 00:37:39.773 [2024-11-17 02:57:48.066578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.773 [2024-11-17 02:57:48.066619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.773 qpair failed and we were unable to recover it. 00:37:39.773 [2024-11-17 02:57:48.066773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.773 [2024-11-17 02:57:48.066828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.773 qpair failed and we were unable to recover it. 00:37:39.773 [2024-11-17 02:57:48.066978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.773 [2024-11-17 02:57:48.067027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.773 qpair failed and we were unable to recover it. 00:37:39.773 [2024-11-17 02:57:48.067149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.773 [2024-11-17 02:57:48.067200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.773 qpair failed and we were unable to recover it. 
00:37:39.773 [2024-11-17 02:57:48.067320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.773 [2024-11-17 02:57:48.067354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.773 qpair failed and we were unable to recover it. 00:37:39.773 [2024-11-17 02:57:48.067477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.773 [2024-11-17 02:57:48.067535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.773 qpair failed and we were unable to recover it. 00:37:39.773 [2024-11-17 02:57:48.067687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.773 [2024-11-17 02:57:48.067724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.773 qpair failed and we were unable to recover it. 00:37:39.773 [2024-11-17 02:57:48.067954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.773 [2024-11-17 02:57:48.067991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.773 qpair failed and we were unable to recover it. 00:37:39.773 [2024-11-17 02:57:48.068128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.773 [2024-11-17 02:57:48.068178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.773 qpair failed and we were unable to recover it. 
00:37:39.773 [2024-11-17 02:57:48.068303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.773 [2024-11-17 02:57:48.068351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.773 qpair failed and we were unable to recover it. 00:37:39.773 [2024-11-17 02:57:48.068558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.773 [2024-11-17 02:57:48.068632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.773 qpair failed and we were unable to recover it. 00:37:39.773 [2024-11-17 02:57:48.068832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.773 [2024-11-17 02:57:48.068886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.773 qpair failed and we were unable to recover it. 00:37:39.773 [2024-11-17 02:57:48.069048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.773 [2024-11-17 02:57:48.069089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.773 qpair failed and we were unable to recover it. 00:37:39.773 [2024-11-17 02:57:48.069213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.773 [2024-11-17 02:57:48.069248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.773 qpair failed and we were unable to recover it. 
00:37:39.773 [2024-11-17 02:57:48.069415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.773 [2024-11-17 02:57:48.069455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.773 qpair failed and we were unable to recover it. 00:37:39.773 [2024-11-17 02:57:48.069629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.773 [2024-11-17 02:57:48.069666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.773 qpair failed and we were unable to recover it. 00:37:39.773 [2024-11-17 02:57:48.069823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.773 [2024-11-17 02:57:48.069869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.773 qpair failed and we were unable to recover it. 00:37:39.773 [2024-11-17 02:57:48.070107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.773 [2024-11-17 02:57:48.070143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.773 qpair failed and we were unable to recover it. 00:37:39.773 [2024-11-17 02:57:48.070351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.773 [2024-11-17 02:57:48.070393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.773 qpair failed and we were unable to recover it. 
00:37:39.773 [2024-11-17 02:57:48.070520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.773 [2024-11-17 02:57:48.070571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.773 qpair failed and we were unable to recover it. 00:37:39.773 [2024-11-17 02:57:48.070759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.773 [2024-11-17 02:57:48.070823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.773 qpair failed and we were unable to recover it. 00:37:39.773 [2024-11-17 02:57:48.070980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.773 [2024-11-17 02:57:48.071020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.773 qpair failed and we were unable to recover it. 00:37:39.773 [2024-11-17 02:57:48.071160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.773 [2024-11-17 02:57:48.071196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.773 qpair failed and we were unable to recover it. 00:37:39.773 [2024-11-17 02:57:48.071328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.773 [2024-11-17 02:57:48.071364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.773 qpair failed and we were unable to recover it. 
00:37:39.773 [2024-11-17 02:57:48.071519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.773 [2024-11-17 02:57:48.071554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.773 qpair failed and we were unable to recover it. 00:37:39.773 [2024-11-17 02:57:48.071686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.773 [2024-11-17 02:57:48.071731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.773 qpair failed and we were unable to recover it. 00:37:39.773 [2024-11-17 02:57:48.071862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.774 [2024-11-17 02:57:48.071901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.774 qpair failed and we were unable to recover it. 00:37:39.774 [2024-11-17 02:57:48.072088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.774 [2024-11-17 02:57:48.072163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.774 qpair failed and we were unable to recover it. 00:37:39.774 [2024-11-17 02:57:48.072385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.774 [2024-11-17 02:57:48.072418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.774 qpair failed and we were unable to recover it. 
00:37:39.774 [2024-11-17 02:57:48.072636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.774 [2024-11-17 02:57:48.072683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.774 qpair failed and we were unable to recover it. 00:37:39.774 [2024-11-17 02:57:48.072862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.774 [2024-11-17 02:57:48.072905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.774 qpair failed and we were unable to recover it. 00:37:39.774 [2024-11-17 02:57:48.073060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.774 [2024-11-17 02:57:48.073093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.774 qpair failed and we were unable to recover it. 00:37:39.774 [2024-11-17 02:57:48.073207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.774 [2024-11-17 02:57:48.073241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.774 qpair failed and we were unable to recover it. 00:37:39.774 [2024-11-17 02:57:48.073352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.774 [2024-11-17 02:57:48.073397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.774 qpair failed and we were unable to recover it. 
00:37:39.774 [2024-11-17 02:57:48.073585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.774 [2024-11-17 02:57:48.073662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.774 qpair failed and we were unable to recover it. 00:37:39.774 [2024-11-17 02:57:48.073810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.774 [2024-11-17 02:57:48.073851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.774 qpair failed and we were unable to recover it. 00:37:39.774 [2024-11-17 02:57:48.074016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.774 [2024-11-17 02:57:48.074055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.774 qpair failed and we were unable to recover it. 00:37:39.774 [2024-11-17 02:57:48.074214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.774 [2024-11-17 02:57:48.074250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.774 qpair failed and we were unable to recover it. 00:37:39.774 [2024-11-17 02:57:48.074399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.774 [2024-11-17 02:57:48.074433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.774 qpair failed and we were unable to recover it. 
00:37:39.774 [2024-11-17 02:57:48.074565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.774 [2024-11-17 02:57:48.074612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.774 qpair failed and we were unable to recover it.
00:37:39.774 [2024-11-17 02:57:48.074777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.774 [2024-11-17 02:57:48.074840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.774 qpair failed and we were unable to recover it.
00:37:39.774 [2024-11-17 02:57:48.074983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.774 [2024-11-17 02:57:48.075022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.774 qpair failed and we were unable to recover it.
00:37:39.774 [2024-11-17 02:57:48.075170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.774 [2024-11-17 02:57:48.075221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.774 qpair failed and we were unable to recover it.
00:37:39.774 [2024-11-17 02:57:48.075351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.774 [2024-11-17 02:57:48.075399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.774 qpair failed and we were unable to recover it.
00:37:39.774 [2024-11-17 02:57:48.076381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.774 [2024-11-17 02:57:48.076422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.774 qpair failed and we were unable to recover it.
00:37:39.774 [2024-11-17 02:57:48.076657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.774 [2024-11-17 02:57:48.076716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.774 qpair failed and we were unable to recover it.
00:37:39.774 [2024-11-17 02:57:48.076838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.774 [2024-11-17 02:57:48.076879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.774 qpair failed and we were unable to recover it.
00:37:39.774 [2024-11-17 02:57:48.077062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.774 [2024-11-17 02:57:48.077111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.774 qpair failed and we were unable to recover it.
00:37:39.774 [2024-11-17 02:57:48.077241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.774 [2024-11-17 02:57:48.077275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.774 qpair failed and we were unable to recover it.
00:37:39.774 [2024-11-17 02:57:48.077388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.774 [2024-11-17 02:57:48.077422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.774 qpair failed and we were unable to recover it.
00:37:39.774 [2024-11-17 02:57:48.077558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.774 [2024-11-17 02:57:48.077596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.774 qpair failed and we were unable to recover it.
00:37:39.774 [2024-11-17 02:57:48.077792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.774 [2024-11-17 02:57:48.077833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.774 qpair failed and we were unable to recover it.
00:37:39.774 [2024-11-17 02:57:48.077955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.774 [2024-11-17 02:57:48.077993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.774 qpair failed and we were unable to recover it.
00:37:39.774 [2024-11-17 02:57:48.078143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.774 [2024-11-17 02:57:48.078209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.774 qpair failed and we were unable to recover it.
00:37:39.774 [2024-11-17 02:57:48.078336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.774 [2024-11-17 02:57:48.078372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.774 qpair failed and we were unable to recover it.
00:37:39.774 [2024-11-17 02:57:48.078494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.774 [2024-11-17 02:57:48.078554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.774 qpair failed and we were unable to recover it.
00:37:39.774 [2024-11-17 02:57:48.078743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.774 [2024-11-17 02:57:48.078795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.774 qpair failed and we were unable to recover it.
00:37:39.774 [2024-11-17 02:57:48.078943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.775 [2024-11-17 02:57:48.078982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.775 qpair failed and we were unable to recover it.
00:37:39.775 [2024-11-17 02:57:48.079142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.775 [2024-11-17 02:57:48.079192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.775 qpair failed and we were unable to recover it.
00:37:39.775 [2024-11-17 02:57:48.079308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.775 [2024-11-17 02:57:48.079347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.775 qpair failed and we were unable to recover it.
00:37:39.775 [2024-11-17 02:57:48.079604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.775 [2024-11-17 02:57:48.079642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.775 qpair failed and we were unable to recover it.
00:37:39.775 [2024-11-17 02:57:48.079790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.775 [2024-11-17 02:57:48.079827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.775 qpair failed and we were unable to recover it.
00:37:39.775 [2024-11-17 02:57:48.079946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.775 [2024-11-17 02:57:48.079986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.775 qpair failed and we were unable to recover it.
00:37:39.775 [2024-11-17 02:57:48.080138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.775 [2024-11-17 02:57:48.080172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.775 qpair failed and we were unable to recover it.
00:37:39.775 [2024-11-17 02:57:48.080298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.775 [2024-11-17 02:57:48.080342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.775 qpair failed and we were unable to recover it.
00:37:39.775 [2024-11-17 02:57:48.080516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.775 [2024-11-17 02:57:48.080552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.775 qpair failed and we were unable to recover it.
00:37:39.775 [2024-11-17 02:57:48.080679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.775 [2024-11-17 02:57:48.080718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.775 qpair failed and we were unable to recover it.
00:37:39.775 [2024-11-17 02:57:48.080849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.775 [2024-11-17 02:57:48.080889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.775 qpair failed and we were unable to recover it.
00:37:39.775 [2024-11-17 02:57:48.081057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.775 [2024-11-17 02:57:48.081105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.775 qpair failed and we were unable to recover it.
00:37:39.775 [2024-11-17 02:57:48.081242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.775 [2024-11-17 02:57:48.081285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.775 qpair failed and we were unable to recover it.
00:37:39.775 [2024-11-17 02:57:48.081442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.775 [2024-11-17 02:57:48.081490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.775 qpair failed and we were unable to recover it.
00:37:39.775 [2024-11-17 02:57:48.081722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.775 [2024-11-17 02:57:48.081763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.775 qpair failed and we were unable to recover it.
00:37:39.775 [2024-11-17 02:57:48.081993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.775 [2024-11-17 02:57:48.082026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.775 qpair failed and we were unable to recover it.
00:37:39.775 [2024-11-17 02:57:48.082153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.775 [2024-11-17 02:57:48.082188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.775 qpair failed and we were unable to recover it.
00:37:39.775 [2024-11-17 02:57:48.082300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.775 [2024-11-17 02:57:48.082336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.775 qpair failed and we were unable to recover it.
00:37:39.775 [2024-11-17 02:57:48.082565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.775 [2024-11-17 02:57:48.082629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.775 qpair failed and we were unable to recover it.
00:37:39.775 [2024-11-17 02:57:48.082773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.775 [2024-11-17 02:57:48.082828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.775 qpair failed and we were unable to recover it.
00:37:39.775 [2024-11-17 02:57:48.082980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.775 [2024-11-17 02:57:48.083026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.775 qpair failed and we were unable to recover it.
00:37:39.775 [2024-11-17 02:57:48.083195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.775 [2024-11-17 02:57:48.083244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.775 qpair failed and we were unable to recover it.
00:37:39.775 [2024-11-17 02:57:48.083417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.775 [2024-11-17 02:57:48.083484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.775 qpair failed and we were unable to recover it.
00:37:39.775 [2024-11-17 02:57:48.083686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.775 [2024-11-17 02:57:48.083742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.775 qpair failed and we were unable to recover it.
00:37:39.775 [2024-11-17 02:57:48.083934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.775 [2024-11-17 02:57:48.083992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.775 qpair failed and we were unable to recover it.
00:37:39.775 [2024-11-17 02:57:48.084144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.775 [2024-11-17 02:57:48.084179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.775 qpair failed and we were unable to recover it.
00:37:39.775 [2024-11-17 02:57:48.084306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.775 [2024-11-17 02:57:48.084355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.775 qpair failed and we were unable to recover it.
00:37:39.775 [2024-11-17 02:57:48.084515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.775 [2024-11-17 02:57:48.084571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.775 qpair failed and we were unable to recover it.
00:37:39.775 [2024-11-17 02:57:48.084768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.775 [2024-11-17 02:57:48.084808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.775 qpair failed and we were unable to recover it.
00:37:39.775 [2024-11-17 02:57:48.084952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.775 [2024-11-17 02:57:48.084995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.775 qpair failed and we were unable to recover it.
00:37:39.775 [2024-11-17 02:57:48.085168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.775 [2024-11-17 02:57:48.085203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.775 qpair failed and we were unable to recover it.
00:37:39.775 [2024-11-17 02:57:48.085334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.775 [2024-11-17 02:57:48.085375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.775 qpair failed and we were unable to recover it.
00:37:39.776 [2024-11-17 02:57:48.085531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.776 [2024-11-17 02:57:48.085567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.776 qpair failed and we were unable to recover it.
00:37:39.776 [2024-11-17 02:57:48.085676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.776 [2024-11-17 02:57:48.085713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.776 qpair failed and we were unable to recover it.
00:37:39.776 [2024-11-17 02:57:48.085869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.776 [2024-11-17 02:57:48.085905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.776 qpair failed and we were unable to recover it.
00:37:39.776 [2024-11-17 02:57:48.086032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.776 [2024-11-17 02:57:48.086074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.776 qpair failed and we were unable to recover it.
00:37:39.776 [2024-11-17 02:57:48.086260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.776 [2024-11-17 02:57:48.086295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.776 qpair failed and we were unable to recover it.
00:37:39.776 [2024-11-17 02:57:48.086465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.776 [2024-11-17 02:57:48.086534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.776 qpair failed and we were unable to recover it.
00:37:39.776 [2024-11-17 02:57:48.086697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.776 [2024-11-17 02:57:48.086761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.776 qpair failed and we were unable to recover it.
00:37:39.776 [2024-11-17 02:57:48.086919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.776 [2024-11-17 02:57:48.086962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.776 qpair failed and we were unable to recover it.
00:37:39.776 [2024-11-17 02:57:48.087108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.776 [2024-11-17 02:57:48.087173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.776 qpair failed and we were unable to recover it.
00:37:39.776 [2024-11-17 02:57:48.087295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.776 [2024-11-17 02:57:48.087331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.776 qpair failed and we were unable to recover it.
00:37:39.776 [2024-11-17 02:57:48.087461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.776 [2024-11-17 02:57:48.087508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.776 qpair failed and we were unable to recover it.
00:37:39.776 [2024-11-17 02:57:48.087661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.776 [2024-11-17 02:57:48.087696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.776 qpair failed and we were unable to recover it.
00:37:39.776 [2024-11-17 02:57:48.087880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.776 [2024-11-17 02:57:48.087925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.776 qpair failed and we were unable to recover it.
00:37:39.776 [2024-11-17 02:57:48.088076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.776 [2024-11-17 02:57:48.088129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.776 qpair failed and we were unable to recover it.
00:37:39.776 [2024-11-17 02:57:48.088261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.776 [2024-11-17 02:57:48.088294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.776 qpair failed and we were unable to recover it.
00:37:39.776 [2024-11-17 02:57:48.088418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.776 [2024-11-17 02:57:48.088453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.776 qpair failed and we were unable to recover it.
00:37:39.776 [2024-11-17 02:57:48.088636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.776 [2024-11-17 02:57:48.088699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.776 qpair failed and we were unable to recover it.
00:37:39.776 [2024-11-17 02:57:48.088828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.776 [2024-11-17 02:57:48.088876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.776 qpair failed and we were unable to recover it.
00:37:39.776 [2024-11-17 02:57:48.089037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.776 [2024-11-17 02:57:48.089070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.776 qpair failed and we were unable to recover it.
00:37:39.776 [2024-11-17 02:57:48.089206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.776 [2024-11-17 02:57:48.089255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.776 qpair failed and we were unable to recover it.
00:37:39.776 [2024-11-17 02:57:48.089437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.776 [2024-11-17 02:57:48.089509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.776 qpair failed and we were unable to recover it.
00:37:39.776 [2024-11-17 02:57:48.089665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.776 [2024-11-17 02:57:48.089719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.776 qpair failed and we were unable to recover it.
00:37:39.776 [2024-11-17 02:57:48.089888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.776 [2024-11-17 02:57:48.089930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.776 qpair failed and we were unable to recover it.
00:37:39.776 [2024-11-17 02:57:48.090101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.776 [2024-11-17 02:57:48.090156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.776 qpair failed and we were unable to recover it.
00:37:39.776 [2024-11-17 02:57:48.090287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.776 [2024-11-17 02:57:48.090321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.776 qpair failed and we were unable to recover it.
00:37:39.776 [2024-11-17 02:57:48.090489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.776 [2024-11-17 02:57:48.090526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.776 qpair failed and we were unable to recover it.
00:37:39.776 [2024-11-17 02:57:48.090711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.776 [2024-11-17 02:57:48.090748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.776 qpair failed and we were unable to recover it.
00:37:39.776 [2024-11-17 02:57:48.090880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.776 [2024-11-17 02:57:48.090932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.776 qpair failed and we were unable to recover it.
00:37:39.776 [2024-11-17 02:57:48.091092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.776 [2024-11-17 02:57:48.091157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.776 qpair failed and we were unable to recover it.
00:37:39.776 [2024-11-17 02:57:48.091266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.776 [2024-11-17 02:57:48.091307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.776 qpair failed and we were unable to recover it.
00:37:39.776 [2024-11-17 02:57:48.091452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.776 [2024-11-17 02:57:48.091485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.777 qpair failed and we were unable to recover it.
00:37:39.777 [2024-11-17 02:57:48.091610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.777 [2024-11-17 02:57:48.091643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.777 qpair failed and we were unable to recover it.
00:37:39.777 [2024-11-17 02:57:48.091785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.777 [2024-11-17 02:57:48.091824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.777 qpair failed and we were unable to recover it.
00:37:39.777 [2024-11-17 02:57:48.091979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.777 [2024-11-17 02:57:48.092016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.777 qpair failed and we were unable to recover it.
00:37:39.777 [2024-11-17 02:57:48.092145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.777 [2024-11-17 02:57:48.092194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.777 qpair failed and we were unable to recover it.
00:37:39.777 [2024-11-17 02:57:48.092363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.777 [2024-11-17 02:57:48.092398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.777 qpair failed and we were unable to recover it.
00:37:39.777 [2024-11-17 02:57:48.092564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.777 [2024-11-17 02:57:48.092616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.777 qpair failed and we were unable to recover it.
00:37:39.777 [2024-11-17 02:57:48.092797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.777 [2024-11-17 02:57:48.092834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.777 qpair failed and we were unable to recover it.
00:37:39.777 [2024-11-17 02:57:48.092962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.777 [2024-11-17 02:57:48.092998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.777 qpair failed and we were unable to recover it.
00:37:39.777 [2024-11-17 02:57:48.093154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.777 [2024-11-17 02:57:48.093188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.777 qpair failed and we were unable to recover it.
00:37:39.777 [2024-11-17 02:57:48.093321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.777 [2024-11-17 02:57:48.093354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.777 qpair failed and we were unable to recover it.
00:37:39.777 [2024-11-17 02:57:48.093579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.777 [2024-11-17 02:57:48.093617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.777 qpair failed and we were unable to recover it.
00:37:39.777 [2024-11-17 02:57:48.093847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.777 [2024-11-17 02:57:48.093886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.777 qpair failed and we were unable to recover it.
00:37:39.777 [2024-11-17 02:57:48.094078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.777 [2024-11-17 02:57:48.094149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.777 qpair failed and we were unable to recover it.
00:37:39.777 [2024-11-17 02:57:48.094287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.777 [2024-11-17 02:57:48.094321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.777 qpair failed and we were unable to recover it.
00:37:39.777 [2024-11-17 02:57:48.094482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.777 [2024-11-17 02:57:48.094522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.777 qpair failed and we were unable to recover it. 00:37:39.777 [2024-11-17 02:57:48.094696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.777 [2024-11-17 02:57:48.094744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.777 qpair failed and we were unable to recover it. 00:37:39.777 [2024-11-17 02:57:48.094909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.777 [2024-11-17 02:57:48.094948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.777 qpair failed and we were unable to recover it. 00:37:39.777 [2024-11-17 02:57:48.095128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.777 [2024-11-17 02:57:48.095167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.777 qpair failed and we were unable to recover it. 00:37:39.777 [2024-11-17 02:57:48.095299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.777 [2024-11-17 02:57:48.095332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.777 qpair failed and we were unable to recover it. 
00:37:39.777 [2024-11-17 02:57:48.095500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.777 [2024-11-17 02:57:48.095538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.777 qpair failed and we were unable to recover it. 00:37:39.777 [2024-11-17 02:57:48.095739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.777 [2024-11-17 02:57:48.095781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.777 qpair failed and we were unable to recover it. 00:37:39.777 [2024-11-17 02:57:48.095929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.777 [2024-11-17 02:57:48.095967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.777 qpair failed and we were unable to recover it. 00:37:39.777 [2024-11-17 02:57:48.096125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.777 [2024-11-17 02:57:48.096184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.777 qpair failed and we were unable to recover it. 00:37:39.777 [2024-11-17 02:57:48.096326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.777 [2024-11-17 02:57:48.096359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.777 qpair failed and we were unable to recover it. 
00:37:39.777 [2024-11-17 02:57:48.096501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.777 [2024-11-17 02:57:48.096544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.777 qpair failed and we were unable to recover it. 00:37:39.777 [2024-11-17 02:57:48.096685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.777 [2024-11-17 02:57:48.096722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.777 qpair failed and we were unable to recover it. 00:37:39.777 [2024-11-17 02:57:48.096941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.777 [2024-11-17 02:57:48.096978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.777 qpair failed and we were unable to recover it. 00:37:39.777 [2024-11-17 02:57:48.097123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.777 [2024-11-17 02:57:48.097185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.777 qpair failed and we were unable to recover it. 00:37:39.777 [2024-11-17 02:57:48.097316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.777 [2024-11-17 02:57:48.097365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.777 qpair failed and we were unable to recover it. 
00:37:39.777 [2024-11-17 02:57:48.097504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.777 [2024-11-17 02:57:48.097542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.777 qpair failed and we were unable to recover it. 00:37:39.777 [2024-11-17 02:57:48.097695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.777 [2024-11-17 02:57:48.097732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.777 qpair failed and we were unable to recover it. 00:37:39.777 [2024-11-17 02:57:48.097879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.777 [2024-11-17 02:57:48.097925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.777 qpair failed and we were unable to recover it. 00:37:39.777 [2024-11-17 02:57:48.098094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.777 [2024-11-17 02:57:48.098149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.777 qpair failed and we were unable to recover it. 00:37:39.777 [2024-11-17 02:57:48.098310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.777 [2024-11-17 02:57:48.098358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.777 qpair failed and we were unable to recover it. 
00:37:39.777 [2024-11-17 02:57:48.098512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.777 [2024-11-17 02:57:48.098548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.778 qpair failed and we were unable to recover it. 00:37:39.778 [2024-11-17 02:57:48.098663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.778 [2024-11-17 02:57:48.098695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.778 qpair failed and we were unable to recover it. 00:37:39.778 [2024-11-17 02:57:48.098840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.778 [2024-11-17 02:57:48.098888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.778 qpair failed and we were unable to recover it. 00:37:39.778 [2024-11-17 02:57:48.099106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.778 [2024-11-17 02:57:48.099163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.778 qpair failed and we were unable to recover it. 00:37:39.778 [2024-11-17 02:57:48.099306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.778 [2024-11-17 02:57:48.099341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.778 qpair failed and we were unable to recover it. 
00:37:39.778 [2024-11-17 02:57:48.099505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.778 [2024-11-17 02:57:48.099543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.778 qpair failed and we were unable to recover it. 00:37:39.778 [2024-11-17 02:57:48.099687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.778 [2024-11-17 02:57:48.099743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.778 qpair failed and we were unable to recover it. 00:37:39.778 [2024-11-17 02:57:48.099979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.778 [2024-11-17 02:57:48.100021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.778 qpair failed and we were unable to recover it. 00:37:39.778 [2024-11-17 02:57:48.100153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.778 [2024-11-17 02:57:48.100201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.778 qpair failed and we were unable to recover it. 00:37:39.778 [2024-11-17 02:57:48.100315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.778 [2024-11-17 02:57:48.100349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.778 qpair failed and we were unable to recover it. 
00:37:39.778 [2024-11-17 02:57:48.100494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.778 [2024-11-17 02:57:48.100528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.778 qpair failed and we were unable to recover it. 00:37:39.778 [2024-11-17 02:57:48.100692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.778 [2024-11-17 02:57:48.100744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.778 qpair failed and we were unable to recover it. 00:37:39.778 [2024-11-17 02:57:48.100892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.778 [2024-11-17 02:57:48.100929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.778 qpair failed and we were unable to recover it. 00:37:39.778 [2024-11-17 02:57:48.101074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.778 [2024-11-17 02:57:48.101121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.778 qpair failed and we were unable to recover it. 00:37:39.778 [2024-11-17 02:57:48.101283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.778 [2024-11-17 02:57:48.101331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.778 qpair failed and we were unable to recover it. 
00:37:39.778 [2024-11-17 02:57:48.101487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.778 [2024-11-17 02:57:48.101554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.778 qpair failed and we were unable to recover it. 00:37:39.778 [2024-11-17 02:57:48.101703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.778 [2024-11-17 02:57:48.101751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.778 qpair failed and we were unable to recover it. 00:37:39.778 [2024-11-17 02:57:48.101957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.778 [2024-11-17 02:57:48.101994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.778 qpair failed and we were unable to recover it. 00:37:39.778 [2024-11-17 02:57:48.102149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.778 [2024-11-17 02:57:48.102206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.778 qpair failed and we were unable to recover it. 00:37:39.778 [2024-11-17 02:57:48.102345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.778 [2024-11-17 02:57:48.102405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.778 qpair failed and we were unable to recover it. 
00:37:39.778 [2024-11-17 02:57:48.102632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.778 [2024-11-17 02:57:48.102670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.778 qpair failed and we were unable to recover it. 00:37:39.778 [2024-11-17 02:57:48.102810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.778 [2024-11-17 02:57:48.102869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.778 qpair failed and we were unable to recover it. 00:37:39.778 [2024-11-17 02:57:48.103047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.778 [2024-11-17 02:57:48.103090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.778 qpair failed and we were unable to recover it. 00:37:39.778 [2024-11-17 02:57:48.103223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.778 [2024-11-17 02:57:48.103261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.778 qpair failed and we were unable to recover it. 00:37:39.778 [2024-11-17 02:57:48.103437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.778 [2024-11-17 02:57:48.103481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.778 qpair failed and we were unable to recover it. 
00:37:39.778 [2024-11-17 02:57:48.103637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.778 [2024-11-17 02:57:48.103675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.778 qpair failed and we were unable to recover it. 00:37:39.778 [2024-11-17 02:57:48.103840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.778 [2024-11-17 02:57:48.103877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.778 qpair failed and we were unable to recover it. 00:37:39.778 [2024-11-17 02:57:48.104081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.778 [2024-11-17 02:57:48.104166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.778 qpair failed and we were unable to recover it. 00:37:39.778 [2024-11-17 02:57:48.104294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.778 [2024-11-17 02:57:48.104331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.778 qpair failed and we were unable to recover it. 00:37:39.778 [2024-11-17 02:57:48.104463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.778 [2024-11-17 02:57:48.104496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.778 qpair failed and we were unable to recover it. 
00:37:39.778 [2024-11-17 02:57:48.104651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.778 [2024-11-17 02:57:48.104703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.778 qpair failed and we were unable to recover it. 00:37:39.778 [2024-11-17 02:57:48.104909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.778 [2024-11-17 02:57:48.104964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.778 qpair failed and we were unable to recover it. 00:37:39.778 [2024-11-17 02:57:48.105135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.778 [2024-11-17 02:57:48.105172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.778 qpair failed and we were unable to recover it. 00:37:39.778 [2024-11-17 02:57:48.105340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.778 [2024-11-17 02:57:48.105375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.778 qpair failed and we were unable to recover it. 00:37:39.778 [2024-11-17 02:57:48.105534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.779 [2024-11-17 02:57:48.105569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.779 qpair failed and we were unable to recover it. 
00:37:39.779 [2024-11-17 02:57:48.105719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.779 [2024-11-17 02:57:48.105757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.779 qpair failed and we were unable to recover it. 00:37:39.779 [2024-11-17 02:57:48.105937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.779 [2024-11-17 02:57:48.105975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.779 qpair failed and we were unable to recover it. 00:37:39.779 [2024-11-17 02:57:48.106129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.779 [2024-11-17 02:57:48.106165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.779 qpair failed and we were unable to recover it. 00:37:39.779 [2024-11-17 02:57:48.106306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.779 [2024-11-17 02:57:48.106340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.779 qpair failed and we were unable to recover it. 00:37:39.779 [2024-11-17 02:57:48.106527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.779 [2024-11-17 02:57:48.106569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.779 qpair failed and we were unable to recover it. 
00:37:39.779 [2024-11-17 02:57:48.106735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.779 [2024-11-17 02:57:48.106771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.779 qpair failed and we were unable to recover it. 00:37:39.779 [2024-11-17 02:57:48.106996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.779 [2024-11-17 02:57:48.107032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.779 qpair failed and we were unable to recover it. 00:37:39.779 [2024-11-17 02:57:48.107177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.779 [2024-11-17 02:57:48.107211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.779 qpair failed and we were unable to recover it. 00:37:39.779 [2024-11-17 02:57:48.107344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.779 [2024-11-17 02:57:48.107378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.779 qpair failed and we were unable to recover it. 00:37:39.779 [2024-11-17 02:57:48.107581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.779 [2024-11-17 02:57:48.107617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.779 qpair failed and we were unable to recover it. 
00:37:39.779 [2024-11-17 02:57:48.107757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.779 [2024-11-17 02:57:48.107813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.779 qpair failed and we were unable to recover it. 00:37:39.779 [2024-11-17 02:57:48.107968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.779 [2024-11-17 02:57:48.108009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.779 qpair failed and we were unable to recover it. 00:37:39.779 [2024-11-17 02:57:48.108189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.779 [2024-11-17 02:57:48.108225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.779 qpair failed and we were unable to recover it. 00:37:39.779 [2024-11-17 02:57:48.108393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.779 [2024-11-17 02:57:48.108428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.779 qpair failed and we were unable to recover it. 00:37:39.779 [2024-11-17 02:57:48.108576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.779 [2024-11-17 02:57:48.108612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.779 qpair failed and we were unable to recover it. 
00:37:39.779 [2024-11-17 02:57:48.108754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.779 [2024-11-17 02:57:48.108792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.779 qpair failed and we were unable to recover it. 00:37:39.779 [2024-11-17 02:57:48.108946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.779 [2024-11-17 02:57:48.108996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.779 qpair failed and we were unable to recover it. 00:37:39.779 [2024-11-17 02:57:48.109219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.779 [2024-11-17 02:57:48.109265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.779 qpair failed and we were unable to recover it. 00:37:39.779 [2024-11-17 02:57:48.109373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.779 [2024-11-17 02:57:48.109407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.779 qpair failed and we were unable to recover it. 00:37:39.779 [2024-11-17 02:57:48.109547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.779 [2024-11-17 02:57:48.109588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.779 qpair failed and we were unable to recover it. 
00:37:39.779 [2024-11-17 02:57:48.109755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.779 [2024-11-17 02:57:48.109788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.779 qpair failed and we were unable to recover it. 00:37:39.779 [2024-11-17 02:57:48.109910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.779 [2024-11-17 02:57:48.109945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.779 qpair failed and we were unable to recover it. 00:37:39.779 [2024-11-17 02:57:48.110074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.779 [2024-11-17 02:57:48.110128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.779 qpair failed and we were unable to recover it. 00:37:39.779 [2024-11-17 02:57:48.110259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.779 [2024-11-17 02:57:48.110293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.779 qpair failed and we were unable to recover it. 00:37:39.779 [2024-11-17 02:57:48.110453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.779 [2024-11-17 02:57:48.110489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.779 qpair failed and we were unable to recover it. 
00:37:39.779 [2024-11-17 02:57:48.110653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.779 [2024-11-17 02:57:48.110693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.779 qpair failed and we were unable to recover it. 00:37:39.779 [2024-11-17 02:57:48.110796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.779 [2024-11-17 02:57:48.110830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.779 qpair failed and we were unable to recover it. 00:37:39.779 [2024-11-17 02:57:48.111013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.779 [2024-11-17 02:57:48.111090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.779 qpair failed and we were unable to recover it. 00:37:39.779 [2024-11-17 02:57:48.111268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.779 [2024-11-17 02:57:48.111308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.779 qpair failed and we were unable to recover it. 00:37:39.779 [2024-11-17 02:57:48.111427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.779 [2024-11-17 02:57:48.111467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.779 qpair failed and we were unable to recover it. 
00:37:39.779 [2024-11-17 02:57:48.111589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.779 [2024-11-17 02:57:48.111632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.779 qpair failed and we were unable to recover it. 00:37:39.779 [2024-11-17 02:57:48.111783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.779 [2024-11-17 02:57:48.111820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.779 qpair failed and we were unable to recover it. 00:37:39.779 [2024-11-17 02:57:48.111946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.779 [2024-11-17 02:57:48.111981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.779 qpair failed and we were unable to recover it. 00:37:39.779 [2024-11-17 02:57:48.112132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.779 [2024-11-17 02:57:48.112184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.779 qpair failed and we were unable to recover it. 00:37:39.779 [2024-11-17 02:57:48.112330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.780 [2024-11-17 02:57:48.112363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.780 qpair failed and we were unable to recover it. 
00:37:39.780 [2024-11-17 02:57:48.112542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.780 [2024-11-17 02:57:48.112579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.780 qpair failed and we were unable to recover it. 00:37:39.780 [2024-11-17 02:57:48.112723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.780 [2024-11-17 02:57:48.112761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.780 qpair failed and we were unable to recover it. 00:37:39.780 [2024-11-17 02:57:48.112872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.780 [2024-11-17 02:57:48.112908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.780 qpair failed and we were unable to recover it. 00:37:39.780 [2024-11-17 02:57:48.113021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.780 [2024-11-17 02:57:48.113057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.780 qpair failed and we were unable to recover it. 00:37:39.780 [2024-11-17 02:57:48.113259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.780 [2024-11-17 02:57:48.113307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.780 qpair failed and we were unable to recover it. 
00:37:39.780 [2024-11-17 02:57:48.113497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.780 [2024-11-17 02:57:48.113566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.780 qpair failed and we were unable to recover it.
00:37:39.780 [2024-11-17 02:57:48.113739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.780 [2024-11-17 02:57:48.113791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.780 qpair failed and we were unable to recover it.
00:37:39.780 [2024-11-17 02:57:48.113970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.780 [2024-11-17 02:57:48.114017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.780 qpair failed and we were unable to recover it.
00:37:39.780 [2024-11-17 02:57:48.114155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.780 [2024-11-17 02:57:48.114191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.780 qpair failed and we were unable to recover it.
00:37:39.780 [2024-11-17 02:57:48.114307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.780 [2024-11-17 02:57:48.114342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.780 qpair failed and we were unable to recover it.
00:37:39.780 [2024-11-17 02:57:48.114489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.780 [2024-11-17 02:57:48.114523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.780 qpair failed and we were unable to recover it.
00:37:39.780 [2024-11-17 02:57:48.114679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.780 [2024-11-17 02:57:48.114717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.780 qpair failed and we were unable to recover it.
00:37:39.780 [2024-11-17 02:57:48.114862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.780 [2024-11-17 02:57:48.114899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.780 qpair failed and we were unable to recover it.
00:37:39.780 [2024-11-17 02:57:48.115044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.780 [2024-11-17 02:57:48.115111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.780 qpair failed and we were unable to recover it.
00:37:39.780 [2024-11-17 02:57:48.115251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.780 [2024-11-17 02:57:48.115286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.780 qpair failed and we were unable to recover it.
00:37:39.780 [2024-11-17 02:57:48.115455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.780 [2024-11-17 02:57:48.115489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.780 qpair failed and we were unable to recover it.
00:37:39.780 [2024-11-17 02:57:48.115589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.780 [2024-11-17 02:57:48.115648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.780 qpair failed and we were unable to recover it.
00:37:39.780 [2024-11-17 02:57:48.115874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.780 [2024-11-17 02:57:48.115912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.780 qpair failed and we were unable to recover it.
00:37:39.780 [2024-11-17 02:57:48.116092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.780 [2024-11-17 02:57:48.116188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.780 qpair failed and we were unable to recover it.
00:37:39.780 [2024-11-17 02:57:48.116359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.780 [2024-11-17 02:57:48.116429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.780 qpair failed and we were unable to recover it.
00:37:39.780 [2024-11-17 02:57:48.116628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.780 [2024-11-17 02:57:48.116677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.780 qpair failed and we were unable to recover it.
00:37:39.780 [2024-11-17 02:57:48.116797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.780 [2024-11-17 02:57:48.116837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.780 qpair failed and we were unable to recover it.
00:37:39.780 [2024-11-17 02:57:48.117037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.780 [2024-11-17 02:57:48.117070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.780 qpair failed and we were unable to recover it.
00:37:39.780 [2024-11-17 02:57:48.117208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.780 [2024-11-17 02:57:48.117242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.780 qpair failed and we were unable to recover it.
00:37:39.780 [2024-11-17 02:57:48.117387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.780 [2024-11-17 02:57:48.117453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.780 qpair failed and we were unable to recover it.
00:37:39.780 [2024-11-17 02:57:48.117583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.780 [2024-11-17 02:57:48.117620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.780 qpair failed and we were unable to recover it.
00:37:39.780 [2024-11-17 02:57:48.117777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.780 [2024-11-17 02:57:48.117815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.780 qpair failed and we were unable to recover it.
00:37:39.780 [2024-11-17 02:57:48.118004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.780 [2024-11-17 02:57:48.118041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.780 qpair failed and we were unable to recover it.
00:37:39.780 [2024-11-17 02:57:48.118180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.780 [2024-11-17 02:57:48.118221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.780 qpair failed and we were unable to recover it.
00:37:39.780 [2024-11-17 02:57:48.118348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.781 [2024-11-17 02:57:48.118382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.781 qpair failed and we were unable to recover it.
00:37:39.781 [2024-11-17 02:57:48.118549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.781 [2024-11-17 02:57:48.118598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.781 qpair failed and we were unable to recover it.
00:37:39.781 [2024-11-17 02:57:48.118776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.781 [2024-11-17 02:57:48.118817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.781 qpair failed and we were unable to recover it.
00:37:39.781 [2024-11-17 02:57:48.118937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.781 [2024-11-17 02:57:48.118976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.781 qpair failed and we were unable to recover it.
00:37:39.781 [2024-11-17 02:57:48.119125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.781 [2024-11-17 02:57:48.119165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.781 qpair failed and we were unable to recover it.
00:37:39.781 [2024-11-17 02:57:48.119302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.781 [2024-11-17 02:57:48.119336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.781 qpair failed and we were unable to recover it.
00:37:39.781 [2024-11-17 02:57:48.119480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.781 [2024-11-17 02:57:48.119514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.781 qpair failed and we were unable to recover it.
00:37:39.781 [2024-11-17 02:57:48.119636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.781 [2024-11-17 02:57:48.119703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.781 qpair failed and we were unable to recover it.
00:37:39.781 [2024-11-17 02:57:48.119838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.781 [2024-11-17 02:57:48.119882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.781 qpair failed and we were unable to recover it.
00:37:39.781 [2024-11-17 02:57:48.120758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.781 [2024-11-17 02:57:48.120802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.781 qpair failed and we were unable to recover it.
00:37:39.781 [2024-11-17 02:57:48.120942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.781 [2024-11-17 02:57:48.120977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.781 qpair failed and we were unable to recover it.
00:37:39.781 [2024-11-17 02:57:48.121123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.781 [2024-11-17 02:57:48.121157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.781 qpair failed and we were unable to recover it.
00:37:39.781 [2024-11-17 02:57:48.121297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.781 [2024-11-17 02:57:48.121331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.781 qpair failed and we were unable to recover it.
00:37:39.781 [2024-11-17 02:57:48.121446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.781 [2024-11-17 02:57:48.121498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.781 qpair failed and we were unable to recover it.
00:37:39.781 [2024-11-17 02:57:48.121656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.781 [2024-11-17 02:57:48.121694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.781 qpair failed and we were unable to recover it.
00:37:39.781 [2024-11-17 02:57:48.121836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.781 [2024-11-17 02:57:48.121874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.781 qpair failed and we were unable to recover it.
00:37:39.781 [2024-11-17 02:57:48.122052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.781 [2024-11-17 02:57:48.122109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.781 qpair failed and we were unable to recover it.
00:37:39.781 [2024-11-17 02:57:48.122270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.781 [2024-11-17 02:57:48.122304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.781 qpair failed and we were unable to recover it.
00:37:39.781 [2024-11-17 02:57:48.122453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.781 [2024-11-17 02:57:48.122495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.781 qpair failed and we were unable to recover it.
00:37:39.781 [2024-11-17 02:57:48.122631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.781 [2024-11-17 02:57:48.122665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.781 qpair failed and we were unable to recover it.
00:37:39.781 [2024-11-17 02:57:48.122845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.781 [2024-11-17 02:57:48.122914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:39.781 qpair failed and we were unable to recover it.
00:37:39.781 [2024-11-17 02:57:48.123034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.781 [2024-11-17 02:57:48.123080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.781 qpair failed and we were unable to recover it.
00:37:39.781 [2024-11-17 02:57:48.123231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.781 [2024-11-17 02:57:48.123284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.781 qpair failed and we were unable to recover it.
00:37:39.781 [2024-11-17 02:57:48.123409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.781 [2024-11-17 02:57:48.123447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.781 qpair failed and we were unable to recover it.
00:37:39.781 [2024-11-17 02:57:48.123634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.781 [2024-11-17 02:57:48.123679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.781 qpair failed and we were unable to recover it.
00:37:39.781 [2024-11-17 02:57:48.123828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.781 [2024-11-17 02:57:48.123865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.781 qpair failed and we were unable to recover it.
00:37:39.781 [2024-11-17 02:57:48.124011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.781 [2024-11-17 02:57:48.124061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.781 qpair failed and we were unable to recover it.
00:37:39.781 [2024-11-17 02:57:48.124969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.781 [2024-11-17 02:57:48.125017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.781 qpair failed and we were unable to recover it.
00:37:39.781 [2024-11-17 02:57:48.125179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.781 [2024-11-17 02:57:48.125213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.781 qpair failed and we were unable to recover it.
00:37:39.781 [2024-11-17 02:57:48.125320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.781 [2024-11-17 02:57:48.125355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.781 qpair failed and we were unable to recover it.
00:37:39.781 [2024-11-17 02:57:48.125524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.781 [2024-11-17 02:57:48.125560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.781 qpair failed and we were unable to recover it.
00:37:39.781 [2024-11-17 02:57:48.125674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.781 [2024-11-17 02:57:48.125713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.781 qpair failed and we were unable to recover it.
00:37:39.781 [2024-11-17 02:57:48.125888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.781 [2024-11-17 02:57:48.125941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.781 qpair failed and we were unable to recover it.
00:37:39.781 [2024-11-17 02:57:48.126114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.781 [2024-11-17 02:57:48.126151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.781 qpair failed and we were unable to recover it.
00:37:39.781 [2024-11-17 02:57:48.126290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.781 [2024-11-17 02:57:48.126324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.781 qpair failed and we were unable to recover it.
00:37:39.781 [2024-11-17 02:57:48.126437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.781 [2024-11-17 02:57:48.126472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.781 qpair failed and we were unable to recover it.
00:37:39.782 [2024-11-17 02:57:48.126643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.782 [2024-11-17 02:57:48.126681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.782 qpair failed and we were unable to recover it.
00:37:39.782 [2024-11-17 02:57:48.126835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.782 [2024-11-17 02:57:48.126873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.782 qpair failed and we were unable to recover it.
00:37:39.782 [2024-11-17 02:57:48.126988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.782 [2024-11-17 02:57:48.127027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.782 qpair failed and we were unable to recover it.
00:37:39.782 [2024-11-17 02:57:48.127203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.782 [2024-11-17 02:57:48.127237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.782 qpair failed and we were unable to recover it.
00:37:39.782 [2024-11-17 02:57:48.127367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.782 [2024-11-17 02:57:48.127414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.782 qpair failed and we were unable to recover it.
00:37:39.782 [2024-11-17 02:57:48.127611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.782 [2024-11-17 02:57:48.127647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.782 qpair failed and we were unable to recover it.
00:37:39.782 [2024-11-17 02:57:48.127830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.782 [2024-11-17 02:57:48.127869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.782 qpair failed and we were unable to recover it.
00:37:39.782 [2024-11-17 02:57:48.127990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.782 [2024-11-17 02:57:48.128029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.782 qpair failed and we were unable to recover it.
00:37:39.782 [2024-11-17 02:57:48.128198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.782 [2024-11-17 02:57:48.128239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.782 qpair failed and we were unable to recover it.
00:37:39.782 [2024-11-17 02:57:48.128349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.782 [2024-11-17 02:57:48.128382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:39.782 qpair failed and we were unable to recover it.
00:37:39.782 [2024-11-17 02:57:48.128499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.782 [2024-11-17 02:57:48.128537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.782 qpair failed and we were unable to recover it.
00:37:39.782 [2024-11-17 02:57:48.128671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.782 [2024-11-17 02:57:48.128708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.782 qpair failed and we were unable to recover it.
00:37:39.782 [2024-11-17 02:57:48.128893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.782 [2024-11-17 02:57:48.128930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.782 qpair failed and we were unable to recover it.
00:37:39.782 [2024-11-17 02:57:48.129046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.782 [2024-11-17 02:57:48.129093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.782 qpair failed and we were unable to recover it.
00:37:39.782 [2024-11-17 02:57:48.129247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.782 [2024-11-17 02:57:48.129284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.782 qpair failed and we were unable to recover it.
00:37:39.782 [2024-11-17 02:57:48.129445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.782 [2024-11-17 02:57:48.129479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.782 qpair failed and we were unable to recover it.
00:37:39.782 [2024-11-17 02:57:48.129645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.782 [2024-11-17 02:57:48.129687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.782 qpair failed and we were unable to recover it.
00:37:39.782 [2024-11-17 02:57:48.129837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.782 [2024-11-17 02:57:48.129875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.782 qpair failed and we were unable to recover it.
00:37:39.782 [2024-11-17 02:57:48.130105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.782 [2024-11-17 02:57:48.130156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.782 qpair failed and we were unable to recover it.
00:37:39.782 [2024-11-17 02:57:48.130295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.782 [2024-11-17 02:57:48.130329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.782 qpair failed and we were unable to recover it.
00:37:39.782 [2024-11-17 02:57:48.130482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.782 [2024-11-17 02:57:48.130518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.782 qpair failed and we were unable to recover it.
00:37:39.782 [2024-11-17 02:57:48.130670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.782 [2024-11-17 02:57:48.130725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.782 qpair failed and we were unable to recover it.
00:37:39.782 [2024-11-17 02:57:48.130918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.782 [2024-11-17 02:57:48.130955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.782 qpair failed and we were unable to recover it.
00:37:39.782 [2024-11-17 02:57:48.131112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.782 [2024-11-17 02:57:48.131163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.782 qpair failed and we were unable to recover it.
00:37:39.782 [2024-11-17 02:57:48.131297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.782 [2024-11-17 02:57:48.131331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.782 qpair failed and we were unable to recover it.
00:37:39.782 [2024-11-17 02:57:48.131504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.782 [2024-11-17 02:57:48.131537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.782 qpair failed and we were unable to recover it.
00:37:39.782 [2024-11-17 02:57:48.131725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.782 [2024-11-17 02:57:48.131764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.782 qpair failed and we were unable to recover it.
00:37:39.782 [2024-11-17 02:57:48.131895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.782 [2024-11-17 02:57:48.131934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.782 qpair failed and we were unable to recover it.
00:37:39.782 [2024-11-17 02:57:48.132094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.782 [2024-11-17 02:57:48.132162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.782 qpair failed and we were unable to recover it.
00:37:39.782 [2024-11-17 02:57:48.132357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.782 [2024-11-17 02:57:48.132413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.782 qpair failed and we were unable to recover it.
00:37:39.782 [2024-11-17 02:57:48.132550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.782 [2024-11-17 02:57:48.132585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.782 qpair failed and we were unable to recover it.
00:37:39.782 [2024-11-17 02:57:48.132745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.782 [2024-11-17 02:57:48.132780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:39.782 qpair failed and we were unable to recover it.
00:37:39.782 [2024-11-17 02:57:48.132914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.782 [2024-11-17 02:57:48.132948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.782 qpair failed and we were unable to recover it.
00:37:39.782 [2024-11-17 02:57:48.133112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.782 [2024-11-17 02:57:48.133148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.782 qpair failed and we were unable to recover it.
00:37:39.783 [2024-11-17 02:57:48.133259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:39.783 [2024-11-17 02:57:48.133292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:39.783 qpair failed and we were unable to recover it.
00:37:39.783 [2024-11-17 02:57:48.133409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.783 [2024-11-17 02:57:48.133442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.783 qpair failed and we were unable to recover it. 00:37:39.783 [2024-11-17 02:57:48.133597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.783 [2024-11-17 02:57:48.133630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.783 qpair failed and we were unable to recover it. 00:37:39.783 [2024-11-17 02:57:48.133765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.783 [2024-11-17 02:57:48.133798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.783 qpair failed and we were unable to recover it. 00:37:39.783 [2024-11-17 02:57:48.133945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.783 [2024-11-17 02:57:48.133981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.783 qpair failed and we were unable to recover it. 00:37:39.783 [2024-11-17 02:57:48.134135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.783 [2024-11-17 02:57:48.134171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.783 qpair failed and we were unable to recover it. 
00:37:39.783 [2024-11-17 02:57:48.134306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.783 [2024-11-17 02:57:48.134354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.783 qpair failed and we were unable to recover it. 00:37:39.783 [2024-11-17 02:57:48.134504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.783 [2024-11-17 02:57:48.134543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.783 qpair failed and we were unable to recover it. 00:37:39.783 [2024-11-17 02:57:48.134677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.783 [2024-11-17 02:57:48.134714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.783 qpair failed and we were unable to recover it. 00:37:39.783 [2024-11-17 02:57:48.134840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.783 [2024-11-17 02:57:48.134875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.783 qpair failed and we were unable to recover it. 00:37:39.783 [2024-11-17 02:57:48.135013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.783 [2024-11-17 02:57:48.135048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.783 qpair failed and we were unable to recover it. 
00:37:39.783 [2024-11-17 02:57:48.135200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.783 [2024-11-17 02:57:48.135234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.783 qpair failed and we were unable to recover it. 00:37:39.783 [2024-11-17 02:57:48.135373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.783 [2024-11-17 02:57:48.135407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.783 qpair failed and we were unable to recover it. 00:37:39.783 [2024-11-17 02:57:48.135513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.783 [2024-11-17 02:57:48.135546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.783 qpair failed and we were unable to recover it. 00:37:39.783 [2024-11-17 02:57:48.135650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.783 [2024-11-17 02:57:48.135688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.783 qpair failed and we were unable to recover it. 00:37:39.783 [2024-11-17 02:57:48.135789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.783 [2024-11-17 02:57:48.135824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.783 qpair failed and we were unable to recover it. 
00:37:39.783 [2024-11-17 02:57:48.135961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.783 [2024-11-17 02:57:48.135996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.783 qpair failed and we were unable to recover it. 00:37:39.783 [2024-11-17 02:57:48.136131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.783 [2024-11-17 02:57:48.136166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.783 qpair failed and we were unable to recover it. 00:37:39.783 [2024-11-17 02:57:48.136303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.783 [2024-11-17 02:57:48.136341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.783 qpair failed and we were unable to recover it. 00:37:39.783 [2024-11-17 02:57:48.136479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.783 [2024-11-17 02:57:48.136516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.783 qpair failed and we were unable to recover it. 00:37:39.783 [2024-11-17 02:57:48.136680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.783 [2024-11-17 02:57:48.136718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.370 qpair failed and we were unable to recover it. 
00:37:40.370 [2024-11-17 02:57:48.529752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.370 [2024-11-17 02:57:48.529805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.370 qpair failed and we were unable to recover it. 00:37:40.370 [2024-11-17 02:57:48.529962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.370 [2024-11-17 02:57:48.530001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.370 qpair failed and we were unable to recover it. 00:37:40.370 [2024-11-17 02:57:48.530165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.370 [2024-11-17 02:57:48.530199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.370 qpair failed and we were unable to recover it. 00:37:40.370 [2024-11-17 02:57:48.530339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.370 [2024-11-17 02:57:48.530391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.370 qpair failed and we were unable to recover it. 00:37:40.370 [2024-11-17 02:57:48.530578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.370 [2024-11-17 02:57:48.530615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.370 qpair failed and we were unable to recover it. 
00:37:40.370 [2024-11-17 02:57:48.530766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.370 [2024-11-17 02:57:48.530799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.370 qpair failed and we were unable to recover it. 00:37:40.370 [2024-11-17 02:57:48.530911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.370 [2024-11-17 02:57:48.530945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.370 qpair failed and we were unable to recover it. 00:37:40.370 [2024-11-17 02:57:48.531143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.370 [2024-11-17 02:57:48.531192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.370 qpair failed and we were unable to recover it. 00:37:40.370 [2024-11-17 02:57:48.531343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.370 [2024-11-17 02:57:48.531379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.370 qpair failed and we were unable to recover it. 00:37:40.370 [2024-11-17 02:57:48.531511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.370 [2024-11-17 02:57:48.531544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.370 qpair failed and we were unable to recover it. 
00:37:40.370 [2024-11-17 02:57:48.531686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.370 [2024-11-17 02:57:48.531718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.370 qpair failed and we were unable to recover it. 00:37:40.370 [2024-11-17 02:57:48.531914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.370 [2024-11-17 02:57:48.531962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.370 qpair failed and we were unable to recover it. 00:37:40.370 [2024-11-17 02:57:48.532135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.370 [2024-11-17 02:57:48.532173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.370 qpair failed and we were unable to recover it. 00:37:40.370 [2024-11-17 02:57:48.532294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.370 [2024-11-17 02:57:48.532332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.370 qpair failed and we were unable to recover it. 00:37:40.370 [2024-11-17 02:57:48.532510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.370 [2024-11-17 02:57:48.532543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.370 qpair failed and we were unable to recover it. 
00:37:40.370 [2024-11-17 02:57:48.532744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.370 [2024-11-17 02:57:48.532780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.370 qpair failed and we were unable to recover it. 00:37:40.370 [2024-11-17 02:57:48.532903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.370 [2024-11-17 02:57:48.532941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.370 qpair failed and we were unable to recover it. 00:37:40.370 [2024-11-17 02:57:48.533121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.370 [2024-11-17 02:57:48.533157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.370 qpair failed and we were unable to recover it. 00:37:40.370 [2024-11-17 02:57:48.533271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.370 [2024-11-17 02:57:48.533306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.370 qpair failed and we were unable to recover it. 00:37:40.370 [2024-11-17 02:57:48.533542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.370 [2024-11-17 02:57:48.533577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.370 qpair failed and we were unable to recover it. 
00:37:40.370 [2024-11-17 02:57:48.533746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.370 [2024-11-17 02:57:48.533782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.370 qpair failed and we were unable to recover it. 00:37:40.370 [2024-11-17 02:57:48.534005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.371 [2024-11-17 02:57:48.534044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.371 qpair failed and we were unable to recover it. 00:37:40.371 [2024-11-17 02:57:48.534210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.371 [2024-11-17 02:57:48.534250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.371 qpair failed and we were unable to recover it. 00:37:40.371 [2024-11-17 02:57:48.534383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.371 [2024-11-17 02:57:48.534419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.371 qpair failed and we were unable to recover it. 00:37:40.371 [2024-11-17 02:57:48.534621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.371 [2024-11-17 02:57:48.534675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.371 qpair failed and we were unable to recover it. 
00:37:40.371 [2024-11-17 02:57:48.534889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.371 [2024-11-17 02:57:48.534928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.371 qpair failed and we were unable to recover it. 00:37:40.371 [2024-11-17 02:57:48.535060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.371 [2024-11-17 02:57:48.535103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.371 qpair failed and we were unable to recover it. 00:37:40.371 [2024-11-17 02:57:48.535278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.371 [2024-11-17 02:57:48.535326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.371 qpair failed and we were unable to recover it. 00:37:40.371 [2024-11-17 02:57:48.535453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.371 [2024-11-17 02:57:48.535504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.371 qpair failed and we were unable to recover it. 00:37:40.371 [2024-11-17 02:57:48.535660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.371 [2024-11-17 02:57:48.535694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.371 qpair failed and we were unable to recover it. 
00:37:40.371 [2024-11-17 02:57:48.535799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.371 [2024-11-17 02:57:48.535849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.371 qpair failed and we were unable to recover it. 00:37:40.371 [2024-11-17 02:57:48.535998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.371 [2024-11-17 02:57:48.536039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.371 qpair failed and we were unable to recover it. 00:37:40.371 [2024-11-17 02:57:48.536212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.371 [2024-11-17 02:57:48.536248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.371 qpair failed and we were unable to recover it. 00:37:40.371 [2024-11-17 02:57:48.536405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.371 [2024-11-17 02:57:48.536451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.371 qpair failed and we were unable to recover it. 00:37:40.371 [2024-11-17 02:57:48.536641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.371 [2024-11-17 02:57:48.536690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.371 qpair failed and we were unable to recover it. 
00:37:40.371 [2024-11-17 02:57:48.536816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.371 [2024-11-17 02:57:48.536852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.371 qpair failed and we were unable to recover it. 00:37:40.371 [2024-11-17 02:57:48.537057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.371 [2024-11-17 02:57:48.537094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.371 qpair failed and we were unable to recover it. 00:37:40.371 [2024-11-17 02:57:48.537276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.371 [2024-11-17 02:57:48.537312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.371 qpair failed and we were unable to recover it. 00:37:40.371 [2024-11-17 02:57:48.537474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.371 [2024-11-17 02:57:48.537509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.371 qpair failed and we were unable to recover it. 00:37:40.371 [2024-11-17 02:57:48.537615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.371 [2024-11-17 02:57:48.537648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.371 qpair failed and we were unable to recover it. 
00:37:40.371 [2024-11-17 02:57:48.537846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.371 [2024-11-17 02:57:48.537881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.371 qpair failed and we were unable to recover it. 00:37:40.371 [2024-11-17 02:57:48.538077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.371 [2024-11-17 02:57:48.538121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.371 qpair failed and we were unable to recover it. 00:37:40.371 [2024-11-17 02:57:48.538232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.371 [2024-11-17 02:57:48.538266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.371 qpair failed and we were unable to recover it. 00:37:40.371 [2024-11-17 02:57:48.538439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.371 [2024-11-17 02:57:48.538476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.371 qpair failed and we were unable to recover it. 00:37:40.371 [2024-11-17 02:57:48.538821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.371 [2024-11-17 02:57:48.538856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.371 qpair failed and we were unable to recover it. 
00:37:40.371 [2024-11-17 02:57:48.539013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.371 [2024-11-17 02:57:48.539067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.371 qpair failed and we were unable to recover it. 00:37:40.371 [2024-11-17 02:57:48.539246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.371 [2024-11-17 02:57:48.539282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.371 qpair failed and we were unable to recover it. 00:37:40.371 [2024-11-17 02:57:48.539439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.371 [2024-11-17 02:57:48.539475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.371 qpair failed and we were unable to recover it. 00:37:40.371 [2024-11-17 02:57:48.539644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.371 [2024-11-17 02:57:48.539679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.371 qpair failed and we were unable to recover it. 00:37:40.371 [2024-11-17 02:57:48.539886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.371 [2024-11-17 02:57:48.539923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.371 qpair failed and we were unable to recover it. 
00:37:40.371 [2024-11-17 02:57:48.540033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.371 [2024-11-17 02:57:48.540066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.371 qpair failed and we were unable to recover it. 00:37:40.371 [2024-11-17 02:57:48.540216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.371 [2024-11-17 02:57:48.540257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.371 qpair failed and we were unable to recover it. 00:37:40.371 [2024-11-17 02:57:48.540411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.371 [2024-11-17 02:57:48.540450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.371 qpair failed and we were unable to recover it. 00:37:40.371 [2024-11-17 02:57:48.540581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.371 [2024-11-17 02:57:48.540615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.371 qpair failed and we were unable to recover it. 00:37:40.371 [2024-11-17 02:57:48.540774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.371 [2024-11-17 02:57:48.540827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.372 qpair failed and we were unable to recover it. 
00:37:40.372 [2024-11-17 02:57:48.540989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.372 [2024-11-17 02:57:48.541030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.372 qpair failed and we were unable to recover it. 00:37:40.372 [2024-11-17 02:57:48.541194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.372 [2024-11-17 02:57:48.541230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.372 qpair failed and we were unable to recover it. 00:37:40.372 [2024-11-17 02:57:48.541373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.372 [2024-11-17 02:57:48.541438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.372 qpair failed and we were unable to recover it. 00:37:40.372 [2024-11-17 02:57:48.541700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.372 [2024-11-17 02:57:48.541773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.372 qpair failed and we were unable to recover it. 00:37:40.372 [2024-11-17 02:57:48.541936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.372 [2024-11-17 02:57:48.541971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.372 qpair failed and we were unable to recover it. 
00:37:40.372 [2024-11-17 02:57:48.542111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.372 [2024-11-17 02:57:48.542156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.372 qpair failed and we were unable to recover it. 00:37:40.372 [2024-11-17 02:57:48.542299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.372 [2024-11-17 02:57:48.542334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.372 qpair failed and we were unable to recover it. 00:37:40.372 [2024-11-17 02:57:48.542509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.372 [2024-11-17 02:57:48.542544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.372 qpair failed and we were unable to recover it. 00:37:40.372 [2024-11-17 02:57:48.542651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.372 [2024-11-17 02:57:48.542683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.372 qpair failed and we were unable to recover it. 00:37:40.372 [2024-11-17 02:57:48.542804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.372 [2024-11-17 02:57:48.542840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.372 qpair failed and we were unable to recover it. 
00:37:40.372 [2024-11-17 02:57:48.542968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.372 [2024-11-17 02:57:48.543003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.372 qpair failed and we were unable to recover it. 00:37:40.372 [2024-11-17 02:57:48.543145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.372 [2024-11-17 02:57:48.543179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.372 qpair failed and we were unable to recover it. 00:37:40.372 [2024-11-17 02:57:48.543282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.372 [2024-11-17 02:57:48.543317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.372 qpair failed and we were unable to recover it. 00:37:40.372 [2024-11-17 02:57:48.543489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.372 [2024-11-17 02:57:48.543524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.372 qpair failed and we were unable to recover it. 00:37:40.372 [2024-11-17 02:57:48.543676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.372 [2024-11-17 02:57:48.543716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.372 qpair failed and we were unable to recover it. 
00:37:40.372 [2024-11-17 02:57:48.543841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.372 [2024-11-17 02:57:48.543880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.372 qpair failed and we were unable to recover it. 00:37:40.372 [2024-11-17 02:57:48.544021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.372 [2024-11-17 02:57:48.544056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.372 qpair failed and we were unable to recover it. 00:37:40.372 [2024-11-17 02:57:48.544175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.372 [2024-11-17 02:57:48.544213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.372 qpair failed and we were unable to recover it. 00:37:40.372 [2024-11-17 02:57:48.544375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.372 [2024-11-17 02:57:48.544441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.372 qpair failed and we were unable to recover it. 00:37:40.372 [2024-11-17 02:57:48.544581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.372 [2024-11-17 02:57:48.544617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.372 qpair failed and we were unable to recover it. 
00:37:40.376 [2024-11-17 02:57:48.564463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.376 [2024-11-17 02:57:48.564519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.376 qpair failed and we were unable to recover it. 00:37:40.376 [2024-11-17 02:57:48.564671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.376 [2024-11-17 02:57:48.564760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.376 qpair failed and we were unable to recover it. 00:37:40.376 [2024-11-17 02:57:48.564924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.376 [2024-11-17 02:57:48.564961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.376 qpair failed and we were unable to recover it. 00:37:40.376 [2024-11-17 02:57:48.565127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.376 [2024-11-17 02:57:48.565165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.376 qpair failed and we were unable to recover it. 00:37:40.376 [2024-11-17 02:57:48.565333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.376 [2024-11-17 02:57:48.565388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.376 qpair failed and we were unable to recover it. 
00:37:40.376 [2024-11-17 02:57:48.565548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.376 [2024-11-17 02:57:48.565586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.376 qpair failed and we were unable to recover it. 00:37:40.376 [2024-11-17 02:57:48.565707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.376 [2024-11-17 02:57:48.565760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.376 qpair failed and we were unable to recover it. 00:37:40.376 [2024-11-17 02:57:48.565912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.376 [2024-11-17 02:57:48.565950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.376 qpair failed and we were unable to recover it. 00:37:40.376 [2024-11-17 02:57:48.566120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.376 [2024-11-17 02:57:48.566156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.376 qpair failed and we were unable to recover it. 00:37:40.376 [2024-11-17 02:57:48.566285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.376 [2024-11-17 02:57:48.566319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.376 qpair failed and we were unable to recover it. 
00:37:40.376 [2024-11-17 02:57:48.566453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.376 [2024-11-17 02:57:48.566487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.376 qpair failed and we were unable to recover it. 00:37:40.376 [2024-11-17 02:57:48.566592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.376 [2024-11-17 02:57:48.566624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.376 qpair failed and we were unable to recover it. 00:37:40.376 [2024-11-17 02:57:48.566758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.376 [2024-11-17 02:57:48.566792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.376 qpair failed and we were unable to recover it. 00:37:40.376 [2024-11-17 02:57:48.566890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.376 [2024-11-17 02:57:48.566927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.376 qpair failed and we were unable to recover it. 00:37:40.376 [2024-11-17 02:57:48.567112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.376 [2024-11-17 02:57:48.567148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.376 qpair failed and we were unable to recover it. 
00:37:40.376 [2024-11-17 02:57:48.567325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.376 [2024-11-17 02:57:48.567363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.376 qpair failed and we were unable to recover it. 00:37:40.376 [2024-11-17 02:57:48.567569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.376 [2024-11-17 02:57:48.567628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.376 qpair failed and we were unable to recover it. 00:37:40.376 [2024-11-17 02:57:48.567766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.376 [2024-11-17 02:57:48.567800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.376 qpair failed and we were unable to recover it. 00:37:40.376 [2024-11-17 02:57:48.567963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.376 [2024-11-17 02:57:48.568002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.376 qpair failed and we were unable to recover it. 00:37:40.376 [2024-11-17 02:57:48.568167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.376 [2024-11-17 02:57:48.568203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.376 qpair failed and we were unable to recover it. 
00:37:40.376 [2024-11-17 02:57:48.568363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.376 [2024-11-17 02:57:48.568398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.376 qpair failed and we were unable to recover it. 00:37:40.376 [2024-11-17 02:57:48.568598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.377 [2024-11-17 02:57:48.568636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.377 qpair failed and we were unable to recover it. 00:37:40.377 [2024-11-17 02:57:48.568756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.377 [2024-11-17 02:57:48.568793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.377 qpair failed and we were unable to recover it. 00:37:40.377 [2024-11-17 02:57:48.568946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.377 [2024-11-17 02:57:48.568981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.377 qpair failed and we were unable to recover it. 00:37:40.377 [2024-11-17 02:57:48.569122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.377 [2024-11-17 02:57:48.569156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.377 qpair failed and we were unable to recover it. 
00:37:40.377 [2024-11-17 02:57:48.569319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.377 [2024-11-17 02:57:48.569369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.377 qpair failed and we were unable to recover it. 00:37:40.377 [2024-11-17 02:57:48.569531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.377 [2024-11-17 02:57:48.569570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.377 qpair failed and we were unable to recover it. 00:37:40.377 [2024-11-17 02:57:48.569720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.377 [2024-11-17 02:57:48.569757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.377 qpair failed and we were unable to recover it. 00:37:40.377 [2024-11-17 02:57:48.569867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.377 [2024-11-17 02:57:48.569903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.377 qpair failed and we were unable to recover it. 00:37:40.377 [2024-11-17 02:57:48.570045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.377 [2024-11-17 02:57:48.570081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.377 qpair failed and we were unable to recover it. 
00:37:40.377 [2024-11-17 02:57:48.570225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.377 [2024-11-17 02:57:48.570276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.377 qpair failed and we were unable to recover it. 00:37:40.377 [2024-11-17 02:57:48.570465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.377 [2024-11-17 02:57:48.570504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.377 qpair failed and we were unable to recover it. 00:37:40.377 [2024-11-17 02:57:48.570661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.377 [2024-11-17 02:57:48.570696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.377 qpair failed and we were unable to recover it. 00:37:40.377 [2024-11-17 02:57:48.570798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.377 [2024-11-17 02:57:48.570831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.377 qpair failed and we were unable to recover it. 00:37:40.377 [2024-11-17 02:57:48.571016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.377 [2024-11-17 02:57:48.571053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.377 qpair failed and we were unable to recover it. 
00:37:40.377 [2024-11-17 02:57:48.571224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.377 [2024-11-17 02:57:48.571258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.377 qpair failed and we were unable to recover it. 00:37:40.377 [2024-11-17 02:57:48.571392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.377 [2024-11-17 02:57:48.571426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.377 qpair failed and we were unable to recover it. 00:37:40.377 [2024-11-17 02:57:48.571615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.377 [2024-11-17 02:57:48.571648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.377 qpair failed and we were unable to recover it. 00:37:40.377 [2024-11-17 02:57:48.571782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.377 [2024-11-17 02:57:48.571816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.377 qpair failed and we were unable to recover it. 00:37:40.377 [2024-11-17 02:57:48.571979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.377 [2024-11-17 02:57:48.572015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.377 qpair failed and we were unable to recover it. 
00:37:40.377 [2024-11-17 02:57:48.572151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.377 [2024-11-17 02:57:48.572202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.377 qpair failed and we were unable to recover it. 00:37:40.377 [2024-11-17 02:57:48.572343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.377 [2024-11-17 02:57:48.572382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.377 qpair failed and we were unable to recover it. 00:37:40.377 [2024-11-17 02:57:48.572522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.377 [2024-11-17 02:57:48.572558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.377 qpair failed and we were unable to recover it. 00:37:40.377 [2024-11-17 02:57:48.572731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.377 [2024-11-17 02:57:48.572787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.377 qpair failed and we were unable to recover it. 00:37:40.377 [2024-11-17 02:57:48.572909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.377 [2024-11-17 02:57:48.572945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.377 qpair failed and we were unable to recover it. 
00:37:40.377 [2024-11-17 02:57:48.573113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.377 [2024-11-17 02:57:48.573168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.377 qpair failed and we were unable to recover it. 00:37:40.377 [2024-11-17 02:57:48.573387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.377 [2024-11-17 02:57:48.573460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.377 qpair failed and we were unable to recover it. 00:37:40.377 [2024-11-17 02:57:48.573648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.377 [2024-11-17 02:57:48.573684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.377 qpair failed and we were unable to recover it. 00:37:40.377 [2024-11-17 02:57:48.573795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.377 [2024-11-17 02:57:48.573845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.377 qpair failed and we were unable to recover it. 00:37:40.377 [2024-11-17 02:57:48.573996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.377 [2024-11-17 02:57:48.574033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.377 qpair failed and we were unable to recover it. 
00:37:40.377 [2024-11-17 02:57:48.574204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.377 [2024-11-17 02:57:48.574239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.377 qpair failed and we were unable to recover it. 00:37:40.377 [2024-11-17 02:57:48.574348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.377 [2024-11-17 02:57:48.574382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.377 qpair failed and we were unable to recover it. 00:37:40.377 [2024-11-17 02:57:48.574644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.377 [2024-11-17 02:57:48.574704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.377 qpair failed and we were unable to recover it. 00:37:40.377 [2024-11-17 02:57:48.574831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.377 [2024-11-17 02:57:48.574866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.377 qpair failed and we were unable to recover it. 00:37:40.378 [2024-11-17 02:57:48.575008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.378 [2024-11-17 02:57:48.575042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.378 qpair failed and we were unable to recover it. 
00:37:40.378 [2024-11-17 02:57:48.575237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.378 [2024-11-17 02:57:48.575287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.378 qpair failed and we were unable to recover it. 00:37:40.378 [2024-11-17 02:57:48.575436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.378 [2024-11-17 02:57:48.575475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.378 qpair failed and we were unable to recover it. 00:37:40.378 [2024-11-17 02:57:48.575622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.378 [2024-11-17 02:57:48.575659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.378 qpair failed and we were unable to recover it. 00:37:40.378 [2024-11-17 02:57:48.575773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.378 [2024-11-17 02:57:48.575809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.378 qpair failed and we were unable to recover it. 00:37:40.378 [2024-11-17 02:57:48.575969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.378 [2024-11-17 02:57:48.576005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.378 qpair failed and we were unable to recover it. 
00:37:40.378 [2024-11-17 02:57:48.576167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.378 [2024-11-17 02:57:48.576204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.378 qpair failed and we were unable to recover it. 00:37:40.378 [2024-11-17 02:57:48.576341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.378 [2024-11-17 02:57:48.576377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.378 qpair failed and we were unable to recover it. 00:37:40.378 [2024-11-17 02:57:48.576508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.378 [2024-11-17 02:57:48.576544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.378 qpair failed and we were unable to recover it. 00:37:40.378 [2024-11-17 02:57:48.576703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.378 [2024-11-17 02:57:48.576738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.378 qpair failed and we were unable to recover it. 00:37:40.378 [2024-11-17 02:57:48.576852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.378 [2024-11-17 02:57:48.576888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.378 qpair failed and we were unable to recover it. 
00:37:40.378 [2024-11-17 02:57:48.577033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.378 [2024-11-17 02:57:48.577067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.378 qpair failed and we were unable to recover it. 00:37:40.378 [2024-11-17 02:57:48.577214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.378 [2024-11-17 02:57:48.577249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.378 qpair failed and we were unable to recover it. 00:37:40.378 [2024-11-17 02:57:48.577394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.378 [2024-11-17 02:57:48.577433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.378 qpair failed and we were unable to recover it. 00:37:40.378 [2024-11-17 02:57:48.577618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.378 [2024-11-17 02:57:48.577653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.378 qpair failed and we were unable to recover it. 00:37:40.378 [2024-11-17 02:57:48.577760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.378 [2024-11-17 02:57:48.577793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.378 qpair failed and we were unable to recover it. 
00:37:40.378 [2024-11-17 02:57:48.577900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.378 [2024-11-17 02:57:48.577937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.378 qpair failed and we were unable to recover it. 00:37:40.378 [2024-11-17 02:57:48.578077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.378 [2024-11-17 02:57:48.578121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.378 qpair failed and we were unable to recover it. 00:37:40.378 [2024-11-17 02:57:48.578268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.378 [2024-11-17 02:57:48.578304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.378 qpair failed and we were unable to recover it. 00:37:40.378 [2024-11-17 02:57:48.578465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.378 [2024-11-17 02:57:48.578501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.378 qpair failed and we were unable to recover it. 00:37:40.378 [2024-11-17 02:57:48.578606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.378 [2024-11-17 02:57:48.578641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.378 qpair failed and we were unable to recover it. 
00:37:40.378 [2024-11-17 02:57:48.578774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.378 [2024-11-17 02:57:48.578810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.378 qpair failed and we were unable to recover it. 00:37:40.378 [2024-11-17 02:57:48.578976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.378 [2024-11-17 02:57:48.579012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.378 qpair failed and we were unable to recover it. 00:37:40.378 [2024-11-17 02:57:48.579140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.378 [2024-11-17 02:57:48.579176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.378 qpair failed and we were unable to recover it. 00:37:40.378 [2024-11-17 02:57:48.579330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.378 [2024-11-17 02:57:48.579398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.378 qpair failed and we were unable to recover it. 00:37:40.378 [2024-11-17 02:57:48.579620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.378 [2024-11-17 02:57:48.579660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.378 qpair failed and we were unable to recover it. 
00:37:40.378 [2024-11-17 02:57:48.579808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.378 [2024-11-17 02:57:48.579851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.378 qpair failed and we were unable to recover it. 00:37:40.378 [2024-11-17 02:57:48.580024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.378 [2024-11-17 02:57:48.580063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.378 qpair failed and we were unable to recover it. 00:37:40.378 [2024-11-17 02:57:48.580203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.378 [2024-11-17 02:57:48.580241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.378 qpair failed and we were unable to recover it. 00:37:40.378 [2024-11-17 02:57:48.580413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.378 [2024-11-17 02:57:48.580449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.378 qpair failed and we were unable to recover it. 00:37:40.378 [2024-11-17 02:57:48.580585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.378 [2024-11-17 02:57:48.580621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.378 qpair failed and we were unable to recover it. 
00:37:40.378 [2024-11-17 02:57:48.580764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.378 [2024-11-17 02:57:48.580801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.378 qpair failed and we were unable to recover it.
00:37:40.378 [2024-11-17 02:57:48.580936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.378 [2024-11-17 02:57:48.580972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.378 qpair failed and we were unable to recover it.
00:37:40.378 [2024-11-17 02:57:48.581151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.378 [2024-11-17 02:57:48.581202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.379 qpair failed and we were unable to recover it.
00:37:40.379 [2024-11-17 02:57:48.581371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.379 [2024-11-17 02:57:48.581427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.379 qpair failed and we were unable to recover it.
00:37:40.379 [2024-11-17 02:57:48.581560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.379 [2024-11-17 02:57:48.581596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.379 qpair failed and we were unable to recover it.
00:37:40.379 [2024-11-17 02:57:48.581742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.379 [2024-11-17 02:57:48.581795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.379 qpair failed and we were unable to recover it.
00:37:40.379 [2024-11-17 02:57:48.581986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.379 [2024-11-17 02:57:48.582022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.379 qpair failed and we were unable to recover it.
00:37:40.379 [2024-11-17 02:57:48.582153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.379 [2024-11-17 02:57:48.582189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.379 qpair failed and we were unable to recover it.
00:37:40.379 [2024-11-17 02:57:48.582328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.379 [2024-11-17 02:57:48.582365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.379 qpair failed and we were unable to recover it.
00:37:40.379 [2024-11-17 02:57:48.582537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.379 [2024-11-17 02:57:48.582573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.379 qpair failed and we were unable to recover it.
00:37:40.379 [2024-11-17 02:57:48.582743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.379 [2024-11-17 02:57:48.582779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.379 qpair failed and we were unable to recover it.
00:37:40.379 [2024-11-17 02:57:48.582913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.379 [2024-11-17 02:57:48.582949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.379 qpair failed and we were unable to recover it.
00:37:40.379 [2024-11-17 02:57:48.583119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.379 [2024-11-17 02:57:48.583156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.379 qpair failed and we were unable to recover it.
00:37:40.379 [2024-11-17 02:57:48.583272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.379 [2024-11-17 02:57:48.583307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.379 qpair failed and we were unable to recover it.
00:37:40.379 [2024-11-17 02:57:48.583442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.379 [2024-11-17 02:57:48.583479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.379 qpair failed and we were unable to recover it.
00:37:40.379 [2024-11-17 02:57:48.583604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.379 [2024-11-17 02:57:48.583640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.379 qpair failed and we were unable to recover it.
00:37:40.379 [2024-11-17 02:57:48.583769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.379 [2024-11-17 02:57:48.583819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.379 qpair failed and we were unable to recover it.
00:37:40.379 [2024-11-17 02:57:48.583947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.379 [2024-11-17 02:57:48.583983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.379 qpair failed and we were unable to recover it.
00:37:40.379 [2024-11-17 02:57:48.584143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.379 [2024-11-17 02:57:48.584179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.379 qpair failed and we were unable to recover it.
00:37:40.379 [2024-11-17 02:57:48.584316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.379 [2024-11-17 02:57:48.584351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.379 qpair failed and we were unable to recover it.
00:37:40.379 [2024-11-17 02:57:48.584485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.379 [2024-11-17 02:57:48.584521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.379 qpair failed and we were unable to recover it.
00:37:40.379 [2024-11-17 02:57:48.584732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.379 [2024-11-17 02:57:48.584797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.379 qpair failed and we were unable to recover it.
00:37:40.379 [2024-11-17 02:57:48.584955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.379 [2024-11-17 02:57:48.585000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.379 qpair failed and we were unable to recover it.
00:37:40.379 [2024-11-17 02:57:48.585144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.379 [2024-11-17 02:57:48.585182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.379 qpair failed and we were unable to recover it.
00:37:40.379 [2024-11-17 02:57:48.585320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.379 [2024-11-17 02:57:48.585354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.379 qpair failed and we were unable to recover it.
00:37:40.379 [2024-11-17 02:57:48.585483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.379 [2024-11-17 02:57:48.585518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.379 qpair failed and we were unable to recover it.
00:37:40.379 [2024-11-17 02:57:48.585615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.379 [2024-11-17 02:57:48.585647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.379 qpair failed and we were unable to recover it.
00:37:40.379 [2024-11-17 02:57:48.585768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.379 [2024-11-17 02:57:48.585823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.379 qpair failed and we were unable to recover it.
00:37:40.379 [2024-11-17 02:57:48.585965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.379 [2024-11-17 02:57:48.586001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.379 qpair failed and we were unable to recover it.
00:37:40.379 [2024-11-17 02:57:48.586146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.379 [2024-11-17 02:57:48.586183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.379 qpair failed and we were unable to recover it.
00:37:40.379 [2024-11-17 02:57:48.586340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.379 [2024-11-17 02:57:48.586376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.379 qpair failed and we were unable to recover it.
00:37:40.379 [2024-11-17 02:57:48.586514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.379 [2024-11-17 02:57:48.586550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.379 qpair failed and we were unable to recover it.
00:37:40.379 [2024-11-17 02:57:48.586655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.379 [2024-11-17 02:57:48.586689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.379 qpair failed and we were unable to recover it.
00:37:40.379 [2024-11-17 02:57:48.586826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.379 [2024-11-17 02:57:48.586865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.379 qpair failed and we were unable to recover it.
00:37:40.379 [2024-11-17 02:57:48.587026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.379 [2024-11-17 02:57:48.587062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.379 qpair failed and we were unable to recover it.
00:37:40.379 [2024-11-17 02:57:48.587199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.379 [2024-11-17 02:57:48.587240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.379 qpair failed and we were unable to recover it.
00:37:40.379 [2024-11-17 02:57:48.587344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.379 [2024-11-17 02:57:48.587379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.379 qpair failed and we were unable to recover it.
00:37:40.380 [2024-11-17 02:57:48.587522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.380 [2024-11-17 02:57:48.587557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.380 qpair failed and we were unable to recover it.
00:37:40.380 [2024-11-17 02:57:48.587683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.380 [2024-11-17 02:57:48.587736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.380 qpair failed and we were unable to recover it.
00:37:40.380 [2024-11-17 02:57:48.587887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.380 [2024-11-17 02:57:48.587927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.380 qpair failed and we were unable to recover it.
00:37:40.380 [2024-11-17 02:57:48.588055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.380 [2024-11-17 02:57:48.588091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.380 qpair failed and we were unable to recover it.
00:37:40.380 [2024-11-17 02:57:48.588262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.380 [2024-11-17 02:57:48.588296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.380 qpair failed and we were unable to recover it.
00:37:40.380 [2024-11-17 02:57:48.588416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.380 [2024-11-17 02:57:48.588451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.380 qpair failed and we were unable to recover it.
00:37:40.380 [2024-11-17 02:57:48.588626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.380 [2024-11-17 02:57:48.588660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.380 qpair failed and we were unable to recover it.
00:37:40.380 [2024-11-17 02:57:48.588790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.380 [2024-11-17 02:57:48.588824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.380 qpair failed and we were unable to recover it.
00:37:40.380 [2024-11-17 02:57:48.588964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.380 [2024-11-17 02:57:48.589004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.380 qpair failed and we were unable to recover it.
00:37:40.380 [2024-11-17 02:57:48.589168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.380 [2024-11-17 02:57:48.589204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.380 qpair failed and we were unable to recover it.
00:37:40.380 [2024-11-17 02:57:48.589308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.380 [2024-11-17 02:57:48.589342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.380 qpair failed and we were unable to recover it.
00:37:40.380 [2024-11-17 02:57:48.589500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.380 [2024-11-17 02:57:48.589540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.380 qpair failed and we were unable to recover it.
00:37:40.380 [2024-11-17 02:57:48.589720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.380 [2024-11-17 02:57:48.589756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.380 qpair failed and we were unable to recover it.
00:37:40.380 [2024-11-17 02:57:48.589885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.380 [2024-11-17 02:57:48.589938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.380 qpair failed and we were unable to recover it.
00:37:40.380 [2024-11-17 02:57:48.590119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.380 [2024-11-17 02:57:48.590160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.380 qpair failed and we were unable to recover it.
00:37:40.380 [2024-11-17 02:57:48.590299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.380 [2024-11-17 02:57:48.590335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.380 qpair failed and we were unable to recover it.
00:37:40.380 [2024-11-17 02:57:48.590475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.380 [2024-11-17 02:57:48.590509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.380 qpair failed and we were unable to recover it.
00:37:40.380 [2024-11-17 02:57:48.590723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.380 [2024-11-17 02:57:48.590759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.380 qpair failed and we were unable to recover it.
00:37:40.380 [2024-11-17 02:57:48.590941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.380 [2024-11-17 02:57:48.590994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.380 qpair failed and we were unable to recover it.
00:37:40.380 [2024-11-17 02:57:48.591179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.380 [2024-11-17 02:57:48.591232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.380 qpair failed and we were unable to recover it.
00:37:40.380 [2024-11-17 02:57:48.591378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.380 [2024-11-17 02:57:48.591416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.380 qpair failed and we were unable to recover it.
00:37:40.380 [2024-11-17 02:57:48.591555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.380 [2024-11-17 02:57:48.591591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.380 qpair failed and we were unable to recover it.
00:37:40.380 [2024-11-17 02:57:48.591700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.380 [2024-11-17 02:57:48.591737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.380 qpair failed and we were unable to recover it.
00:37:40.380 [2024-11-17 02:57:48.591887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.380 [2024-11-17 02:57:48.591938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.380 qpair failed and we were unable to recover it.
00:37:40.380 [2024-11-17 02:57:48.592101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.380 [2024-11-17 02:57:48.592137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.380 qpair failed and we were unable to recover it.
00:37:40.380 [2024-11-17 02:57:48.592310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.380 [2024-11-17 02:57:48.592348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.380 qpair failed and we were unable to recover it.
00:37:40.380 [2024-11-17 02:57:48.592489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.380 [2024-11-17 02:57:48.592540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.380 qpair failed and we were unable to recover it.
00:37:40.380 [2024-11-17 02:57:48.592678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.380 [2024-11-17 02:57:48.592714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.380 qpair failed and we were unable to recover it.
00:37:40.380 [2024-11-17 02:57:48.592853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.380 [2024-11-17 02:57:48.592887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.380 qpair failed and we were unable to recover it.
00:37:40.380 [2024-11-17 02:57:48.593017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.380 [2024-11-17 02:57:48.593052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.380 qpair failed and we were unable to recover it.
00:37:40.380 [2024-11-17 02:57:48.593193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.380 [2024-11-17 02:57:48.593229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.380 qpair failed and we were unable to recover it.
00:37:40.380 [2024-11-17 02:57:48.593358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.380 [2024-11-17 02:57:48.593413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.380 qpair failed and we were unable to recover it.
00:37:40.380 [2024-11-17 02:57:48.593557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.380 [2024-11-17 02:57:48.593595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.381 qpair failed and we were unable to recover it.
00:37:40.381 [2024-11-17 02:57:48.593763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.381 [2024-11-17 02:57:48.593797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.381 qpair failed and we were unable to recover it.
00:37:40.381 [2024-11-17 02:57:48.593941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.381 [2024-11-17 02:57:48.593977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.381 qpair failed and we were unable to recover it.
00:37:40.381 [2024-11-17 02:57:48.594130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.381 [2024-11-17 02:57:48.594181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.381 qpair failed and we were unable to recover it.
00:37:40.381 [2024-11-17 02:57:48.594352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.381 [2024-11-17 02:57:48.594402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.381 qpair failed and we were unable to recover it.
00:37:40.381 [2024-11-17 02:57:48.594597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.381 [2024-11-17 02:57:48.594653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.381 qpair failed and we were unable to recover it.
00:37:40.381 [2024-11-17 02:57:48.594816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.381 [2024-11-17 02:57:48.594877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.381 qpair failed and we were unable to recover it.
00:37:40.381 [2024-11-17 02:57:48.595004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.381 [2024-11-17 02:57:48.595041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.381 qpair failed and we were unable to recover it.
00:37:40.381 [2024-11-17 02:57:48.595196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.381 [2024-11-17 02:57:48.595247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.381 qpair failed and we were unable to recover it.
00:37:40.381 [2024-11-17 02:57:48.595388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.381 [2024-11-17 02:57:48.595425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.381 qpair failed and we were unable to recover it.
00:37:40.381 [2024-11-17 02:57:48.595583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.381 [2024-11-17 02:57:48.595642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.381 qpair failed and we were unable to recover it.
00:37:40.381 [2024-11-17 02:57:48.595817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.381 [2024-11-17 02:57:48.595875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.381 qpair failed and we were unable to recover it.
00:37:40.381 [2024-11-17 02:57:48.596018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.381 [2024-11-17 02:57:48.596064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.381 qpair failed and we were unable to recover it.
00:37:40.381 [2024-11-17 02:57:48.596231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.381 [2024-11-17 02:57:48.596283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.381 qpair failed and we were unable to recover it.
00:37:40.381 [2024-11-17 02:57:48.596427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.381 [2024-11-17 02:57:48.596510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.381 qpair failed and we were unable to recover it.
00:37:40.381 [2024-11-17 02:57:48.596620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.381 [2024-11-17 02:57:48.596658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.381 qpair failed and we were unable to recover it.
00:37:40.381 [2024-11-17 02:57:48.596826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.381 [2024-11-17 02:57:48.596881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.381 qpair failed and we were unable to recover it.
00:37:40.381 [2024-11-17 02:57:48.597026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.381 [2024-11-17 02:57:48.597073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.381 qpair failed and we were unable to recover it.
00:37:40.381 [2024-11-17 02:57:48.597233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.381 [2024-11-17 02:57:48.597288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.381 qpair failed and we were unable to recover it.
00:37:40.381 [2024-11-17 02:57:48.597408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.381 [2024-11-17 02:57:48.597450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.381 qpair failed and we were unable to recover it.
00:37:40.381 [2024-11-17 02:57:48.597626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.381 [2024-11-17 02:57:48.597689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.381 qpair failed and we were unable to recover it.
00:37:40.381 [2024-11-17 02:57:48.597905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.381 [2024-11-17 02:57:48.597973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.381 qpair failed and we were unable to recover it.
00:37:40.381 [2024-11-17 02:57:48.598176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.381 [2024-11-17 02:57:48.598212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.381 qpair failed and we were unable to recover it.
00:37:40.381 [2024-11-17 02:57:48.598399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.381 [2024-11-17 02:57:48.598438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.381 qpair failed and we were unable to recover it.
00:37:40.381 [2024-11-17 02:57:48.598619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.381 [2024-11-17 02:57:48.598658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.381 qpair failed and we were unable to recover it.
00:37:40.381 [2024-11-17 02:57:48.598822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.381 [2024-11-17 02:57:48.598881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.381 qpair failed and we were unable to recover it.
00:37:40.381 [2024-11-17 02:57:48.599008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.381 [2024-11-17 02:57:48.599043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.381 qpair failed and we were unable to recover it. 00:37:40.381 [2024-11-17 02:57:48.599170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.381 [2024-11-17 02:57:48.599205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.381 qpair failed and we were unable to recover it. 00:37:40.381 [2024-11-17 02:57:48.599323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.381 [2024-11-17 02:57:48.599362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.381 qpair failed and we were unable to recover it. 00:37:40.382 [2024-11-17 02:57:48.599557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.382 [2024-11-17 02:57:48.599597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.382 qpair failed and we were unable to recover it. 00:37:40.382 [2024-11-17 02:57:48.599801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.382 [2024-11-17 02:57:48.599840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.382 qpair failed and we were unable to recover it. 
00:37:40.382 [2024-11-17 02:57:48.599979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.382 [2024-11-17 02:57:48.600016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.382 qpair failed and we were unable to recover it. 00:37:40.382 [2024-11-17 02:57:48.600187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.382 [2024-11-17 02:57:48.600221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.382 qpair failed and we were unable to recover it. 00:37:40.382 [2024-11-17 02:57:48.600333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.382 [2024-11-17 02:57:48.600368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.382 qpair failed and we were unable to recover it. 00:37:40.382 [2024-11-17 02:57:48.600540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.382 [2024-11-17 02:57:48.600577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.382 qpair failed and we were unable to recover it. 00:37:40.382 [2024-11-17 02:57:48.600739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.382 [2024-11-17 02:57:48.600792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.382 qpair failed and we were unable to recover it. 
00:37:40.382 [2024-11-17 02:57:48.601021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.382 [2024-11-17 02:57:48.601058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.382 qpair failed and we were unable to recover it. 00:37:40.382 [2024-11-17 02:57:48.601245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.382 [2024-11-17 02:57:48.601278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.382 qpair failed and we were unable to recover it. 00:37:40.382 [2024-11-17 02:57:48.601464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.382 [2024-11-17 02:57:48.601519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.382 qpair failed and we were unable to recover it. 00:37:40.382 [2024-11-17 02:57:48.601816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.382 [2024-11-17 02:57:48.601858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.382 qpair failed and we were unable to recover it. 00:37:40.382 [2024-11-17 02:57:48.602018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.382 [2024-11-17 02:57:48.602058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.382 qpair failed and we were unable to recover it. 
00:37:40.382 [2024-11-17 02:57:48.602224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.382 [2024-11-17 02:57:48.602261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.382 qpair failed and we were unable to recover it. 00:37:40.382 [2024-11-17 02:57:48.602378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.382 [2024-11-17 02:57:48.602425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.382 qpair failed and we were unable to recover it. 00:37:40.382 [2024-11-17 02:57:48.602558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.382 [2024-11-17 02:57:48.602594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.382 qpair failed and we were unable to recover it. 00:37:40.382 [2024-11-17 02:57:48.602736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.382 [2024-11-17 02:57:48.602772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.382 qpair failed and we were unable to recover it. 00:37:40.382 [2024-11-17 02:57:48.602959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.382 [2024-11-17 02:57:48.603010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.382 qpair failed and we were unable to recover it. 
00:37:40.382 [2024-11-17 02:57:48.603173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.382 [2024-11-17 02:57:48.603229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.382 qpair failed and we were unable to recover it. 00:37:40.382 [2024-11-17 02:57:48.603359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.382 [2024-11-17 02:57:48.603426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.382 qpair failed and we were unable to recover it. 00:37:40.382 [2024-11-17 02:57:48.603689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.382 [2024-11-17 02:57:48.603750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.382 qpair failed and we were unable to recover it. 00:37:40.382 [2024-11-17 02:57:48.603931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.382 [2024-11-17 02:57:48.604029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.382 qpair failed and we were unable to recover it. 00:37:40.382 [2024-11-17 02:57:48.604192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.382 [2024-11-17 02:57:48.604228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.382 qpair failed and we were unable to recover it. 
00:37:40.382 [2024-11-17 02:57:48.604335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.382 [2024-11-17 02:57:48.604372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.382 qpair failed and we were unable to recover it. 00:37:40.382 [2024-11-17 02:57:48.604488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.382 [2024-11-17 02:57:48.604587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.382 qpair failed and we were unable to recover it. 00:37:40.382 [2024-11-17 02:57:48.604854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.382 [2024-11-17 02:57:48.604913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.382 qpair failed and we were unable to recover it. 00:37:40.382 [2024-11-17 02:57:48.605021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.382 [2024-11-17 02:57:48.605060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.382 qpair failed and we were unable to recover it. 00:37:40.382 [2024-11-17 02:57:48.605271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.382 [2024-11-17 02:57:48.605322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.383 qpair failed and we were unable to recover it. 
00:37:40.383 [2024-11-17 02:57:48.605539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.383 [2024-11-17 02:57:48.605594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.383 qpair failed and we were unable to recover it. 00:37:40.383 [2024-11-17 02:57:48.605764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.383 [2024-11-17 02:57:48.605832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.383 qpair failed and we were unable to recover it. 00:37:40.383 [2024-11-17 02:57:48.605994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.383 [2024-11-17 02:57:48.606031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.383 qpair failed and we were unable to recover it. 00:37:40.383 [2024-11-17 02:57:48.606146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.383 [2024-11-17 02:57:48.606182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.383 qpair failed and we were unable to recover it. 00:37:40.383 [2024-11-17 02:57:48.606304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.383 [2024-11-17 02:57:48.606339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.383 qpair failed and we were unable to recover it. 
00:37:40.383 [2024-11-17 02:57:48.606446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.383 [2024-11-17 02:57:48.606482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.383 qpair failed and we were unable to recover it. 00:37:40.383 [2024-11-17 02:57:48.606658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.383 [2024-11-17 02:57:48.606722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.383 qpair failed and we were unable to recover it. 00:37:40.383 [2024-11-17 02:57:48.606918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.383 [2024-11-17 02:57:48.606998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.383 qpair failed and we were unable to recover it. 00:37:40.383 [2024-11-17 02:57:48.607186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.383 [2024-11-17 02:57:48.607235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.383 qpair failed and we were unable to recover it. 00:37:40.383 [2024-11-17 02:57:48.607382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.383 [2024-11-17 02:57:48.607422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.383 qpair failed and we were unable to recover it. 
00:37:40.383 [2024-11-17 02:57:48.607592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.383 [2024-11-17 02:57:48.607678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.383 qpair failed and we were unable to recover it. 00:37:40.383 [2024-11-17 02:57:48.607845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.383 [2024-11-17 02:57:48.607903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.383 qpair failed and we were unable to recover it. 00:37:40.383 [2024-11-17 02:57:48.608035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.383 [2024-11-17 02:57:48.608069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.383 qpair failed and we were unable to recover it. 00:37:40.383 [2024-11-17 02:57:48.608210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.383 [2024-11-17 02:57:48.608245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.383 qpair failed and we were unable to recover it. 00:37:40.383 [2024-11-17 02:57:48.608425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.383 [2024-11-17 02:57:48.608476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.383 qpair failed and we were unable to recover it. 
00:37:40.383 [2024-11-17 02:57:48.608624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.383 [2024-11-17 02:57:48.608662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.383 qpair failed and we were unable to recover it. 00:37:40.383 [2024-11-17 02:57:48.608776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.383 [2024-11-17 02:57:48.608811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.383 qpair failed and we were unable to recover it. 00:37:40.383 [2024-11-17 02:57:48.608994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.383 [2024-11-17 02:57:48.609030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.383 qpair failed and we were unable to recover it. 00:37:40.383 [2024-11-17 02:57:48.609193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.383 [2024-11-17 02:57:48.609243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.383 qpair failed and we were unable to recover it. 00:37:40.383 [2024-11-17 02:57:48.609429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.383 [2024-11-17 02:57:48.609478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.383 qpair failed and we were unable to recover it. 
00:37:40.383 [2024-11-17 02:57:48.609656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.383 [2024-11-17 02:57:48.609717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.383 qpair failed and we were unable to recover it. 00:37:40.383 [2024-11-17 02:57:48.609924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.383 [2024-11-17 02:57:48.609980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.383 qpair failed and we were unable to recover it. 00:37:40.383 [2024-11-17 02:57:48.610130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.383 [2024-11-17 02:57:48.610166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.383 qpair failed and we were unable to recover it. 00:37:40.383 [2024-11-17 02:57:48.610319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.383 [2024-11-17 02:57:48.610375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.383 qpair failed and we were unable to recover it. 00:37:40.383 [2024-11-17 02:57:48.610562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.383 [2024-11-17 02:57:48.610614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.383 qpair failed and we were unable to recover it. 
00:37:40.383 [2024-11-17 02:57:48.610767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.383 [2024-11-17 02:57:48.610821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.383 qpair failed and we were unable to recover it. 00:37:40.383 [2024-11-17 02:57:48.610973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.383 [2024-11-17 02:57:48.611012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.383 qpair failed and we were unable to recover it. 00:37:40.383 [2024-11-17 02:57:48.611168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.383 [2024-11-17 02:57:48.611218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.383 qpair failed and we were unable to recover it. 00:37:40.383 [2024-11-17 02:57:48.611401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.383 [2024-11-17 02:57:48.611455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.383 qpair failed and we were unable to recover it. 00:37:40.383 [2024-11-17 02:57:48.611583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.383 [2024-11-17 02:57:48.611668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.383 qpair failed and we were unable to recover it. 
00:37:40.383 [2024-11-17 02:57:48.611880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.383 [2024-11-17 02:57:48.611942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.383 qpair failed and we were unable to recover it. 00:37:40.383 [2024-11-17 02:57:48.612112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.383 [2024-11-17 02:57:48.612148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.383 qpair failed and we were unable to recover it. 00:37:40.383 [2024-11-17 02:57:48.612316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.383 [2024-11-17 02:57:48.612369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.383 qpair failed and we were unable to recover it. 00:37:40.383 [2024-11-17 02:57:48.612524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.383 [2024-11-17 02:57:48.612580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.383 qpair failed and we were unable to recover it. 00:37:40.383 [2024-11-17 02:57:48.612687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.384 [2024-11-17 02:57:48.612721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.384 qpair failed and we were unable to recover it. 
00:37:40.384 [2024-11-17 02:57:48.612834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.384 [2024-11-17 02:57:48.612871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.384 qpair failed and we were unable to recover it. 00:37:40.384 [2024-11-17 02:57:48.613018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.384 [2024-11-17 02:57:48.613057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.384 qpair failed and we were unable to recover it. 00:37:40.384 [2024-11-17 02:57:48.613270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.384 [2024-11-17 02:57:48.613325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.384 qpair failed and we were unable to recover it. 00:37:40.384 [2024-11-17 02:57:48.613514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.384 [2024-11-17 02:57:48.613556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.384 qpair failed and we were unable to recover it. 00:37:40.384 [2024-11-17 02:57:48.613740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.384 [2024-11-17 02:57:48.613800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.384 qpair failed and we were unable to recover it. 
00:37:40.384 [2024-11-17 02:57:48.613961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.384 [2024-11-17 02:57:48.614001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.384 qpair failed and we were unable to recover it. 00:37:40.384 [2024-11-17 02:57:48.614186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.384 [2024-11-17 02:57:48.614222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.384 qpair failed and we were unable to recover it. 00:37:40.384 [2024-11-17 02:57:48.614338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.384 [2024-11-17 02:57:48.614376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.384 qpair failed and we were unable to recover it. 00:37:40.384 [2024-11-17 02:57:48.614547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.384 [2024-11-17 02:57:48.614587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.384 qpair failed and we were unable to recover it. 00:37:40.384 [2024-11-17 02:57:48.614747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.384 [2024-11-17 02:57:48.614798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.384 qpair failed and we were unable to recover it. 
00:37:40.384 [2024-11-17 02:57:48.614957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.384 [2024-11-17 02:57:48.614994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.384 qpair failed and we were unable to recover it. 00:37:40.384 [2024-11-17 02:57:48.615136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.384 [2024-11-17 02:57:48.615172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.384 qpair failed and we were unable to recover it. 00:37:40.384 [2024-11-17 02:57:48.615328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.384 [2024-11-17 02:57:48.615378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.384 qpair failed and we were unable to recover it. 00:37:40.384 [2024-11-17 02:57:48.615516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.384 [2024-11-17 02:57:48.615552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.384 qpair failed and we were unable to recover it. 00:37:40.384 [2024-11-17 02:57:48.615714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.384 [2024-11-17 02:57:48.615748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.384 qpair failed and we were unable to recover it. 
00:37:40.384 [2024-11-17 02:57:48.615905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.384 [2024-11-17 02:57:48.615943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.384 qpair failed and we were unable to recover it. 00:37:40.384 [2024-11-17 02:57:48.616087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.384 [2024-11-17 02:57:48.616147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.384 qpair failed and we were unable to recover it. 00:37:40.384 [2024-11-17 02:57:48.616328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.384 [2024-11-17 02:57:48.616378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.384 qpair failed and we were unable to recover it. 00:37:40.384 [2024-11-17 02:57:48.616559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.384 [2024-11-17 02:57:48.616615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.384 qpair failed and we were unable to recover it. 00:37:40.384 [2024-11-17 02:57:48.616764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.384 [2024-11-17 02:57:48.616815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.384 qpair failed and we were unable to recover it. 
00:37:40.384 [2024-11-17 02:57:48.616965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.384 [2024-11-17 02:57:48.617018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.384 qpair failed and we were unable to recover it. 00:37:40.384 [2024-11-17 02:57:48.617190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.384 [2024-11-17 02:57:48.617226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.384 qpair failed and we were unable to recover it. 00:37:40.384 [2024-11-17 02:57:48.617386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.384 [2024-11-17 02:57:48.617435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.384 qpair failed and we were unable to recover it. 00:37:40.384 [2024-11-17 02:57:48.617626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.384 [2024-11-17 02:57:48.617665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.384 qpair failed and we were unable to recover it. 00:37:40.384 [2024-11-17 02:57:48.617802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.384 [2024-11-17 02:57:48.617855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.384 qpair failed and we were unable to recover it. 
00:37:40.384 [2024-11-17 02:57:48.618013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.384 [2024-11-17 02:57:48.618047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.384 qpair failed and we were unable to recover it. 00:37:40.384 [2024-11-17 02:57:48.618198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.384 [2024-11-17 02:57:48.618233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.384 qpair failed and we were unable to recover it. 00:37:40.384 [2024-11-17 02:57:48.618359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.384 [2024-11-17 02:57:48.618393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.384 qpair failed and we were unable to recover it. 00:37:40.384 [2024-11-17 02:57:48.618545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.384 [2024-11-17 02:57:48.618582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.384 qpair failed and we were unable to recover it. 00:37:40.384 [2024-11-17 02:57:48.618781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.384 [2024-11-17 02:57:48.618819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.384 qpair failed and we were unable to recover it. 
00:37:40.384 [2024-11-17 02:57:48.619020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.384 [2024-11-17 02:57:48.619058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.384 qpair failed and we were unable to recover it. 00:37:40.384 [2024-11-17 02:57:48.619230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.384 [2024-11-17 02:57:48.619264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.384 qpair failed and we were unable to recover it. 00:37:40.384 [2024-11-17 02:57:48.619418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.384 [2024-11-17 02:57:48.619455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.384 qpair failed and we were unable to recover it. 00:37:40.385 [2024-11-17 02:57:48.619588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.385 [2024-11-17 02:57:48.619639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.385 qpair failed and we were unable to recover it. 00:37:40.385 [2024-11-17 02:57:48.619746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.385 [2024-11-17 02:57:48.619782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.385 qpair failed and we were unable to recover it. 
00:37:40.385 [2024-11-17 02:57:48.619913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.385 [2024-11-17 02:57:48.619951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.385 qpair failed and we were unable to recover it. 00:37:40.385 [2024-11-17 02:57:48.620091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.385 [2024-11-17 02:57:48.620151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.385 qpair failed and we were unable to recover it. 00:37:40.385 [2024-11-17 02:57:48.620311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.385 [2024-11-17 02:57:48.620345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.385 qpair failed and we were unable to recover it. 00:37:40.385 [2024-11-17 02:57:48.620440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.385 [2024-11-17 02:57:48.620490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.385 qpair failed and we were unable to recover it. 00:37:40.385 [2024-11-17 02:57:48.620638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.385 [2024-11-17 02:57:48.620675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.385 qpair failed and we were unable to recover it. 
00:37:40.385 [2024-11-17 02:57:48.620849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.385 [2024-11-17 02:57:48.620886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.385 qpair failed and we were unable to recover it. 00:37:40.385 [2024-11-17 02:57:48.621005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.385 [2024-11-17 02:57:48.621055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.385 qpair failed and we were unable to recover it. 00:37:40.385 [2024-11-17 02:57:48.621229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.385 [2024-11-17 02:57:48.621297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.385 qpair failed and we were unable to recover it. 00:37:40.385 [2024-11-17 02:57:48.621461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.385 [2024-11-17 02:57:48.621517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.385 qpair failed and we were unable to recover it. 00:37:40.385 [2024-11-17 02:57:48.621653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.385 [2024-11-17 02:57:48.621707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.385 qpair failed and we were unable to recover it. 
00:37:40.385 [2024-11-17 02:57:48.621860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.385 [2024-11-17 02:57:48.621916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.385 qpair failed and we were unable to recover it. 00:37:40.385 [2024-11-17 02:57:48.622094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.385 [2024-11-17 02:57:48.622160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.385 qpair failed and we were unable to recover it. 00:37:40.385 [2024-11-17 02:57:48.622318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.385 [2024-11-17 02:57:48.622368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.385 qpair failed and we were unable to recover it. 00:37:40.385 [2024-11-17 02:57:48.622572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.385 [2024-11-17 02:57:48.622638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.385 qpair failed and we were unable to recover it. 00:37:40.385 [2024-11-17 02:57:48.622772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.385 [2024-11-17 02:57:48.622813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.385 qpair failed and we were unable to recover it. 
00:37:40.385 [2024-11-17 02:57:48.622988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.385 [2024-11-17 02:57:48.623027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.385 qpair failed and we were unable to recover it. 00:37:40.385 [2024-11-17 02:57:48.623198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.385 [2024-11-17 02:57:48.623247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.385 qpair failed and we were unable to recover it. 00:37:40.385 [2024-11-17 02:57:48.623382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.385 [2024-11-17 02:57:48.623423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.385 qpair failed and we were unable to recover it. 00:37:40.385 [2024-11-17 02:57:48.623614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.385 [2024-11-17 02:57:48.623677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.385 qpair failed and we were unable to recover it. 00:37:40.385 [2024-11-17 02:57:48.623778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.385 [2024-11-17 02:57:48.623812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.385 qpair failed and we were unable to recover it. 
00:37:40.385 [2024-11-17 02:57:48.623922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.385 [2024-11-17 02:57:48.623959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.385 qpair failed and we were unable to recover it. 00:37:40.385 [2024-11-17 02:57:48.624102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.385 [2024-11-17 02:57:48.624137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.385 qpair failed and we were unable to recover it. 00:37:40.385 [2024-11-17 02:57:48.624231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.385 [2024-11-17 02:57:48.624283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.385 qpair failed and we were unable to recover it. 00:37:40.385 [2024-11-17 02:57:48.624396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.385 [2024-11-17 02:57:48.624433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.385 qpair failed and we were unable to recover it. 00:37:40.385 [2024-11-17 02:57:48.624646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.385 [2024-11-17 02:57:48.624684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.385 qpair failed and we were unable to recover it. 
00:37:40.385 [2024-11-17 02:57:48.624914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.385 [2024-11-17 02:57:48.624974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.385 qpair failed and we were unable to recover it. 00:37:40.385 [2024-11-17 02:57:48.625137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.385 [2024-11-17 02:57:48.625174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.385 qpair failed and we were unable to recover it. 00:37:40.385 [2024-11-17 02:57:48.625330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.385 [2024-11-17 02:57:48.625384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.385 qpair failed and we were unable to recover it. 00:37:40.385 [2024-11-17 02:57:48.625549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.385 [2024-11-17 02:57:48.625607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.385 qpair failed and we were unable to recover it. 00:37:40.385 [2024-11-17 02:57:48.625722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.385 [2024-11-17 02:57:48.625757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.385 qpair failed and we were unable to recover it. 
00:37:40.385 [2024-11-17 02:57:48.625885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.385 [2024-11-17 02:57:48.625933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.385 qpair failed and we were unable to recover it. 00:37:40.385 [2024-11-17 02:57:48.626080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.386 [2024-11-17 02:57:48.626151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.386 qpair failed and we were unable to recover it. 00:37:40.386 [2024-11-17 02:57:48.626269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.386 [2024-11-17 02:57:48.626304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.386 qpair failed and we were unable to recover it. 00:37:40.386 [2024-11-17 02:57:48.626429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.386 [2024-11-17 02:57:48.626467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.386 qpair failed and we were unable to recover it. 00:37:40.386 [2024-11-17 02:57:48.626653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.386 [2024-11-17 02:57:48.626710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.386 qpair failed and we were unable to recover it. 
00:37:40.386 [2024-11-17 02:57:48.626882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.386 [2024-11-17 02:57:48.626920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.386 qpair failed and we were unable to recover it. 00:37:40.386 [2024-11-17 02:57:48.627049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.386 [2024-11-17 02:57:48.627082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.386 qpair failed and we were unable to recover it. 00:37:40.386 [2024-11-17 02:57:48.627218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.386 [2024-11-17 02:57:48.627267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.386 qpair failed and we were unable to recover it. 00:37:40.386 [2024-11-17 02:57:48.627421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.386 [2024-11-17 02:57:48.627477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.386 qpair failed and we were unable to recover it. 00:37:40.386 [2024-11-17 02:57:48.627718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.386 [2024-11-17 02:57:48.627787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.386 qpair failed and we were unable to recover it. 
00:37:40.386 [2024-11-17 02:57:48.627929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.386 [2024-11-17 02:57:48.627974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.386 qpair failed and we were unable to recover it. 00:37:40.386 [2024-11-17 02:57:48.628170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.386 [2024-11-17 02:57:48.628207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.386 qpair failed and we were unable to recover it. 00:37:40.386 [2024-11-17 02:57:48.628343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.386 [2024-11-17 02:57:48.628378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.386 qpair failed and we were unable to recover it. 00:37:40.386 [2024-11-17 02:57:48.628487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.386 [2024-11-17 02:57:48.628542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.386 qpair failed and we were unable to recover it. 00:37:40.386 [2024-11-17 02:57:48.628689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.386 [2024-11-17 02:57:48.628729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.386 qpair failed and we were unable to recover it. 
00:37:40.386 [2024-11-17 02:57:48.628874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.386 [2024-11-17 02:57:48.628914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.386 qpair failed and we were unable to recover it. 00:37:40.386 [2024-11-17 02:57:48.629086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.386 [2024-11-17 02:57:48.629167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.386 qpair failed and we were unable to recover it. 00:37:40.386 [2024-11-17 02:57:48.629333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.386 [2024-11-17 02:57:48.629399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.386 qpair failed and we were unable to recover it. 00:37:40.386 [2024-11-17 02:57:48.629613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.386 [2024-11-17 02:57:48.629655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.386 qpair failed and we were unable to recover it. 00:37:40.386 [2024-11-17 02:57:48.629852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.386 [2024-11-17 02:57:48.629892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.386 qpair failed and we were unable to recover it. 
00:37:40.386 [2024-11-17 02:57:48.630026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.386 [2024-11-17 02:57:48.630072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.386 qpair failed and we were unable to recover it. 00:37:40.386 [2024-11-17 02:57:48.630201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.386 [2024-11-17 02:57:48.630238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.386 qpair failed and we were unable to recover it. 00:37:40.386 [2024-11-17 02:57:48.630406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.386 [2024-11-17 02:57:48.630443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.386 qpair failed and we were unable to recover it. 00:37:40.386 [2024-11-17 02:57:48.630547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.386 [2024-11-17 02:57:48.630600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.386 qpair failed and we were unable to recover it. 00:37:40.386 [2024-11-17 02:57:48.630787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.386 [2024-11-17 02:57:48.630886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.386 qpair failed and we were unable to recover it. 
00:37:40.386 [2024-11-17 02:57:48.631030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.386 [2024-11-17 02:57:48.631069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.386 qpair failed and we were unable to recover it. 00:37:40.386 [2024-11-17 02:57:48.631244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.386 [2024-11-17 02:57:48.631294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.386 qpair failed and we were unable to recover it. 00:37:40.387 [2024-11-17 02:57:48.631450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.387 [2024-11-17 02:57:48.631500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.387 qpair failed and we were unable to recover it. 00:37:40.387 [2024-11-17 02:57:48.631697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.387 [2024-11-17 02:57:48.631759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.387 qpair failed and we were unable to recover it. 00:37:40.387 [2024-11-17 02:57:48.631968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.387 [2024-11-17 02:57:48.632023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.387 qpair failed and we were unable to recover it. 
00:37:40.387 [2024-11-17 02:57:48.632156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.387 [2024-11-17 02:57:48.632192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.387 qpair failed and we were unable to recover it. 00:37:40.387 [2024-11-17 02:57:48.632312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.387 [2024-11-17 02:57:48.632362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.387 qpair failed and we were unable to recover it. 00:37:40.387 [2024-11-17 02:57:48.632554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.387 [2024-11-17 02:57:48.632595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.387 qpair failed and we were unable to recover it. 00:37:40.387 [2024-11-17 02:57:48.632845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.387 [2024-11-17 02:57:48.632905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.387 qpair failed and we were unable to recover it. 00:37:40.387 [2024-11-17 02:57:48.633026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.387 [2024-11-17 02:57:48.633061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.387 qpair failed and we were unable to recover it. 
00:37:40.387 [2024-11-17 02:57:48.633187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.387 [2024-11-17 02:57:48.633222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.387 qpair failed and we were unable to recover it. 00:37:40.387 [2024-11-17 02:57:48.633392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.387 [2024-11-17 02:57:48.633441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.387 qpair failed and we were unable to recover it. 00:37:40.387 [2024-11-17 02:57:48.633662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.387 [2024-11-17 02:57:48.633701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.387 qpair failed and we were unable to recover it. 00:37:40.387 [2024-11-17 02:57:48.633968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.387 [2024-11-17 02:57:48.634028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.387 qpair failed and we were unable to recover it. 00:37:40.387 [2024-11-17 02:57:48.634211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.387 [2024-11-17 02:57:48.634247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.387 qpair failed and we were unable to recover it. 
00:37:40.387 [2024-11-17 02:57:48.634413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.387 [2024-11-17 02:57:48.634446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.387 qpair failed and we were unable to recover it. 00:37:40.387 [2024-11-17 02:57:48.634634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.387 [2024-11-17 02:57:48.634695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.387 qpair failed and we were unable to recover it. 00:37:40.387 [2024-11-17 02:57:48.634887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.387 [2024-11-17 02:57:48.634946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.387 qpair failed and we were unable to recover it. 00:37:40.387 [2024-11-17 02:57:48.635092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.387 [2024-11-17 02:57:48.635134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.387 qpair failed and we were unable to recover it. 00:37:40.387 [2024-11-17 02:57:48.635267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.387 [2024-11-17 02:57:48.635300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.387 qpair failed and we were unable to recover it. 
00:37:40.387 [2024-11-17 02:57:48.635527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.387 [2024-11-17 02:57:48.635582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.387 qpair failed and we were unable to recover it. 00:37:40.387 [2024-11-17 02:57:48.635726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.387 [2024-11-17 02:57:48.635780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.387 qpair failed and we were unable to recover it. 00:37:40.387 [2024-11-17 02:57:48.635930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.387 [2024-11-17 02:57:48.635971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.387 qpair failed and we were unable to recover it. 00:37:40.387 [2024-11-17 02:57:48.636148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.387 [2024-11-17 02:57:48.636201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.387 qpair failed and we were unable to recover it. 00:37:40.387 [2024-11-17 02:57:48.636341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.387 [2024-11-17 02:57:48.636376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.387 qpair failed and we were unable to recover it. 
00:37:40.387 [2024-11-17 02:57:48.636581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.387 [2024-11-17 02:57:48.636643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.387 qpair failed and we were unable to recover it. 00:37:40.387 [2024-11-17 02:57:48.636801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.387 [2024-11-17 02:57:48.636842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.387 qpair failed and we were unable to recover it. 00:37:40.387 [2024-11-17 02:57:48.637042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.387 [2024-11-17 02:57:48.637092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.387 qpair failed and we were unable to recover it. 00:37:40.387 [2024-11-17 02:57:48.637224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.387 [2024-11-17 02:57:48.637259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.387 qpair failed and we were unable to recover it. 00:37:40.387 [2024-11-17 02:57:48.637422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.387 [2024-11-17 02:57:48.637459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.387 qpair failed and we were unable to recover it. 
00:37:40.387 [2024-11-17 02:57:48.637602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.387 [2024-11-17 02:57:48.637641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.387 qpair failed and we were unable to recover it. 00:37:40.387 [2024-11-17 02:57:48.637851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.387 [2024-11-17 02:57:48.637890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.387 qpair failed and we were unable to recover it. 00:37:40.387 [2024-11-17 02:57:48.638054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.387 [2024-11-17 02:57:48.638106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.387 qpair failed and we were unable to recover it. 00:37:40.387 [2024-11-17 02:57:48.638256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.387 [2024-11-17 02:57:48.638292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.387 qpair failed and we were unable to recover it. 00:37:40.387 [2024-11-17 02:57:48.638388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.388 [2024-11-17 02:57:48.638441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.388 qpair failed and we were unable to recover it. 
00:37:40.388 [2024-11-17 02:57:48.638664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.388 [2024-11-17 02:57:48.638723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.388 qpair failed and we were unable to recover it.
00:37:40.388 [2024-11-17 02:57:48.638976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.388 [2024-11-17 02:57:48.639034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.388 qpair failed and we were unable to recover it.
00:37:40.388 [2024-11-17 02:57:48.639229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.388 [2024-11-17 02:57:48.639265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.388 qpair failed and we were unable to recover it.
00:37:40.388 [2024-11-17 02:57:48.639423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.388 [2024-11-17 02:57:48.639464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.388 qpair failed and we were unable to recover it.
00:37:40.388 [2024-11-17 02:57:48.639748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.388 [2024-11-17 02:57:48.639820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.388 qpair failed and we were unable to recover it.
00:37:40.388 [2024-11-17 02:57:48.640006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.388 [2024-11-17 02:57:48.640057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.388 qpair failed and we were unable to recover it.
00:37:40.388 [2024-11-17 02:57:48.640194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.388 [2024-11-17 02:57:48.640231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.388 qpair failed and we were unable to recover it.
00:37:40.388 [2024-11-17 02:57:48.640365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.388 [2024-11-17 02:57:48.640405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.388 qpair failed and we were unable to recover it.
00:37:40.388 [2024-11-17 02:57:48.640658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.388 [2024-11-17 02:57:48.640718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.388 qpair failed and we were unable to recover it.
00:37:40.388 [2024-11-17 02:57:48.640838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.388 [2024-11-17 02:57:48.640876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.388 qpair failed and we were unable to recover it.
00:37:40.388 [2024-11-17 02:57:48.641017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.388 [2024-11-17 02:57:48.641056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.388 qpair failed and we were unable to recover it.
00:37:40.388 [2024-11-17 02:57:48.641202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.388 [2024-11-17 02:57:48.641237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.388 qpair failed and we were unable to recover it.
00:37:40.388 [2024-11-17 02:57:48.641362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.388 [2024-11-17 02:57:48.641411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.388 qpair failed and we were unable to recover it.
00:37:40.388 [2024-11-17 02:57:48.641686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.388 [2024-11-17 02:57:48.641741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.388 qpair failed and we were unable to recover it.
00:37:40.388 [2024-11-17 02:57:48.641962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.388 [2024-11-17 02:57:48.641998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.388 qpair failed and we were unable to recover it.
00:37:40.388 [2024-11-17 02:57:48.642158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.388 [2024-11-17 02:57:48.642212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.388 qpair failed and we were unable to recover it.
00:37:40.388 [2024-11-17 02:57:48.642402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.388 [2024-11-17 02:57:48.642457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.388 qpair failed and we were unable to recover it.
00:37:40.388 [2024-11-17 02:57:48.642723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.388 [2024-11-17 02:57:48.642778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.388 qpair failed and we were unable to recover it.
00:37:40.388 [2024-11-17 02:57:48.643048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.388 [2024-11-17 02:57:48.643117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.388 qpair failed and we were unable to recover it.
00:37:40.388 [2024-11-17 02:57:48.643245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.388 [2024-11-17 02:57:48.643279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.388 qpair failed and we were unable to recover it.
00:37:40.388 [2024-11-17 02:57:48.643414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.388 [2024-11-17 02:57:48.643459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.388 qpair failed and we were unable to recover it.
00:37:40.388 [2024-11-17 02:57:48.643599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.388 [2024-11-17 02:57:48.643634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.388 qpair failed and we were unable to recover it.
00:37:40.388 [2024-11-17 02:57:48.643794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.388 [2024-11-17 02:57:48.643849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.388 qpair failed and we were unable to recover it.
00:37:40.388 [2024-11-17 02:57:48.643979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.388 [2024-11-17 02:57:48.644014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.388 qpair failed and we were unable to recover it.
00:37:40.388 [2024-11-17 02:57:48.644133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.388 [2024-11-17 02:57:48.644169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.388 qpair failed and we were unable to recover it.
00:37:40.388 [2024-11-17 02:57:48.644286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.388 [2024-11-17 02:57:48.644321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.388 qpair failed and we were unable to recover it.
00:37:40.388 [2024-11-17 02:57:48.644435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.388 [2024-11-17 02:57:48.644471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.388 qpair failed and we were unable to recover it.
00:37:40.388 [2024-11-17 02:57:48.644606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.388 [2024-11-17 02:57:48.644652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.388 qpair failed and we were unable to recover it.
00:37:40.389 [2024-11-17 02:57:48.644791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.389 [2024-11-17 02:57:48.644828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.389 qpair failed and we were unable to recover it.
00:37:40.389 [2024-11-17 02:57:48.644966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.389 [2024-11-17 02:57:48.645001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.389 qpair failed and we were unable to recover it.
00:37:40.389 [2024-11-17 02:57:48.645179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.389 [2024-11-17 02:57:48.645240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.389 qpair failed and we were unable to recover it.
00:37:40.389 [2024-11-17 02:57:48.645398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.389 [2024-11-17 02:57:48.645440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.389 qpair failed and we were unable to recover it.
00:37:40.389 [2024-11-17 02:57:48.645619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.389 [2024-11-17 02:57:48.645658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.389 qpair failed and we were unable to recover it.
00:37:40.389 [2024-11-17 02:57:48.645841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.389 [2024-11-17 02:57:48.645876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.389 qpair failed and we were unable to recover it.
00:37:40.389 [2024-11-17 02:57:48.645985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.389 [2024-11-17 02:57:48.646020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.389 qpair failed and we were unable to recover it.
00:37:40.389 [2024-11-17 02:57:48.646192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.389 [2024-11-17 02:57:48.646241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.389 qpair failed and we were unable to recover it.
00:37:40.389 [2024-11-17 02:57:48.646438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.389 [2024-11-17 02:57:48.646492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.389 qpair failed and we were unable to recover it.
00:37:40.389 [2024-11-17 02:57:48.646677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.389 [2024-11-17 02:57:48.646732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.389 qpair failed and we were unable to recover it.
00:37:40.389 [2024-11-17 02:57:48.646829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.389 [2024-11-17 02:57:48.646863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.389 qpair failed and we were unable to recover it.
00:37:40.389 [2024-11-17 02:57:48.647007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.389 [2024-11-17 02:57:48.647042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.389 qpair failed and we were unable to recover it.
00:37:40.389 [2024-11-17 02:57:48.647199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.389 [2024-11-17 02:57:48.647253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.389 qpair failed and we were unable to recover it.
00:37:40.389 [2024-11-17 02:57:48.647410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.389 [2024-11-17 02:57:48.647451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.389 qpair failed and we were unable to recover it.
00:37:40.389 [2024-11-17 02:57:48.647651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.389 [2024-11-17 02:57:48.647718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.389 qpair failed and we were unable to recover it.
00:37:40.389 [2024-11-17 02:57:48.647930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.389 [2024-11-17 02:57:48.647968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.389 qpair failed and we were unable to recover it.
00:37:40.389 [2024-11-17 02:57:48.648151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.389 [2024-11-17 02:57:48.648188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.389 qpair failed and we were unable to recover it.
00:37:40.389 [2024-11-17 02:57:48.648343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.389 [2024-11-17 02:57:48.648403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.389 qpair failed and we were unable to recover it.
00:37:40.389 [2024-11-17 02:57:48.648723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.389 [2024-11-17 02:57:48.648785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.389 qpair failed and we were unable to recover it.
00:37:40.389 [2024-11-17 02:57:48.648940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.389 [2024-11-17 02:57:48.648983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.389 qpair failed and we were unable to recover it.
00:37:40.389 [2024-11-17 02:57:48.649179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.389 [2024-11-17 02:57:48.649214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.389 qpair failed and we were unable to recover it.
00:37:40.389 [2024-11-17 02:57:48.649346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.389 [2024-11-17 02:57:48.649393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.389 qpair failed and we were unable to recover it.
00:37:40.389 [2024-11-17 02:57:48.649573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.389 [2024-11-17 02:57:48.649628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.389 qpair failed and we were unable to recover it.
00:37:40.389 [2024-11-17 02:57:48.649830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.389 [2024-11-17 02:57:48.649885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.389 qpair failed and we were unable to recover it.
00:37:40.389 [2024-11-17 02:57:48.650078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.389 [2024-11-17 02:57:48.650146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.389 qpair failed and we were unable to recover it.
00:37:40.389 [2024-11-17 02:57:48.650310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.389 [2024-11-17 02:57:48.650349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.389 qpair failed and we were unable to recover it.
00:37:40.389 [2024-11-17 02:57:48.650499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.389 [2024-11-17 02:57:48.650539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.389 qpair failed and we were unable to recover it.
00:37:40.389 [2024-11-17 02:57:48.650664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.389 [2024-11-17 02:57:48.650705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.389 qpair failed and we were unable to recover it.
00:37:40.389 [2024-11-17 02:57:48.650830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.389 [2024-11-17 02:57:48.650870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.389 qpair failed and we were unable to recover it.
00:37:40.389 [2024-11-17 02:57:48.650996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.389 [2024-11-17 02:57:48.651040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.389 qpair failed and we were unable to recover it.
00:37:40.389 [2024-11-17 02:57:48.651226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.389 [2024-11-17 02:57:48.651263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.389 qpair failed and we were unable to recover it.
00:37:40.389 [2024-11-17 02:57:48.651419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.389 [2024-11-17 02:57:48.651476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.389 qpair failed and we were unable to recover it.
00:37:40.389 [2024-11-17 02:57:48.651668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.389 [2024-11-17 02:57:48.651724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.389 qpair failed and we were unable to recover it.
00:37:40.390 [2024-11-17 02:57:48.651871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.390 [2024-11-17 02:57:48.651940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.390 qpair failed and we were unable to recover it.
00:37:40.390 [2024-11-17 02:57:48.652124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.390 [2024-11-17 02:57:48.652161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.390 qpair failed and we were unable to recover it.
00:37:40.390 [2024-11-17 02:57:48.652283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.390 [2024-11-17 02:57:48.652346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.390 qpair failed and we were unable to recover it.
00:37:40.390 [2024-11-17 02:57:48.652511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.390 [2024-11-17 02:57:48.652565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.390 qpair failed and we were unable to recover it.
00:37:40.390 [2024-11-17 02:57:48.652716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.390 [2024-11-17 02:57:48.652769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.390 qpair failed and we were unable to recover it.
00:37:40.390 [2024-11-17 02:57:48.652903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.390 [2024-11-17 02:57:48.652939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.390 qpair failed and we were unable to recover it.
00:37:40.390 [2024-11-17 02:57:48.653077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.390 [2024-11-17 02:57:48.653120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.390 qpair failed and we were unable to recover it.
00:37:40.390 [2024-11-17 02:57:48.653267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.390 [2024-11-17 02:57:48.653302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.390 qpair failed and we were unable to recover it.
00:37:40.390 [2024-11-17 02:57:48.653473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.390 [2024-11-17 02:57:48.653510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.390 qpair failed and we were unable to recover it.
00:37:40.390 [2024-11-17 02:57:48.653644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.390 [2024-11-17 02:57:48.653683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.390 qpair failed and we were unable to recover it.
00:37:40.390 [2024-11-17 02:57:48.653802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.390 [2024-11-17 02:57:48.653852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.390 qpair failed and we were unable to recover it.
00:37:40.390 [2024-11-17 02:57:48.653999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.390 [2024-11-17 02:57:48.654035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.390 qpair failed and we were unable to recover it.
00:37:40.390 [2024-11-17 02:57:48.654205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.390 [2024-11-17 02:57:48.654259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.390 qpair failed and we were unable to recover it.
00:37:40.390 [2024-11-17 02:57:48.654369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.390 [2024-11-17 02:57:48.654413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.390 qpair failed and we were unable to recover it.
00:37:40.390 [2024-11-17 02:57:48.654657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.390 [2024-11-17 02:57:48.654716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.390 qpair failed and we were unable to recover it.
00:37:40.390 [2024-11-17 02:57:48.654878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.390 [2024-11-17 02:57:48.654913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.390 qpair failed and we were unable to recover it.
00:37:40.390 [2024-11-17 02:57:48.655081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.390 [2024-11-17 02:57:48.655128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.390 qpair failed and we were unable to recover it.
00:37:40.390 [2024-11-17 02:57:48.655267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.390 [2024-11-17 02:57:48.655302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.390 qpair failed and we were unable to recover it.
00:37:40.390 [2024-11-17 02:57:48.655456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.390 [2024-11-17 02:57:48.655495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.390 qpair failed and we were unable to recover it.
00:37:40.390 [2024-11-17 02:57:48.655694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.390 [2024-11-17 02:57:48.655734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.390 qpair failed and we were unable to recover it.
00:37:40.390 [2024-11-17 02:57:48.655957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.390 [2024-11-17 02:57:48.656007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.390 qpair failed and we were unable to recover it.
00:37:40.390 [2024-11-17 02:57:48.656131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.390 [2024-11-17 02:57:48.656169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.390 qpair failed and we were unable to recover it.
00:37:40.390 [2024-11-17 02:57:48.656331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.390 [2024-11-17 02:57:48.656370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.390 qpair failed and we were unable to recover it.
00:37:40.390 [2024-11-17 02:57:48.656493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.390 [2024-11-17 02:57:48.656531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.390 qpair failed and we were unable to recover it.
00:37:40.390 [2024-11-17 02:57:48.656753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.390 [2024-11-17 02:57:48.656823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.390 qpair failed and we were unable to recover it.
00:37:40.390 [2024-11-17 02:57:48.657011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.390 [2024-11-17 02:57:48.657044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.390 qpair failed and we were unable to recover it.
00:37:40.390 [2024-11-17 02:57:48.657190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.390 [2024-11-17 02:57:48.657225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.390 qpair failed and we were unable to recover it.
00:37:40.390 [2024-11-17 02:57:48.657373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.390 [2024-11-17 02:57:48.657423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.390 qpair failed and we were unable to recover it.
00:37:40.390 [2024-11-17 02:57:48.657627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.390 [2024-11-17 02:57:48.657708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.390 qpair failed and we were unable to recover it.
00:37:40.390 [2024-11-17 02:57:48.657895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.390 [2024-11-17 02:57:48.657953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.390 qpair failed and we were unable to recover it.
00:37:40.390 [2024-11-17 02:57:48.658103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.390 [2024-11-17 02:57:48.658150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.390 qpair failed and we were unable to recover it.
00:37:40.390 [2024-11-17 02:57:48.658249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.390 [2024-11-17 02:57:48.658282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.390 qpair failed and we were unable to recover it.
00:37:40.390 [2024-11-17 02:57:48.658450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.390 [2024-11-17 02:57:48.658488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.390 qpair failed and we were unable to recover it.
00:37:40.390 [2024-11-17 02:57:48.658701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.390 [2024-11-17 02:57:48.658770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.390 qpair failed and we were unable to recover it.
00:37:40.390 [2024-11-17 02:57:48.658920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.390 [2024-11-17 02:57:48.658969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.390 qpair failed and we were unable to recover it. 00:37:40.390 [2024-11-17 02:57:48.659178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.390 [2024-11-17 02:57:48.659212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.390 qpair failed and we were unable to recover it. 00:37:40.390 [2024-11-17 02:57:48.659338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.390 [2024-11-17 02:57:48.659394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.390 qpair failed and we were unable to recover it. 00:37:40.391 [2024-11-17 02:57:48.659552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.391 [2024-11-17 02:57:48.659607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.391 qpair failed and we were unable to recover it. 00:37:40.391 [2024-11-17 02:57:48.659764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.391 [2024-11-17 02:57:48.659820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.391 qpair failed and we were unable to recover it. 
00:37:40.391 [2024-11-17 02:57:48.659958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.391 [2024-11-17 02:57:48.659994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.391 qpair failed and we were unable to recover it. 00:37:40.391 [2024-11-17 02:57:48.660123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.391 [2024-11-17 02:57:48.660185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.391 qpair failed and we were unable to recover it. 00:37:40.391 [2024-11-17 02:57:48.660310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.391 [2024-11-17 02:57:48.660359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.391 qpair failed and we were unable to recover it. 00:37:40.391 [2024-11-17 02:57:48.660634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.391 [2024-11-17 02:57:48.660676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.391 qpair failed and we were unable to recover it. 00:37:40.391 [2024-11-17 02:57:48.660789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.391 [2024-11-17 02:57:48.660829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.391 qpair failed and we were unable to recover it. 
00:37:40.391 [2024-11-17 02:57:48.660970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.391 [2024-11-17 02:57:48.661022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.391 qpair failed and we were unable to recover it. 00:37:40.391 [2024-11-17 02:57:48.661137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.391 [2024-11-17 02:57:48.661173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.391 qpair failed and we were unable to recover it. 00:37:40.391 [2024-11-17 02:57:48.661341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.391 [2024-11-17 02:57:48.661378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.391 qpair failed and we were unable to recover it. 00:37:40.391 [2024-11-17 02:57:48.661514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.391 [2024-11-17 02:57:48.661575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.391 qpair failed and we were unable to recover it. 00:37:40.391 [2024-11-17 02:57:48.661703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.391 [2024-11-17 02:57:48.661742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.391 qpair failed and we were unable to recover it. 
00:37:40.391 [2024-11-17 02:57:48.661917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.391 [2024-11-17 02:57:48.661962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.391 qpair failed and we were unable to recover it. 00:37:40.391 [2024-11-17 02:57:48.662111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.391 [2024-11-17 02:57:48.662156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.391 qpair failed and we were unable to recover it. 00:37:40.391 [2024-11-17 02:57:48.662272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.391 [2024-11-17 02:57:48.662309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.391 qpair failed and we were unable to recover it. 00:37:40.391 [2024-11-17 02:57:48.662505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.391 [2024-11-17 02:57:48.662547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.391 qpair failed and we were unable to recover it. 00:37:40.391 [2024-11-17 02:57:48.662680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.391 [2024-11-17 02:57:48.662719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.391 qpair failed and we were unable to recover it. 
00:37:40.391 [2024-11-17 02:57:48.662912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.391 [2024-11-17 02:57:48.662980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.391 qpair failed and we were unable to recover it. 00:37:40.391 [2024-11-17 02:57:48.663139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.391 [2024-11-17 02:57:48.663194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.391 qpair failed and we were unable to recover it. 00:37:40.391 [2024-11-17 02:57:48.663367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.391 [2024-11-17 02:57:48.663411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.391 qpair failed and we were unable to recover it. 00:37:40.391 [2024-11-17 02:57:48.663542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.391 [2024-11-17 02:57:48.663580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.391 qpair failed and we were unable to recover it. 00:37:40.391 [2024-11-17 02:57:48.663684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.391 [2024-11-17 02:57:48.663722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.391 qpair failed and we were unable to recover it. 
00:37:40.391 [2024-11-17 02:57:48.663863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.391 [2024-11-17 02:57:48.663905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.391 qpair failed and we were unable to recover it. 00:37:40.391 [2024-11-17 02:57:48.664057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.391 [2024-11-17 02:57:48.664145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.391 qpair failed and we were unable to recover it. 00:37:40.391 [2024-11-17 02:57:48.664324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.391 [2024-11-17 02:57:48.664363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.391 qpair failed and we were unable to recover it. 00:37:40.391 [2024-11-17 02:57:48.664561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.391 [2024-11-17 02:57:48.664617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.391 qpair failed and we were unable to recover it. 00:37:40.391 [2024-11-17 02:57:48.664853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.391 [2024-11-17 02:57:48.664909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.391 qpair failed and we were unable to recover it. 
00:37:40.392 [2024-11-17 02:57:48.665028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.392 [2024-11-17 02:57:48.665064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.392 qpair failed and we were unable to recover it. 00:37:40.392 [2024-11-17 02:57:48.665261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.392 [2024-11-17 02:57:48.665330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.392 qpair failed and we were unable to recover it. 00:37:40.392 [2024-11-17 02:57:48.665557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.392 [2024-11-17 02:57:48.665599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.392 qpair failed and we were unable to recover it. 00:37:40.392 [2024-11-17 02:57:48.665812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.392 [2024-11-17 02:57:48.665852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.392 qpair failed and we were unable to recover it. 00:37:40.392 [2024-11-17 02:57:48.666042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.392 [2024-11-17 02:57:48.666078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.392 qpair failed and we were unable to recover it. 
00:37:40.392 [2024-11-17 02:57:48.666252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.392 [2024-11-17 02:57:48.666302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.392 qpair failed and we were unable to recover it. 00:37:40.392 [2024-11-17 02:57:48.666512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.392 [2024-11-17 02:57:48.666568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.392 qpair failed and we were unable to recover it. 00:37:40.392 [2024-11-17 02:57:48.666797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.392 [2024-11-17 02:57:48.666839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.392 qpair failed and we were unable to recover it. 00:37:40.392 [2024-11-17 02:57:48.667019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.392 [2024-11-17 02:57:48.667058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.392 qpair failed and we were unable to recover it. 00:37:40.392 [2024-11-17 02:57:48.667236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.392 [2024-11-17 02:57:48.667272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.392 qpair failed and we were unable to recover it. 
00:37:40.392 [2024-11-17 02:57:48.667452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.392 [2024-11-17 02:57:48.667502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.392 qpair failed and we were unable to recover it. 00:37:40.392 [2024-11-17 02:57:48.667641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.392 [2024-11-17 02:57:48.667685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.392 qpair failed and we were unable to recover it. 00:37:40.392 [2024-11-17 02:57:48.667827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.392 [2024-11-17 02:57:48.667865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.392 qpair failed and we were unable to recover it. 00:37:40.392 [2024-11-17 02:57:48.668028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.392 [2024-11-17 02:57:48.668064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.392 qpair failed and we were unable to recover it. 00:37:40.392 [2024-11-17 02:57:48.668219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.392 [2024-11-17 02:57:48.668254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.392 qpair failed and we were unable to recover it. 
00:37:40.392 [2024-11-17 02:57:48.668475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.392 [2024-11-17 02:57:48.668537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.392 qpair failed and we were unable to recover it. 00:37:40.392 [2024-11-17 02:57:48.668784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.392 [2024-11-17 02:57:48.668845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.392 qpair failed and we were unable to recover it. 00:37:40.392 [2024-11-17 02:57:48.669007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.392 [2024-11-17 02:57:48.669046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.392 qpair failed and we were unable to recover it. 00:37:40.392 [2024-11-17 02:57:48.669223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.392 [2024-11-17 02:57:48.669258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.392 qpair failed and we were unable to recover it. 00:37:40.392 [2024-11-17 02:57:48.669361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.392 [2024-11-17 02:57:48.669412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.392 qpair failed and we were unable to recover it. 
00:37:40.392 [2024-11-17 02:57:48.669609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.392 [2024-11-17 02:57:48.669674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.392 qpair failed and we were unable to recover it. 00:37:40.392 [2024-11-17 02:57:48.669930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.392 [2024-11-17 02:57:48.669989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.392 qpair failed and we were unable to recover it. 00:37:40.392 [2024-11-17 02:57:48.670114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.392 [2024-11-17 02:57:48.670181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.392 qpair failed and we were unable to recover it. 00:37:40.392 [2024-11-17 02:57:48.670333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.392 [2024-11-17 02:57:48.670371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.392 qpair failed and we were unable to recover it. 00:37:40.392 [2024-11-17 02:57:48.670599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.392 [2024-11-17 02:57:48.670639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.392 qpair failed and we were unable to recover it. 
00:37:40.392 [2024-11-17 02:57:48.670907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.392 [2024-11-17 02:57:48.670973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.392 qpair failed and we were unable to recover it. 00:37:40.392 [2024-11-17 02:57:48.671182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.392 [2024-11-17 02:57:48.671217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.392 qpair failed and we were unable to recover it. 00:37:40.392 [2024-11-17 02:57:48.671384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.392 [2024-11-17 02:57:48.671442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.392 qpair failed and we were unable to recover it. 00:37:40.392 [2024-11-17 02:57:48.671574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.392 [2024-11-17 02:57:48.671613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.392 qpair failed and we were unable to recover it. 00:37:40.392 [2024-11-17 02:57:48.671870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.393 [2024-11-17 02:57:48.671928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.393 qpair failed and we were unable to recover it. 
00:37:40.393 [2024-11-17 02:57:48.672093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.393 [2024-11-17 02:57:48.672135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.393 qpair failed and we were unable to recover it. 00:37:40.393 [2024-11-17 02:57:48.672276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.393 [2024-11-17 02:57:48.672313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.393 qpair failed and we were unable to recover it. 00:37:40.393 [2024-11-17 02:57:48.672531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.393 [2024-11-17 02:57:48.672587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.393 qpair failed and we were unable to recover it. 00:37:40.393 [2024-11-17 02:57:48.672862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.393 [2024-11-17 02:57:48.672923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.393 qpair failed and we were unable to recover it. 00:37:40.393 [2024-11-17 02:57:48.673116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.393 [2024-11-17 02:57:48.673175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.393 qpair failed and we were unable to recover it. 
00:37:40.393 [2024-11-17 02:57:48.673319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.393 [2024-11-17 02:57:48.673360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.393 qpair failed and we were unable to recover it. 00:37:40.393 [2024-11-17 02:57:48.673489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.393 [2024-11-17 02:57:48.673527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.393 qpair failed and we were unable to recover it. 00:37:40.393 [2024-11-17 02:57:48.673680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.393 [2024-11-17 02:57:48.673718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.393 qpair failed and we were unable to recover it. 00:37:40.393 [2024-11-17 02:57:48.673859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.393 [2024-11-17 02:57:48.673899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.393 qpair failed and we were unable to recover it. 00:37:40.393 [2024-11-17 02:57:48.674026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.393 [2024-11-17 02:57:48.674066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.393 qpair failed and we were unable to recover it. 
00:37:40.393 [2024-11-17 02:57:48.674256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.393 [2024-11-17 02:57:48.674305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.393 qpair failed and we were unable to recover it. 00:37:40.393 [2024-11-17 02:57:48.674473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.393 [2024-11-17 02:57:48.674528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.393 qpair failed and we were unable to recover it. 00:37:40.393 [2024-11-17 02:57:48.674683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.393 [2024-11-17 02:57:48.674737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.393 qpair failed and we were unable to recover it. 00:37:40.393 [2024-11-17 02:57:48.674894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.393 [2024-11-17 02:57:48.674946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.393 qpair failed and we were unable to recover it. 00:37:40.393 [2024-11-17 02:57:48.675050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.393 [2024-11-17 02:57:48.675084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.393 qpair failed and we were unable to recover it. 
00:37:40.393 [2024-11-17 02:57:48.675212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.393 [2024-11-17 02:57:48.675248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.393 qpair failed and we were unable to recover it. 00:37:40.393 [2024-11-17 02:57:48.675417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.393 [2024-11-17 02:57:48.675469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.393 qpair failed and we were unable to recover it. 00:37:40.393 [2024-11-17 02:57:48.675650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.393 [2024-11-17 02:57:48.675702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.393 qpair failed and we were unable to recover it. 00:37:40.393 [2024-11-17 02:57:48.675839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.393 [2024-11-17 02:57:48.675874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.393 qpair failed and we were unable to recover it. 00:37:40.393 [2024-11-17 02:57:48.675981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.393 [2024-11-17 02:57:48.676018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.393 qpair failed and we were unable to recover it. 
00:37:40.393 [2024-11-17 02:57:48.676214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.393 [2024-11-17 02:57:48.676263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.393 qpair failed and we were unable to recover it. 00:37:40.393 [2024-11-17 02:57:48.676472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.393 [2024-11-17 02:57:48.676527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.393 qpair failed and we were unable to recover it. 00:37:40.393 [2024-11-17 02:57:48.676852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.393 [2024-11-17 02:57:48.676914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.393 qpair failed and we were unable to recover it. 00:37:40.393 [2024-11-17 02:57:48.677065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.393 [2024-11-17 02:57:48.677115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.393 qpair failed and we were unable to recover it. 00:37:40.393 [2024-11-17 02:57:48.677257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.393 [2024-11-17 02:57:48.677292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.393 qpair failed and we were unable to recover it. 
00:37:40.393 [2024-11-17 02:57:48.677453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.393 [2024-11-17 02:57:48.677491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.393 qpair failed and we were unable to recover it.
00:37:40.393 [2024-11-17 02:57:48.677685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.393 [2024-11-17 02:57:48.677741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.393 qpair failed and we were unable to recover it.
00:37:40.393 [2024-11-17 02:57:48.677913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.393 [2024-11-17 02:57:48.677951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.393 qpair failed and we were unable to recover it.
00:37:40.393 [2024-11-17 02:57:48.678141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.393 [2024-11-17 02:57:48.678191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.393 qpair failed and we were unable to recover it.
00:37:40.394 [2024-11-17 02:57:48.678342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.394 [2024-11-17 02:57:48.678390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.394 qpair failed and we were unable to recover it.
00:37:40.394 [2024-11-17 02:57:48.678549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.394 [2024-11-17 02:57:48.678603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.394 qpair failed and we were unable to recover it.
00:37:40.394 [2024-11-17 02:57:48.678753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.394 [2024-11-17 02:57:48.678806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.394 qpair failed and we were unable to recover it.
00:37:40.394 [2024-11-17 02:57:48.678987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.394 [2024-11-17 02:57:48.679037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.394 qpair failed and we were unable to recover it.
00:37:40.394 [2024-11-17 02:57:48.679304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.394 [2024-11-17 02:57:48.679353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.394 qpair failed and we were unable to recover it.
00:37:40.394 [2024-11-17 02:57:48.679530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.394 [2024-11-17 02:57:48.679568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.394 qpair failed and we were unable to recover it.
00:37:40.394 [2024-11-17 02:57:48.679716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.394 [2024-11-17 02:57:48.679753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.394 qpair failed and we were unable to recover it.
00:37:40.394 [2024-11-17 02:57:48.680002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.394 [2024-11-17 02:57:48.680060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.394 qpair failed and we were unable to recover it.
00:37:40.394 [2024-11-17 02:57:48.680215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.394 [2024-11-17 02:57:48.680251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.394 qpair failed and we were unable to recover it.
00:37:40.394 [2024-11-17 02:57:48.680386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.394 [2024-11-17 02:57:48.680440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.394 qpair failed and we were unable to recover it.
00:37:40.394 [2024-11-17 02:57:48.680593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.394 [2024-11-17 02:57:48.680649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.394 qpair failed and we were unable to recover it.
00:37:40.394 [2024-11-17 02:57:48.680891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.394 [2024-11-17 02:57:48.680947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.394 qpair failed and we were unable to recover it.
00:37:40.394 [2024-11-17 02:57:48.681083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.394 [2024-11-17 02:57:48.681129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.394 qpair failed and we were unable to recover it.
00:37:40.394 [2024-11-17 02:57:48.681292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.394 [2024-11-17 02:57:48.681344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.394 qpair failed and we were unable to recover it.
00:37:40.394 [2024-11-17 02:57:48.681492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.394 [2024-11-17 02:57:48.681528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.394 qpair failed and we were unable to recover it.
00:37:40.394 [2024-11-17 02:57:48.681666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.394 [2024-11-17 02:57:48.681700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.394 qpair failed and we were unable to recover it.
00:37:40.394 [2024-11-17 02:57:48.681833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.394 [2024-11-17 02:57:48.681868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.394 qpair failed and we were unable to recover it.
00:37:40.394 [2024-11-17 02:57:48.682010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.394 [2024-11-17 02:57:48.682060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.394 qpair failed and we were unable to recover it.
00:37:40.394 [2024-11-17 02:57:48.682202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.394 [2024-11-17 02:57:48.682242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.394 qpair failed and we were unable to recover it.
00:37:40.394 [2024-11-17 02:57:48.682404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.394 [2024-11-17 02:57:48.682452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.394 qpair failed and we were unable to recover it.
00:37:40.394 [2024-11-17 02:57:48.682617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.394 [2024-11-17 02:57:48.682672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.394 qpair failed and we were unable to recover it.
00:37:40.394 [2024-11-17 02:57:48.682889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.394 [2024-11-17 02:57:48.682950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.394 qpair failed and we were unable to recover it.
00:37:40.394 [2024-11-17 02:57:48.683086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.394 [2024-11-17 02:57:48.683128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.394 qpair failed and we were unable to recover it.
00:37:40.394 [2024-11-17 02:57:48.683242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.394 [2024-11-17 02:57:48.683279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.394 qpair failed and we were unable to recover it.
00:37:40.394 [2024-11-17 02:57:48.683464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.394 [2024-11-17 02:57:48.683519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.394 qpair failed and we were unable to recover it.
00:37:40.394 [2024-11-17 02:57:48.683684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.394 [2024-11-17 02:57:48.683745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.394 qpair failed and we were unable to recover it.
00:37:40.394 [2024-11-17 02:57:48.684010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.394 [2024-11-17 02:57:48.684068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.394 qpair failed and we were unable to recover it.
00:37:40.394 [2024-11-17 02:57:48.684263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.394 [2024-11-17 02:57:48.684319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.394 qpair failed and we were unable to recover it.
00:37:40.394 [2024-11-17 02:57:48.684477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.394 [2024-11-17 02:57:48.684531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.394 qpair failed and we were unable to recover it.
00:37:40.394 [2024-11-17 02:57:48.684742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.394 [2024-11-17 02:57:48.684799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.394 qpair failed and we were unable to recover it.
00:37:40.395 [2024-11-17 02:57:48.684933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.395 [2024-11-17 02:57:48.684968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.395 qpair failed and we were unable to recover it.
00:37:40.395 [2024-11-17 02:57:48.685116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.395 [2024-11-17 02:57:48.685157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.395 qpair failed and we were unable to recover it.
00:37:40.395 [2024-11-17 02:57:48.685338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.395 [2024-11-17 02:57:48.685374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.395 qpair failed and we were unable to recover it.
00:37:40.395 [2024-11-17 02:57:48.685509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.395 [2024-11-17 02:57:48.685558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.395 qpair failed and we were unable to recover it.
00:37:40.395 [2024-11-17 02:57:48.685689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.395 [2024-11-17 02:57:48.685723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.395 qpair failed and we were unable to recover it.
00:37:40.395 [2024-11-17 02:57:48.685848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.395 [2024-11-17 02:57:48.685897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.395 qpair failed and we were unable to recover it.
00:37:40.395 [2024-11-17 02:57:48.686058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.395 [2024-11-17 02:57:48.686119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.395 qpair failed and we were unable to recover it.
00:37:40.395 [2024-11-17 02:57:48.686276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.395 [2024-11-17 02:57:48.686313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.395 qpair failed and we were unable to recover it.
00:37:40.395 [2024-11-17 02:57:48.686471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.395 [2024-11-17 02:57:48.686508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.395 qpair failed and we were unable to recover it.
00:37:40.395 [2024-11-17 02:57:48.686655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.395 [2024-11-17 02:57:48.686692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.395 qpair failed and we were unable to recover it.
00:37:40.395 [2024-11-17 02:57:48.686807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.395 [2024-11-17 02:57:48.686845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.395 qpair failed and we were unable to recover it.
00:37:40.395 [2024-11-17 02:57:48.687004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.395 [2024-11-17 02:57:48.687041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.395 qpair failed and we were unable to recover it.
00:37:40.395 [2024-11-17 02:57:48.687241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.395 [2024-11-17 02:57:48.687295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.395 qpair failed and we were unable to recover it.
00:37:40.395 [2024-11-17 02:57:48.687503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.395 [2024-11-17 02:57:48.687571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.395 qpair failed and we were unable to recover it.
00:37:40.395 [2024-11-17 02:57:48.687835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.395 [2024-11-17 02:57:48.687877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.395 qpair failed and we were unable to recover it.
00:37:40.395 [2024-11-17 02:57:48.688040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.395 [2024-11-17 02:57:48.688076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.395 qpair failed and we were unable to recover it.
00:37:40.395 [2024-11-17 02:57:48.688227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.395 [2024-11-17 02:57:48.688264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.395 qpair failed and we were unable to recover it.
00:37:40.395 [2024-11-17 02:57:48.688417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.395 [2024-11-17 02:57:48.688470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.395 qpair failed and we were unable to recover it.
00:37:40.395 [2024-11-17 02:57:48.688723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.395 [2024-11-17 02:57:48.688762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.395 qpair failed and we were unable to recover it.
00:37:40.395 [2024-11-17 02:57:48.688901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.395 [2024-11-17 02:57:48.688940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.395 qpair failed and we were unable to recover it.
00:37:40.395 [2024-11-17 02:57:48.689094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.395 [2024-11-17 02:57:48.689166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.395 qpair failed and we were unable to recover it.
00:37:40.395 [2024-11-17 02:57:48.689322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.395 [2024-11-17 02:57:48.689381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.395 qpair failed and we were unable to recover it.
00:37:40.395 [2024-11-17 02:57:48.689573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.395 [2024-11-17 02:57:48.689628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.395 qpair failed and we were unable to recover it.
00:37:40.395 [2024-11-17 02:57:48.689753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.395 [2024-11-17 02:57:48.689794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.395 qpair failed and we were unable to recover it.
00:37:40.395 [2024-11-17 02:57:48.689960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.395 [2024-11-17 02:57:48.690001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.395 qpair failed and we were unable to recover it.
00:37:40.395 [2024-11-17 02:57:48.690145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.395 [2024-11-17 02:57:48.690183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.395 qpair failed and we were unable to recover it.
00:37:40.395 [2024-11-17 02:57:48.690345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.395 [2024-11-17 02:57:48.690397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.395 qpair failed and we were unable to recover it.
00:37:40.395 [2024-11-17 02:57:48.690546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.395 [2024-11-17 02:57:48.690597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.395 qpair failed and we were unable to recover it.
00:37:40.395 [2024-11-17 02:57:48.690807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.395 [2024-11-17 02:57:48.690847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.395 qpair failed and we were unable to recover it.
00:37:40.395 [2024-11-17 02:57:48.690993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.395 [2024-11-17 02:57:48.691045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.395 qpair failed and we were unable to recover it.
00:37:40.395 [2024-11-17 02:57:48.691218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.395 [2024-11-17 02:57:48.691268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.395 qpair failed and we were unable to recover it.
00:37:40.395 [2024-11-17 02:57:48.691448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.395 [2024-11-17 02:57:48.691516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.395 qpair failed and we were unable to recover it.
00:37:40.395 [2024-11-17 02:57:48.691686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.395 [2024-11-17 02:57:48.691742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.395 qpair failed and we were unable to recover it.
00:37:40.395 [2024-11-17 02:57:48.691880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.395 [2024-11-17 02:57:48.691915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.395 qpair failed and we were unable to recover it.
00:37:40.395 [2024-11-17 02:57:48.692057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.395 [2024-11-17 02:57:48.692094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.395 qpair failed and we were unable to recover it.
00:37:40.395 [2024-11-17 02:57:48.692273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.396 [2024-11-17 02:57:48.692329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.396 qpair failed and we were unable to recover it.
00:37:40.396 [2024-11-17 02:57:48.692515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.396 [2024-11-17 02:57:48.692555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.396 qpair failed and we were unable to recover it.
00:37:40.396 [2024-11-17 02:57:48.692758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.396 [2024-11-17 02:57:48.692824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.396 qpair failed and we were unable to recover it.
00:37:40.396 [2024-11-17 02:57:48.693009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.396 [2024-11-17 02:57:48.693043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.396 qpair failed and we were unable to recover it.
00:37:40.396 [2024-11-17 02:57:48.693187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.396 [2024-11-17 02:57:48.693225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.396 qpair failed and we were unable to recover it.
00:37:40.396 [2024-11-17 02:57:48.693383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.396 [2024-11-17 02:57:48.693434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.396 qpair failed and we were unable to recover it.
00:37:40.396 [2024-11-17 02:57:48.693622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.396 [2024-11-17 02:57:48.693661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.396 qpair failed and we were unable to recover it.
00:37:40.396 [2024-11-17 02:57:48.693916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.396 [2024-11-17 02:57:48.693976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.396 qpair failed and we were unable to recover it.
00:37:40.396 [2024-11-17 02:57:48.694091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.396 [2024-11-17 02:57:48.694158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.396 qpair failed and we were unable to recover it.
00:37:40.396 [2024-11-17 02:57:48.694317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.396 [2024-11-17 02:57:48.694352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.396 qpair failed and we were unable to recover it.
00:37:40.396 [2024-11-17 02:57:48.694602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.396 [2024-11-17 02:57:48.694655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.396 qpair failed and we were unable to recover it.
00:37:40.396 [2024-11-17 02:57:48.694894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.396 [2024-11-17 02:57:48.694932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.396 qpair failed and we were unable to recover it.
00:37:40.396 [2024-11-17 02:57:48.695086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.396 [2024-11-17 02:57:48.695129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.396 qpair failed and we were unable to recover it.
00:37:40.396 [2024-11-17 02:57:48.695251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.396 [2024-11-17 02:57:48.695285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.396 qpair failed and we were unable to recover it.
00:37:40.396 [2024-11-17 02:57:48.695459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.396 [2024-11-17 02:57:48.695496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.396 qpair failed and we were unable to recover it.
00:37:40.396 [2024-11-17 02:57:48.695728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.396 [2024-11-17 02:57:48.695794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.396 qpair failed and we were unable to recover it.
00:37:40.396 [2024-11-17 02:57:48.695940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.396 [2024-11-17 02:57:48.695977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.396 qpair failed and we were unable to recover it.
00:37:40.396 [2024-11-17 02:57:48.696168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.396 [2024-11-17 02:57:48.696202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.396 qpair failed and we were unable to recover it.
00:37:40.396 [2024-11-17 02:57:48.696335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.396 [2024-11-17 02:57:48.696371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.396 qpair failed and we were unable to recover it.
00:37:40.396 [2024-11-17 02:57:48.696483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.396 [2024-11-17 02:57:48.696535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.396 qpair failed and we were unable to recover it.
00:37:40.396 [2024-11-17 02:57:48.696686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.396 [2024-11-17 02:57:48.696723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.396 qpair failed and we were unable to recover it.
00:37:40.396 [2024-11-17 02:57:48.696877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.396 [2024-11-17 02:57:48.696930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.396 qpair failed and we were unable to recover it.
00:37:40.396 [2024-11-17 02:57:48.697068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.396 [2024-11-17 02:57:48.697119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.396 qpair failed and we were unable to recover it.
00:37:40.396 [2024-11-17 02:57:48.697274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.396 [2024-11-17 02:57:48.697309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.396 qpair failed and we were unable to recover it.
00:37:40.396 [2024-11-17 02:57:48.697446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.396 [2024-11-17 02:57:48.697481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.396 qpair failed and we were unable to recover it.
00:37:40.396 [2024-11-17 02:57:48.697656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.396 [2024-11-17 02:57:48.697693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.396 qpair failed and we were unable to recover it.
00:37:40.396 [2024-11-17 02:57:48.697852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.396 [2024-11-17 02:57:48.697907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.396 qpair failed and we were unable to recover it.
00:37:40.396 [2024-11-17 02:57:48.698039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.396 [2024-11-17 02:57:48.698077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.396 qpair failed and we were unable to recover it.
00:37:40.396 [2024-11-17 02:57:48.698225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.396 [2024-11-17 02:57:48.698261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.396 qpair failed and we were unable to recover it.
00:37:40.396 [2024-11-17 02:57:48.698369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.396 [2024-11-17 02:57:48.698423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.396 qpair failed and we were unable to recover it.
00:37:40.396 [2024-11-17 02:57:48.698560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.396 [2024-11-17 02:57:48.698611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.396 qpair failed and we were unable to recover it.
00:37:40.396 [2024-11-17 02:57:48.698746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.397 [2024-11-17 02:57:48.698819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.397 qpair failed and we were unable to recover it.
00:37:40.397 [2024-11-17 02:57:48.698980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.397 [2024-11-17 02:57:48.699015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.397 qpair failed and we were unable to recover it.
00:37:40.397 [2024-11-17 02:57:48.699118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.397 [2024-11-17 02:57:48.699152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.397 qpair failed and we were unable to recover it.
00:37:40.397 [2024-11-17 02:57:48.699282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.397 [2024-11-17 02:57:48.699315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.397 qpair failed and we were unable to recover it.
00:37:40.397 [2024-11-17 02:57:48.699458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.397 [2024-11-17 02:57:48.699508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.397 qpair failed and we were unable to recover it.
00:37:40.397 [2024-11-17 02:57:48.699629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.397 [2024-11-17 02:57:48.699679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.397 qpair failed and we were unable to recover it.
00:37:40.397 [2024-11-17 02:57:48.699818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.397 [2024-11-17 02:57:48.699866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.397 qpair failed and we were unable to recover it.
00:37:40.397 [2024-11-17 02:57:48.700043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.397 [2024-11-17 02:57:48.700080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.397 qpair failed and we were unable to recover it.
00:37:40.397 [2024-11-17 02:57:48.700214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.397 [2024-11-17 02:57:48.700248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.397 qpair failed and we were unable to recover it.
00:37:40.397 [2024-11-17 02:57:48.700359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.397 [2024-11-17 02:57:48.700392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.397 qpair failed and we were unable to recover it.
00:37:40.397 [2024-11-17 02:57:48.700589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.397 [2024-11-17 02:57:48.700657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.397 qpair failed and we were unable to recover it.
00:37:40.397 [2024-11-17 02:57:48.700896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.397 [2024-11-17 02:57:48.700938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.397 qpair failed and we were unable to recover it.
00:37:40.397 [2024-11-17 02:57:48.701073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.397 [2024-11-17 02:57:48.701152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.397 qpair failed and we were unable to recover it.
00:37:40.397 [2024-11-17 02:57:48.701296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.397 [2024-11-17 02:57:48.701333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.397 qpair failed and we were unable to recover it.
00:37:40.397 [2024-11-17 02:57:48.701581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.397 [2024-11-17 02:57:48.701639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.397 qpair failed and we were unable to recover it.
00:37:40.397 [2024-11-17 02:57:48.701814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.397 [2024-11-17 02:57:48.701853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.397 qpair failed and we were unable to recover it. 00:37:40.397 [2024-11-17 02:57:48.702079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.397 [2024-11-17 02:57:48.702128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.397 qpair failed and we were unable to recover it. 00:37:40.397 [2024-11-17 02:57:48.702255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.397 [2024-11-17 02:57:48.702293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.397 qpair failed and we were unable to recover it. 00:37:40.397 [2024-11-17 02:57:48.702458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.397 [2024-11-17 02:57:48.702525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.397 qpair failed and we were unable to recover it. 00:37:40.397 [2024-11-17 02:57:48.702798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.397 [2024-11-17 02:57:48.702858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.397 qpair failed and we were unable to recover it. 
00:37:40.397 [2024-11-17 02:57:48.703021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.397 [2024-11-17 02:57:48.703057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.397 qpair failed and we were unable to recover it. 00:37:40.397 [2024-11-17 02:57:48.703198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.397 [2024-11-17 02:57:48.703233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.397 qpair failed and we were unable to recover it. 00:37:40.397 [2024-11-17 02:57:48.703353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.397 [2024-11-17 02:57:48.703390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.397 qpair failed and we were unable to recover it. 00:37:40.397 [2024-11-17 02:57:48.703543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.397 [2024-11-17 02:57:48.703593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.397 qpair failed and we were unable to recover it. 00:37:40.397 [2024-11-17 02:57:48.703795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.397 [2024-11-17 02:57:48.703859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.397 qpair failed and we were unable to recover it. 
00:37:40.397 [2024-11-17 02:57:48.703974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.397 [2024-11-17 02:57:48.704010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.397 qpair failed and we were unable to recover it. 00:37:40.397 [2024-11-17 02:57:48.704147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.397 [2024-11-17 02:57:48.704185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.397 qpair failed and we were unable to recover it. 00:37:40.397 [2024-11-17 02:57:48.704324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.397 [2024-11-17 02:57:48.704359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.397 qpair failed and we were unable to recover it. 00:37:40.397 [2024-11-17 02:57:48.704491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.398 [2024-11-17 02:57:48.704525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.398 qpair failed and we were unable to recover it. 00:37:40.398 [2024-11-17 02:57:48.704755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.398 [2024-11-17 02:57:48.704794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.398 qpair failed and we were unable to recover it. 
00:37:40.398 [2024-11-17 02:57:48.704926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.398 [2024-11-17 02:57:48.704966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.398 qpair failed and we were unable to recover it. 00:37:40.398 [2024-11-17 02:57:48.705134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.398 [2024-11-17 02:57:48.705171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.398 qpair failed and we were unable to recover it. 00:37:40.398 [2024-11-17 02:57:48.705326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.398 [2024-11-17 02:57:48.705365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.398 qpair failed and we were unable to recover it. 00:37:40.398 [2024-11-17 02:57:48.705544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.398 [2024-11-17 02:57:48.705583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.398 qpair failed and we were unable to recover it. 00:37:40.398 [2024-11-17 02:57:48.705700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.398 [2024-11-17 02:57:48.705741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.398 qpair failed and we were unable to recover it. 
00:37:40.398 [2024-11-17 02:57:48.705957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.398 [2024-11-17 02:57:48.706013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.398 qpair failed and we were unable to recover it. 00:37:40.398 [2024-11-17 02:57:48.706131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.398 [2024-11-17 02:57:48.706167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.398 qpair failed and we were unable to recover it. 00:37:40.398 [2024-11-17 02:57:48.706351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.398 [2024-11-17 02:57:48.706404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.398 qpair failed and we were unable to recover it. 00:37:40.398 [2024-11-17 02:57:48.706561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.398 [2024-11-17 02:57:48.706614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.398 qpair failed and we were unable to recover it. 00:37:40.398 [2024-11-17 02:57:48.706826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.398 [2024-11-17 02:57:48.706880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.398 qpair failed and we were unable to recover it. 
00:37:40.398 [2024-11-17 02:57:48.707029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.398 [2024-11-17 02:57:48.707079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.398 qpair failed and we were unable to recover it. 00:37:40.398 [2024-11-17 02:57:48.707234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.398 [2024-11-17 02:57:48.707271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.398 qpair failed and we were unable to recover it. 00:37:40.398 [2024-11-17 02:57:48.707409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.398 [2024-11-17 02:57:48.707444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.398 qpair failed and we were unable to recover it. 00:37:40.398 [2024-11-17 02:57:48.707575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.398 [2024-11-17 02:57:48.707614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.398 qpair failed and we were unable to recover it. 00:37:40.398 [2024-11-17 02:57:48.707734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.398 [2024-11-17 02:57:48.707773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.398 qpair failed and we were unable to recover it. 
00:37:40.398 [2024-11-17 02:57:48.707947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.398 [2024-11-17 02:57:48.707985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.398 qpair failed and we were unable to recover it. 00:37:40.398 [2024-11-17 02:57:48.708143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.398 [2024-11-17 02:57:48.708179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.398 qpair failed and we were unable to recover it. 00:37:40.398 [2024-11-17 02:57:48.708330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.398 [2024-11-17 02:57:48.708396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.398 qpair failed and we were unable to recover it. 00:37:40.398 [2024-11-17 02:57:48.708608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.398 [2024-11-17 02:57:48.708663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.398 qpair failed and we were unable to recover it. 00:37:40.398 [2024-11-17 02:57:48.708847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.398 [2024-11-17 02:57:48.708887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.398 qpair failed and we were unable to recover it. 
00:37:40.398 [2024-11-17 02:57:48.709010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.398 [2024-11-17 02:57:48.709049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.398 qpair failed and we were unable to recover it. 00:37:40.398 [2024-11-17 02:57:48.709210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.398 [2024-11-17 02:57:48.709246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.398 qpair failed and we were unable to recover it. 00:37:40.398 [2024-11-17 02:57:48.709413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.398 [2024-11-17 02:57:48.709452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.398 qpair failed and we were unable to recover it. 00:37:40.398 [2024-11-17 02:57:48.709572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.398 [2024-11-17 02:57:48.709612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.398 qpair failed and we were unable to recover it. 00:37:40.398 [2024-11-17 02:57:48.709763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.398 [2024-11-17 02:57:48.709801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.398 qpair failed and we were unable to recover it. 
00:37:40.398 [2024-11-17 02:57:48.709987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.398 [2024-11-17 02:57:48.710037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.398 qpair failed and we were unable to recover it. 00:37:40.398 [2024-11-17 02:57:48.710185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.398 [2024-11-17 02:57:48.710221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.398 qpair failed and we were unable to recover it. 00:37:40.398 [2024-11-17 02:57:48.710358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.398 [2024-11-17 02:57:48.710419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.398 qpair failed and we were unable to recover it. 00:37:40.398 [2024-11-17 02:57:48.710564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.398 [2024-11-17 02:57:48.710603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.398 qpair failed and we were unable to recover it. 00:37:40.398 [2024-11-17 02:57:48.710738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.398 [2024-11-17 02:57:48.710792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.399 qpair failed and we were unable to recover it. 
00:37:40.399 [2024-11-17 02:57:48.710959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.399 [2024-11-17 02:57:48.711014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.399 qpair failed and we were unable to recover it. 00:37:40.399 [2024-11-17 02:57:48.711232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.399 [2024-11-17 02:57:48.711282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.399 qpair failed and we were unable to recover it. 00:37:40.399 [2024-11-17 02:57:48.711456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.399 [2024-11-17 02:57:48.711525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.399 qpair failed and we were unable to recover it. 00:37:40.399 [2024-11-17 02:57:48.711722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.399 [2024-11-17 02:57:48.711783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.399 qpair failed and we were unable to recover it. 00:37:40.399 [2024-11-17 02:57:48.711926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.399 [2024-11-17 02:57:48.711963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.399 qpair failed and we were unable to recover it. 
00:37:40.399 [2024-11-17 02:57:48.712155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.399 [2024-11-17 02:57:48.712210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.399 qpair failed and we were unable to recover it. 00:37:40.399 [2024-11-17 02:57:48.712362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.399 [2024-11-17 02:57:48.712415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.399 qpair failed and we were unable to recover it. 00:37:40.399 [2024-11-17 02:57:48.712547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.399 [2024-11-17 02:57:48.712604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.399 qpair failed and we were unable to recover it. 00:37:40.399 [2024-11-17 02:57:48.712744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.399 [2024-11-17 02:57:48.712797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.399 qpair failed and we were unable to recover it. 00:37:40.399 [2024-11-17 02:57:48.712929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.399 [2024-11-17 02:57:48.712966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.399 qpair failed and we were unable to recover it. 
00:37:40.399 [2024-11-17 02:57:48.713113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.399 [2024-11-17 02:57:48.713159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.399 qpair failed and we were unable to recover it. 00:37:40.399 [2024-11-17 02:57:48.713319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.399 [2024-11-17 02:57:48.713357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.399 qpair failed and we were unable to recover it. 00:37:40.399 [2024-11-17 02:57:48.713557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.399 [2024-11-17 02:57:48.713596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.399 qpair failed and we were unable to recover it. 00:37:40.399 [2024-11-17 02:57:48.713777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.399 [2024-11-17 02:57:48.713817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.399 qpair failed and we were unable to recover it. 00:37:40.399 [2024-11-17 02:57:48.713945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.399 [2024-11-17 02:57:48.713981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.399 qpair failed and we were unable to recover it. 
00:37:40.399 [2024-11-17 02:57:48.714125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.399 [2024-11-17 02:57:48.714172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.399 qpair failed and we were unable to recover it. 00:37:40.399 [2024-11-17 02:57:48.714344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.399 [2024-11-17 02:57:48.714399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.399 qpair failed and we were unable to recover it. 00:37:40.399 [2024-11-17 02:57:48.714598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.399 [2024-11-17 02:57:48.714659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.399 qpair failed and we were unable to recover it. 00:37:40.399 [2024-11-17 02:57:48.714915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.399 [2024-11-17 02:57:48.714973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.399 qpair failed and we were unable to recover it. 00:37:40.399 [2024-11-17 02:57:48.715111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.399 [2024-11-17 02:57:48.715159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.399 qpair failed and we were unable to recover it. 
00:37:40.399 [2024-11-17 02:57:48.715316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.399 [2024-11-17 02:57:48.715350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.399 qpair failed and we were unable to recover it. 00:37:40.399 [2024-11-17 02:57:48.715580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.399 [2024-11-17 02:57:48.715615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.399 qpair failed and we were unable to recover it. 00:37:40.399 [2024-11-17 02:57:48.715851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.399 [2024-11-17 02:57:48.715922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.399 qpair failed and we were unable to recover it. 00:37:40.399 [2024-11-17 02:57:48.716057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.399 [2024-11-17 02:57:48.716090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.399 qpair failed and we were unable to recover it. 00:37:40.399 [2024-11-17 02:57:48.716244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.399 [2024-11-17 02:57:48.716296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.399 qpair failed and we were unable to recover it. 
00:37:40.399 [2024-11-17 02:57:48.716439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.399 [2024-11-17 02:57:48.716476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.399 qpair failed and we were unable to recover it. 00:37:40.399 [2024-11-17 02:57:48.716628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.399 [2024-11-17 02:57:48.716667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.399 qpair failed and we were unable to recover it. 00:37:40.399 [2024-11-17 02:57:48.716808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.399 [2024-11-17 02:57:48.716846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.399 qpair failed and we were unable to recover it. 00:37:40.399 [2024-11-17 02:57:48.717021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.399 [2024-11-17 02:57:48.717071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.399 qpair failed and we were unable to recover it. 00:37:40.399 [2024-11-17 02:57:48.717244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.399 [2024-11-17 02:57:48.717293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.399 qpair failed and we were unable to recover it. 
00:37:40.399 [2024-11-17 02:57:48.717478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.399 [2024-11-17 02:57:48.717533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.399 qpair failed and we were unable to recover it.
00:37:40.399 [2024-11-17 02:57:48.717793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.399 [2024-11-17 02:57:48.717837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.399 qpair failed and we were unable to recover it.
00:37:40.399 [2024-11-17 02:57:48.718003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.399 [2024-11-17 02:57:48.718039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.399 qpair failed and we were unable to recover it.
00:37:40.399 [2024-11-17 02:57:48.718194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.399 [2024-11-17 02:57:48.718230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.399 qpair failed and we were unable to recover it.
00:37:40.399 [2024-11-17 02:57:48.718346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.399 [2024-11-17 02:57:48.718382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.399 qpair failed and we were unable to recover it.
00:37:40.399 [2024-11-17 02:57:48.718593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.399 [2024-11-17 02:57:48.718653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.399 qpair failed and we were unable to recover it.
00:37:40.399 [2024-11-17 02:57:48.718765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.400 [2024-11-17 02:57:48.718802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.400 qpair failed and we were unable to recover it.
00:37:40.400 [2024-11-17 02:57:48.718919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.400 [2024-11-17 02:57:48.718962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.400 qpair failed and we were unable to recover it.
00:37:40.400 [2024-11-17 02:57:48.719152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.400 [2024-11-17 02:57:48.719186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.400 qpair failed and we were unable to recover it.
00:37:40.400 [2024-11-17 02:57:48.719288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.400 [2024-11-17 02:57:48.719344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.400 qpair failed and we were unable to recover it.
00:37:40.400 [2024-11-17 02:57:48.719463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.400 [2024-11-17 02:57:48.719501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.400 qpair failed and we were unable to recover it.
00:37:40.400 [2024-11-17 02:57:48.719698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.400 [2024-11-17 02:57:48.719767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.400 qpair failed and we were unable to recover it.
00:37:40.400 [2024-11-17 02:57:48.719924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.400 [2024-11-17 02:57:48.719962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.400 qpair failed and we were unable to recover it.
00:37:40.400 [2024-11-17 02:57:48.720071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.400 [2024-11-17 02:57:48.720114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.400 qpair failed and we were unable to recover it.
00:37:40.400 [2024-11-17 02:57:48.720270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.400 [2024-11-17 02:57:48.720324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.400 qpair failed and we were unable to recover it.
00:37:40.400 [2024-11-17 02:57:48.720476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.400 [2024-11-17 02:57:48.720529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.400 qpair failed and we were unable to recover it.
00:37:40.400 [2024-11-17 02:57:48.720657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.400 [2024-11-17 02:57:48.720708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.400 qpair failed and we were unable to recover it.
00:37:40.400 [2024-11-17 02:57:48.720870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.400 [2024-11-17 02:57:48.720906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.400 qpair failed and we were unable to recover it.
00:37:40.400 [2024-11-17 02:57:48.721036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.400 [2024-11-17 02:57:48.721071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.400 qpair failed and we were unable to recover it.
00:37:40.400 [2024-11-17 02:57:48.721278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.400 [2024-11-17 02:57:48.721315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.400 qpair failed and we were unable to recover it.
00:37:40.400 [2024-11-17 02:57:48.721491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.400 [2024-11-17 02:57:48.721571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.400 qpair failed and we were unable to recover it.
00:37:40.400 [2024-11-17 02:57:48.721796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.400 [2024-11-17 02:57:48.721834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.400 qpair failed and we were unable to recover it.
00:37:40.400 [2024-11-17 02:57:48.722019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.400 [2024-11-17 02:57:48.722055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.400 qpair failed and we were unable to recover it.
00:37:40.400 [2024-11-17 02:57:48.722251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.400 [2024-11-17 02:57:48.722291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.400 qpair failed and we were unable to recover it.
00:37:40.400 [2024-11-17 02:57:48.722457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.400 [2024-11-17 02:57:48.722498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.400 qpair failed and we were unable to recover it.
00:37:40.400 [2024-11-17 02:57:48.722767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.400 [2024-11-17 02:57:48.722831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.400 qpair failed and we were unable to recover it.
00:37:40.400 [2024-11-17 02:57:48.723015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.400 [2024-11-17 02:57:48.723049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.400 qpair failed and we were unable to recover it.
00:37:40.400 [2024-11-17 02:57:48.723201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.400 [2024-11-17 02:57:48.723236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.400 qpair failed and we were unable to recover it.
00:37:40.400 [2024-11-17 02:57:48.723390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.400 [2024-11-17 02:57:48.723430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.400 qpair failed and we were unable to recover it.
00:37:40.400 [2024-11-17 02:57:48.723620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.400 [2024-11-17 02:57:48.723687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.400 qpair failed and we were unable to recover it.
00:37:40.400 [2024-11-17 02:57:48.723806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.400 [2024-11-17 02:57:48.723846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.400 qpair failed and we were unable to recover it.
00:37:40.400 [2024-11-17 02:57:48.724009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.400 [2024-11-17 02:57:48.724043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.400 qpair failed and we were unable to recover it.
00:37:40.400 [2024-11-17 02:57:48.724172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.400 [2024-11-17 02:57:48.724223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.400 qpair failed and we were unable to recover it.
00:37:40.400 [2024-11-17 02:57:48.724390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.400 [2024-11-17 02:57:48.724440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.400 qpair failed and we were unable to recover it.
00:37:40.400 [2024-11-17 02:57:48.724559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.400 [2024-11-17 02:57:48.724614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.400 qpair failed and we were unable to recover it.
00:37:40.400 [2024-11-17 02:57:48.724884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.400 [2024-11-17 02:57:48.724944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.400 qpair failed and we were unable to recover it.
00:37:40.400 [2024-11-17 02:57:48.725074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.400 [2024-11-17 02:57:48.725116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.400 qpair failed and we were unable to recover it.
00:37:40.400 [2024-11-17 02:57:48.725230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.400 [2024-11-17 02:57:48.725262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.400 qpair failed and we were unable to recover it.
00:37:40.400 [2024-11-17 02:57:48.725415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.400 [2024-11-17 02:57:48.725452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.400 qpair failed and we were unable to recover it.
00:37:40.400 [2024-11-17 02:57:48.725566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.401 [2024-11-17 02:57:48.725604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.401 qpair failed and we were unable to recover it.
00:37:40.401 [2024-11-17 02:57:48.725818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.401 [2024-11-17 02:57:48.725875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.401 qpair failed and we were unable to recover it.
00:37:40.401 [2024-11-17 02:57:48.726048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.401 [2024-11-17 02:57:48.726089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.401 qpair failed and we were unable to recover it.
00:37:40.401 [2024-11-17 02:57:48.726251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.401 [2024-11-17 02:57:48.726301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.401 qpair failed and we were unable to recover it.
00:37:40.401 [2024-11-17 02:57:48.726523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.401 [2024-11-17 02:57:48.726601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.401 qpair failed and we were unable to recover it.
00:37:40.401 [2024-11-17 02:57:48.726730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.401 [2024-11-17 02:57:48.726772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.401 qpair failed and we were unable to recover it.
00:37:40.401 [2024-11-17 02:57:48.726940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.401 [2024-11-17 02:57:48.726976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.401 qpair failed and we were unable to recover it.
00:37:40.401 [2024-11-17 02:57:48.727121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.401 [2024-11-17 02:57:48.727161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.401 qpair failed and we were unable to recover it.
00:37:40.401 [2024-11-17 02:57:48.727287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.401 [2024-11-17 02:57:48.727341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.401 qpair failed and we were unable to recover it.
00:37:40.401 [2024-11-17 02:57:48.727501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.401 [2024-11-17 02:57:48.727542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.401 qpair failed and we were unable to recover it.
00:37:40.401 [2024-11-17 02:57:48.727681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.401 [2024-11-17 02:57:48.727735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.401 qpair failed and we were unable to recover it.
00:37:40.401 [2024-11-17 02:57:48.727847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.401 [2024-11-17 02:57:48.727896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.401 qpair failed and we were unable to recover it.
00:37:40.401 [2024-11-17 02:57:48.728116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.401 [2024-11-17 02:57:48.728157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.401 qpair failed and we were unable to recover it.
00:37:40.401 [2024-11-17 02:57:48.728319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.401 [2024-11-17 02:57:48.728352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.401 qpair failed and we were unable to recover it.
00:37:40.401 [2024-11-17 02:57:48.728471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.401 [2024-11-17 02:57:48.728524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.401 qpair failed and we were unable to recover it.
00:37:40.401 [2024-11-17 02:57:48.728710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.401 [2024-11-17 02:57:48.728764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.401 qpair failed and we were unable to recover it.
00:37:40.401 [2024-11-17 02:57:48.728992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.401 [2024-11-17 02:57:48.729029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.401 qpair failed and we were unable to recover it.
00:37:40.401 [2024-11-17 02:57:48.729217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.401 [2024-11-17 02:57:48.729253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.401 qpair failed and we were unable to recover it.
00:37:40.401 [2024-11-17 02:57:48.729396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.401 [2024-11-17 02:57:48.729451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.401 qpair failed and we were unable to recover it.
00:37:40.401 [2024-11-17 02:57:48.729592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.401 [2024-11-17 02:57:48.729650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.401 qpair failed and we were unable to recover it.
00:37:40.401 [2024-11-17 02:57:48.729861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.401 [2024-11-17 02:57:48.729921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.401 qpair failed and we were unable to recover it.
00:37:40.401 [2024-11-17 02:57:48.730102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.401 [2024-11-17 02:57:48.730168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.401 qpair failed and we were unable to recover it.
00:37:40.401 [2024-11-17 02:57:48.730343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.401 [2024-11-17 02:57:48.730396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.401 qpair failed and we were unable to recover it.
00:37:40.401 [2024-11-17 02:57:48.730526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.401 [2024-11-17 02:57:48.730582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.401 qpair failed and we were unable to recover it.
00:37:40.401 [2024-11-17 02:57:48.730707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.401 [2024-11-17 02:57:48.730747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.401 qpair failed and we were unable to recover it.
00:37:40.401 [2024-11-17 02:57:48.730923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.401 [2024-11-17 02:57:48.730959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.401 qpair failed and we were unable to recover it.
00:37:40.401 [2024-11-17 02:57:48.731113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.401 [2024-11-17 02:57:48.731171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.401 qpair failed and we were unable to recover it.
00:37:40.401 [2024-11-17 02:57:48.731352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.401 [2024-11-17 02:57:48.731390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.401 qpair failed and we were unable to recover it.
00:37:40.401 [2024-11-17 02:57:48.731524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.401 [2024-11-17 02:57:48.731561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.401 qpair failed and we were unable to recover it.
00:37:40.401 [2024-11-17 02:57:48.731832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.401 [2024-11-17 02:57:48.731890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.401 qpair failed and we were unable to recover it.
00:37:40.401 [2024-11-17 02:57:48.732054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.401 [2024-11-17 02:57:48.732090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.401 qpair failed and we were unable to recover it.
00:37:40.401 [2024-11-17 02:57:48.732251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.401 [2024-11-17 02:57:48.732299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.401 qpair failed and we were unable to recover it.
00:37:40.401 [2024-11-17 02:57:48.732459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.402 [2024-11-17 02:57:48.732515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.402 qpair failed and we were unable to recover it.
00:37:40.402 [2024-11-17 02:57:48.732631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.402 [2024-11-17 02:57:48.732667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.402 qpair failed and we were unable to recover it.
00:37:40.402 [2024-11-17 02:57:48.732851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.402 [2024-11-17 02:57:48.732906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.402 qpair failed and we were unable to recover it.
00:37:40.402 [2024-11-17 02:57:48.733072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.402 [2024-11-17 02:57:48.733137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.402 qpair failed and we were unable to recover it.
00:37:40.402 [2024-11-17 02:57:48.733304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.402 [2024-11-17 02:57:48.733352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.402 qpair failed and we were unable to recover it.
00:37:40.402 [2024-11-17 02:57:48.733524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.402 [2024-11-17 02:57:48.733562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.402 qpair failed and we were unable to recover it.
00:37:40.402 [2024-11-17 02:57:48.733728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.402 [2024-11-17 02:57:48.733764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.402 qpair failed and we were unable to recover it.
00:37:40.402 [2024-11-17 02:57:48.733909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.402 [2024-11-17 02:57:48.733946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.402 qpair failed and we were unable to recover it.
00:37:40.402 [2024-11-17 02:57:48.734056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.402 [2024-11-17 02:57:48.734092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.402 qpair failed and we were unable to recover it.
00:37:40.402 [2024-11-17 02:57:48.734275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.402 [2024-11-17 02:57:48.734345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.402 qpair failed and we were unable to recover it.
00:37:40.402 [2024-11-17 02:57:48.734484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.402 [2024-11-17 02:57:48.734525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.402 qpair failed and we were unable to recover it.
00:37:40.402 [2024-11-17 02:57:48.734721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.402 [2024-11-17 02:57:48.734798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.402 qpair failed and we were unable to recover it.
00:37:40.402 [2024-11-17 02:57:48.734944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.402 [2024-11-17 02:57:48.734983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.402 qpair failed and we were unable to recover it.
00:37:40.402 [2024-11-17 02:57:48.735111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.402 [2024-11-17 02:57:48.735152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.402 qpair failed and we were unable to recover it.
00:37:40.402 [2024-11-17 02:57:48.735271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.402 [2024-11-17 02:57:48.735310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.402 qpair failed and we were unable to recover it.
00:37:40.402 [2024-11-17 02:57:48.735487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.402 [2024-11-17 02:57:48.735523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.402 qpair failed and we were unable to recover it.
00:37:40.402 [2024-11-17 02:57:48.735656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.402 [2024-11-17 02:57:48.735740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.402 qpair failed and we were unable to recover it.
00:37:40.402 [2024-11-17 02:57:48.735920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.402 [2024-11-17 02:57:48.735959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.402 qpair failed and we were unable to recover it.
00:37:40.402 [2024-11-17 02:57:48.736072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.402 [2024-11-17 02:57:48.736118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.402 qpair failed and we were unable to recover it.
00:37:40.402 [2024-11-17 02:57:48.736270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.402 [2024-11-17 02:57:48.736319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.402 qpair failed and we were unable to recover it.
00:37:40.402 [2024-11-17 02:57:48.736457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.402 [2024-11-17 02:57:48.736499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.402 qpair failed and we were unable to recover it.
00:37:40.402 [2024-11-17 02:57:48.736700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.402 [2024-11-17 02:57:48.736761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.402 qpair failed and we were unable to recover it.
00:37:40.402 [2024-11-17 02:57:48.736933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.402 [2024-11-17 02:57:48.736972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.402 qpair failed and we were unable to recover it.
00:37:40.402 [2024-11-17 02:57:48.737145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.402 [2024-11-17 02:57:48.737183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.402 qpair failed and we were unable to recover it.
00:37:40.402 [2024-11-17 02:57:48.737314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.402 [2024-11-17 02:57:48.737363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.402 qpair failed and we were unable to recover it.
00:37:40.402 [2024-11-17 02:57:48.737476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.402 [2024-11-17 02:57:48.737529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.402 qpair failed and we were unable to recover it.
00:37:40.402 [2024-11-17 02:57:48.737640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.402 [2024-11-17 02:57:48.737678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.402 qpair failed and we were unable to recover it.
00:37:40.402 [2024-11-17 02:57:48.737894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.402 [2024-11-17 02:57:48.737962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.402 qpair failed and we were unable to recover it.
00:37:40.402 [2024-11-17 02:57:48.738133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.402 [2024-11-17 02:57:48.738199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.402 qpair failed and we were unable to recover it.
00:37:40.402 [2024-11-17 02:57:48.738326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.402 [2024-11-17 02:57:48.738363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.402 qpair failed and we were unable to recover it.
00:37:40.403 [2024-11-17 02:57:48.738555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.403 [2024-11-17 02:57:48.738626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.403 qpair failed and we were unable to recover it.
00:37:40.403 [2024-11-17 02:57:48.738827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.403 [2024-11-17 02:57:48.738884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.403 qpair failed and we were unable to recover it.
00:37:40.403 [2024-11-17 02:57:48.739029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.403 [2024-11-17 02:57:48.739066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.403 qpair failed and we were unable to recover it.
00:37:40.403 [2024-11-17 02:57:48.739225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.403 [2024-11-17 02:57:48.739263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.403 qpair failed and we were unable to recover it.
00:37:40.403 [2024-11-17 02:57:48.739430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.403 [2024-11-17 02:57:48.739467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.403 qpair failed and we were unable to recover it.
00:37:40.403 [2024-11-17 02:57:48.739687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.403 [2024-11-17 02:57:48.739727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.403 qpair failed and we were unable to recover it.
00:37:40.403 [2024-11-17 02:57:48.739999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.403 [2024-11-17 02:57:48.740059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.403 qpair failed and we were unable to recover it.
00:37:40.403 [2024-11-17 02:57:48.740255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.403 [2024-11-17 02:57:48.740303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.403 qpair failed and we were unable to recover it.
00:37:40.403 [2024-11-17 02:57:48.740494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.403 [2024-11-17 02:57:48.740536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.403 qpair failed and we were unable to recover it.
00:37:40.403 [2024-11-17 02:57:48.740756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.403 [2024-11-17 02:57:48.740816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.403 qpair failed and we were unable to recover it.
00:37:40.403 [2024-11-17 02:57:48.740944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.403 [2024-11-17 02:57:48.741009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.403 qpair failed and we were unable to recover it.
00:37:40.403 [2024-11-17 02:57:48.741154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.403 [2024-11-17 02:57:48.741190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.403 qpair failed and we were unable to recover it.
00:37:40.403 [2024-11-17 02:57:48.741304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.403 [2024-11-17 02:57:48.741339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.403 qpair failed and we were unable to recover it.
00:37:40.403 [2024-11-17 02:57:48.741500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.403 [2024-11-17 02:57:48.741559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.403 qpair failed and we were unable to recover it.
00:37:40.403 [2024-11-17 02:57:48.741739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.403 [2024-11-17 02:57:48.741812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.403 qpair failed and we were unable to recover it. 00:37:40.403 [2024-11-17 02:57:48.741957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.403 [2024-11-17 02:57:48.742012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.403 qpair failed and we were unable to recover it. 00:37:40.403 [2024-11-17 02:57:48.742159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.403 [2024-11-17 02:57:48.742195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.403 qpair failed and we were unable to recover it. 00:37:40.403 [2024-11-17 02:57:48.742330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.403 [2024-11-17 02:57:48.742365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.403 qpair failed and we were unable to recover it. 00:37:40.403 [2024-11-17 02:57:48.742496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.403 [2024-11-17 02:57:48.742555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.403 qpair failed and we were unable to recover it. 
00:37:40.403 [2024-11-17 02:57:48.742715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.403 [2024-11-17 02:57:48.742760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.403 qpair failed and we were unable to recover it. 00:37:40.403 [2024-11-17 02:57:48.742910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.403 [2024-11-17 02:57:48.742959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.403 qpair failed and we were unable to recover it. 00:37:40.403 [2024-11-17 02:57:48.743104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.403 [2024-11-17 02:57:48.743170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.403 qpair failed and we were unable to recover it. 00:37:40.403 [2024-11-17 02:57:48.743318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.403 [2024-11-17 02:57:48.743355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.403 qpair failed and we were unable to recover it. 00:37:40.403 [2024-11-17 02:57:48.743510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.403 [2024-11-17 02:57:48.743564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.403 qpair failed and we were unable to recover it. 
00:37:40.403 [2024-11-17 02:57:48.743723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.403 [2024-11-17 02:57:48.743784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.403 qpair failed and we were unable to recover it. 00:37:40.403 [2024-11-17 02:57:48.743956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.403 [2024-11-17 02:57:48.743997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.403 qpair failed and we were unable to recover it. 00:37:40.403 [2024-11-17 02:57:48.744114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.403 [2024-11-17 02:57:48.744161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.403 qpair failed and we were unable to recover it. 00:37:40.403 [2024-11-17 02:57:48.744268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.403 [2024-11-17 02:57:48.744303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.403 qpair failed and we were unable to recover it. 00:37:40.403 [2024-11-17 02:57:48.744467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.403 [2024-11-17 02:57:48.744504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.403 qpair failed and we were unable to recover it. 
00:37:40.403 [2024-11-17 02:57:48.744669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.403 [2024-11-17 02:57:48.744705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.403 qpair failed and we were unable to recover it. 00:37:40.403 [2024-11-17 02:57:48.744847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.403 [2024-11-17 02:57:48.744888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.403 qpair failed and we were unable to recover it. 00:37:40.403 [2024-11-17 02:57:48.745038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.403 [2024-11-17 02:57:48.745074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.403 qpair failed and we were unable to recover it. 00:37:40.403 [2024-11-17 02:57:48.745250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.403 [2024-11-17 02:57:48.745299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.403 qpair failed and we were unable to recover it. 00:37:40.403 [2024-11-17 02:57:48.745466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.403 [2024-11-17 02:57:48.745507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.403 qpair failed and we were unable to recover it. 
00:37:40.403 [2024-11-17 02:57:48.745697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.404 [2024-11-17 02:57:48.745737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.404 qpair failed and we were unable to recover it. 00:37:40.404 [2024-11-17 02:57:48.745891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.404 [2024-11-17 02:57:48.745937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.404 qpair failed and we were unable to recover it. 00:37:40.404 [2024-11-17 02:57:48.746109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.404 [2024-11-17 02:57:48.746172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.404 qpair failed and we were unable to recover it. 00:37:40.404 [2024-11-17 02:57:48.746332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.404 [2024-11-17 02:57:48.746392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.404 qpair failed and we were unable to recover it. 00:37:40.404 [2024-11-17 02:57:48.746514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.404 [2024-11-17 02:57:48.746553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.404 qpair failed and we were unable to recover it. 
00:37:40.404 [2024-11-17 02:57:48.746721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.404 [2024-11-17 02:57:48.746775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.404 qpair failed and we were unable to recover it. 00:37:40.404 [2024-11-17 02:57:48.746910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.404 [2024-11-17 02:57:48.746946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.404 qpair failed and we were unable to recover it. 00:37:40.404 [2024-11-17 02:57:48.747109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.404 [2024-11-17 02:57:48.747156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.404 qpair failed and we were unable to recover it. 00:37:40.404 [2024-11-17 02:57:48.747254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.404 [2024-11-17 02:57:48.747288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.404 qpair failed and we were unable to recover it. 00:37:40.404 [2024-11-17 02:57:48.747445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.404 [2024-11-17 02:57:48.747481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.404 qpair failed and we were unable to recover it. 
00:37:40.404 [2024-11-17 02:57:48.747608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.404 [2024-11-17 02:57:48.747643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.404 qpair failed and we were unable to recover it. 00:37:40.404 [2024-11-17 02:57:48.747808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.404 [2024-11-17 02:57:48.747843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.404 qpair failed and we were unable to recover it. 00:37:40.404 [2024-11-17 02:57:48.747980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.404 [2024-11-17 02:57:48.748015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.404 qpair failed and we were unable to recover it. 00:37:40.404 [2024-11-17 02:57:48.748138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.404 [2024-11-17 02:57:48.748173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.404 qpair failed and we were unable to recover it. 00:37:40.404 [2024-11-17 02:57:48.748308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.404 [2024-11-17 02:57:48.748341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.404 qpair failed and we were unable to recover it. 
00:37:40.404 [2024-11-17 02:57:48.748673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.404 [2024-11-17 02:57:48.748731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.404 qpair failed and we were unable to recover it. 00:37:40.404 [2024-11-17 02:57:48.748879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.404 [2024-11-17 02:57:48.748917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.404 qpair failed and we were unable to recover it. 00:37:40.404 [2024-11-17 02:57:48.749062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.404 [2024-11-17 02:57:48.749110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.404 qpair failed and we were unable to recover it. 00:37:40.404 [2024-11-17 02:57:48.749241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.404 [2024-11-17 02:57:48.749276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.404 qpair failed and we were unable to recover it. 00:37:40.404 [2024-11-17 02:57:48.749456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.404 [2024-11-17 02:57:48.749509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.404 qpair failed and we were unable to recover it. 
00:37:40.404 [2024-11-17 02:57:48.749684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.404 [2024-11-17 02:57:48.749722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.404 qpair failed and we were unable to recover it. 00:37:40.404 [2024-11-17 02:57:48.749985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.404 [2024-11-17 02:57:48.750034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.404 qpair failed and we were unable to recover it. 00:37:40.404 [2024-11-17 02:57:48.750192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.404 [2024-11-17 02:57:48.750228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.404 qpair failed and we were unable to recover it. 00:37:40.404 [2024-11-17 02:57:48.750394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.404 [2024-11-17 02:57:48.750449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.404 qpair failed and we were unable to recover it. 00:37:40.404 [2024-11-17 02:57:48.750713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.404 [2024-11-17 02:57:48.750771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.404 qpair failed and we were unable to recover it. 
00:37:40.404 [2024-11-17 02:57:48.750908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.404 [2024-11-17 02:57:48.750970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.404 qpair failed and we were unable to recover it. 00:37:40.404 [2024-11-17 02:57:48.751205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.404 [2024-11-17 02:57:48.751241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.404 qpair failed and we were unable to recover it. 00:37:40.404 [2024-11-17 02:57:48.751415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.404 [2024-11-17 02:57:48.751467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.404 qpair failed and we were unable to recover it. 00:37:40.404 [2024-11-17 02:57:48.751675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.404 [2024-11-17 02:57:48.751714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.404 qpair failed and we were unable to recover it. 00:37:40.404 [2024-11-17 02:57:48.751827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.404 [2024-11-17 02:57:48.751870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.404 qpair failed and we were unable to recover it. 
00:37:40.404 [2024-11-17 02:57:48.752045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.404 [2024-11-17 02:57:48.752110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.404 qpair failed and we were unable to recover it. 00:37:40.404 [2024-11-17 02:57:48.752288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.404 [2024-11-17 02:57:48.752372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.405 qpair failed and we were unable to recover it. 00:37:40.405 [2024-11-17 02:57:48.752639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.405 [2024-11-17 02:57:48.752704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.405 qpair failed and we were unable to recover it. 00:37:40.405 [2024-11-17 02:57:48.752956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.405 [2024-11-17 02:57:48.753016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.405 qpair failed and we were unable to recover it. 00:37:40.405 [2024-11-17 02:57:48.753203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.405 [2024-11-17 02:57:48.753238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.405 qpair failed and we were unable to recover it. 
00:37:40.405 [2024-11-17 02:57:48.753346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.405 [2024-11-17 02:57:48.753380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.405 qpair failed and we were unable to recover it. 00:37:40.405 [2024-11-17 02:57:48.753520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.405 [2024-11-17 02:57:48.753571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.405 qpair failed and we were unable to recover it. 00:37:40.405 [2024-11-17 02:57:48.753812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.405 [2024-11-17 02:57:48.753873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.405 qpair failed and we were unable to recover it. 00:37:40.405 [2024-11-17 02:57:48.754028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.405 [2024-11-17 02:57:48.754062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.405 qpair failed and we were unable to recover it. 00:37:40.405 [2024-11-17 02:57:48.754198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.405 [2024-11-17 02:57:48.754247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.405 qpair failed and we were unable to recover it. 
00:37:40.405 [2024-11-17 02:57:48.754396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.405 [2024-11-17 02:57:48.754433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.405 qpair failed and we were unable to recover it. 00:37:40.405 [2024-11-17 02:57:48.754646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.405 [2024-11-17 02:57:48.754686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.405 qpair failed and we were unable to recover it. 00:37:40.405 [2024-11-17 02:57:48.754879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.405 [2024-11-17 02:57:48.754915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.405 qpair failed and we were unable to recover it. 00:37:40.405 [2024-11-17 02:57:48.755082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.405 [2024-11-17 02:57:48.755152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.405 qpair failed and we were unable to recover it. 00:37:40.405 [2024-11-17 02:57:48.755261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.405 [2024-11-17 02:57:48.755295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.405 qpair failed and we were unable to recover it. 
00:37:40.405 [2024-11-17 02:57:48.755542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.405 [2024-11-17 02:57:48.755614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.405 qpair failed and we were unable to recover it. 00:37:40.405 [2024-11-17 02:57:48.755823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.405 [2024-11-17 02:57:48.755874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.405 qpair failed and we were unable to recover it. 00:37:40.405 [2024-11-17 02:57:48.756008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.405 [2024-11-17 02:57:48.756042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.405 qpair failed and we were unable to recover it. 00:37:40.405 [2024-11-17 02:57:48.756185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.405 [2024-11-17 02:57:48.756224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.405 qpair failed and we were unable to recover it. 00:37:40.405 [2024-11-17 02:57:48.756402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.405 [2024-11-17 02:57:48.756439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.405 qpair failed and we were unable to recover it. 
00:37:40.405 [2024-11-17 02:57:48.756601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.405 [2024-11-17 02:57:48.756679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.405 qpair failed and we were unable to recover it. 00:37:40.405 [2024-11-17 02:57:48.756889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.405 [2024-11-17 02:57:48.756952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.405 qpair failed and we were unable to recover it. 00:37:40.405 [2024-11-17 02:57:48.757112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.405 [2024-11-17 02:57:48.757172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.405 qpair failed and we were unable to recover it. 00:37:40.405 [2024-11-17 02:57:48.757309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.405 [2024-11-17 02:57:48.757358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.405 qpair failed and we were unable to recover it. 00:37:40.405 [2024-11-17 02:57:48.757532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.405 [2024-11-17 02:57:48.757588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.405 qpair failed and we were unable to recover it. 
00:37:40.405 [2024-11-17 02:57:48.757799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.405 [2024-11-17 02:57:48.757859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.405 qpair failed and we were unable to recover it. 00:37:40.405 [2024-11-17 02:57:48.758002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.405 [2024-11-17 02:57:48.758038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.405 qpair failed and we were unable to recover it. 00:37:40.405 [2024-11-17 02:57:48.758158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.405 [2024-11-17 02:57:48.758193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.405 qpair failed and we were unable to recover it. 00:37:40.405 [2024-11-17 02:57:48.758328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.405 [2024-11-17 02:57:48.758378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.405 qpair failed and we were unable to recover it. 00:37:40.405 [2024-11-17 02:57:48.758580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.405 [2024-11-17 02:57:48.758642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.405 qpair failed and we were unable to recover it. 
00:37:40.405 [2024-11-17 02:57:48.758789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.405 [2024-11-17 02:57:48.758848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.405 qpair failed and we were unable to recover it. 00:37:40.405 [2024-11-17 02:57:48.758977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.405 [2024-11-17 02:57:48.759013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.405 qpair failed and we were unable to recover it. 00:37:40.405 [2024-11-17 02:57:48.759155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.405 [2024-11-17 02:57:48.759211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.405 qpair failed and we were unable to recover it. 00:37:40.405 [2024-11-17 02:57:48.759369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.405 [2024-11-17 02:57:48.759428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.405 qpair failed and we were unable to recover it. 00:37:40.406 [2024-11-17 02:57:48.759656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.406 [2024-11-17 02:57:48.759721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.406 qpair failed and we were unable to recover it. 
00:37:40.406 [2024-11-17 02:57:48.759859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.406 [2024-11-17 02:57:48.759894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.406 qpair failed and we were unable to recover it.
00:37:40.406 [2024-11-17 02:57:48.760033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.406 [2024-11-17 02:57:48.760069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.406 qpair failed and we were unable to recover it.
00:37:40.406 [2024-11-17 02:57:48.760201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.406 [2024-11-17 02:57:48.760236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.406 qpair failed and we were unable to recover it.
00:37:40.406 [2024-11-17 02:57:48.760396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.406 [2024-11-17 02:57:48.760436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.406 qpair failed and we were unable to recover it.
00:37:40.406 [2024-11-17 02:57:48.760701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.406 [2024-11-17 02:57:48.760768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.406 qpair failed and we were unable to recover it.
00:37:40.406 [2024-11-17 02:57:48.760977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.406 [2024-11-17 02:57:48.761019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.406 qpair failed and we were unable to recover it.
00:37:40.406 [2024-11-17 02:57:48.761202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.406 [2024-11-17 02:57:48.761238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.406 qpair failed and we were unable to recover it.
00:37:40.406 [2024-11-17 02:57:48.761348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.406 [2024-11-17 02:57:48.761409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.406 qpair failed and we were unable to recover it.
00:37:40.406 [2024-11-17 02:57:48.761667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.406 [2024-11-17 02:57:48.761705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.406 qpair failed and we were unable to recover it.
00:37:40.406 [2024-11-17 02:57:48.761918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.406 [2024-11-17 02:57:48.761976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.406 qpair failed and we were unable to recover it.
00:37:40.406 [2024-11-17 02:57:48.762176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.406 [2024-11-17 02:57:48.762212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.406 qpair failed and we were unable to recover it.
00:37:40.406 [2024-11-17 02:57:48.762370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.406 [2024-11-17 02:57:48.762409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.406 qpair failed and we were unable to recover it.
00:37:40.406 [2024-11-17 02:57:48.762581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.406 [2024-11-17 02:57:48.762621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.406 qpair failed and we were unable to recover it.
00:37:40.406 [2024-11-17 02:57:48.762826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.406 [2024-11-17 02:57:48.762885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.406 qpair failed and we were unable to recover it.
00:37:40.406 [2024-11-17 02:57:48.763112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.406 [2024-11-17 02:57:48.763179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.406 qpair failed and we were unable to recover it.
00:37:40.406 [2024-11-17 02:57:48.763325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.406 [2024-11-17 02:57:48.763364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.406 qpair failed and we were unable to recover it.
00:37:40.406 [2024-11-17 02:57:48.763497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.406 [2024-11-17 02:57:48.763551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.406 qpair failed and we were unable to recover it.
00:37:40.406 [2024-11-17 02:57:48.763803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.406 [2024-11-17 02:57:48.763860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.406 qpair failed and we were unable to recover it.
00:37:40.406 [2024-11-17 02:57:48.764016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.406 [2024-11-17 02:57:48.764067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.406 qpair failed and we were unable to recover it.
00:37:40.406 [2024-11-17 02:57:48.764240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.406 [2024-11-17 02:57:48.764276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.406 qpair failed and we were unable to recover it.
00:37:40.406 [2024-11-17 02:57:48.764410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.406 [2024-11-17 02:57:48.764465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.406 qpair failed and we were unable to recover it.
00:37:40.406 [2024-11-17 02:57:48.764713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.406 [2024-11-17 02:57:48.764756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.406 qpair failed and we were unable to recover it.
00:37:40.406 [2024-11-17 02:57:48.764968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.406 [2024-11-17 02:57:48.765025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.406 qpair failed and we were unable to recover it.
00:37:40.406 [2024-11-17 02:57:48.765164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.406 [2024-11-17 02:57:48.765200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.406 qpair failed and we were unable to recover it.
00:37:40.406 [2024-11-17 02:57:48.765343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.406 [2024-11-17 02:57:48.765377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.406 qpair failed and we were unable to recover it.
00:37:40.406 [2024-11-17 02:57:48.765546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.406 [2024-11-17 02:57:48.765585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.406 qpair failed and we were unable to recover it.
00:37:40.406 [2024-11-17 02:57:48.765846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.406 [2024-11-17 02:57:48.765903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.406 qpair failed and we were unable to recover it.
00:37:40.406 [2024-11-17 02:57:48.766083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.406 [2024-11-17 02:57:48.766126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.406 qpair failed and we were unable to recover it.
00:37:40.406 [2024-11-17 02:57:48.766234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.406 [2024-11-17 02:57:48.766268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.406 qpair failed and we were unable to recover it.
00:37:40.406 [2024-11-17 02:57:48.766420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.406 [2024-11-17 02:57:48.766460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.406 qpair failed and we were unable to recover it.
00:37:40.406 [2024-11-17 02:57:48.766721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.407 [2024-11-17 02:57:48.766778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.407 qpair failed and we were unable to recover it.
00:37:40.407 [2024-11-17 02:57:48.767044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.407 [2024-11-17 02:57:48.767113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.407 qpair failed and we were unable to recover it.
00:37:40.407 [2024-11-17 02:57:48.767307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.407 [2024-11-17 02:57:48.767343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.407 qpair failed and we were unable to recover it.
00:37:40.407 [2024-11-17 02:57:48.767482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.407 [2024-11-17 02:57:48.767516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.407 qpair failed and we were unable to recover it.
00:37:40.407 [2024-11-17 02:57:48.767652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.407 [2024-11-17 02:57:48.767691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.407 qpair failed and we were unable to recover it.
00:37:40.407 [2024-11-17 02:57:48.767903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.407 [2024-11-17 02:57:48.767964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.407 qpair failed and we were unable to recover it.
00:37:40.407 [2024-11-17 02:57:48.768127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.407 [2024-11-17 02:57:48.768162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.407 qpair failed and we were unable to recover it.
00:37:40.407 [2024-11-17 02:57:48.768324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.407 [2024-11-17 02:57:48.768359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.407 qpair failed and we were unable to recover it.
00:37:40.407 [2024-11-17 02:57:48.768619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.407 [2024-11-17 02:57:48.768658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.407 qpair failed and we were unable to recover it.
00:37:40.407 [2024-11-17 02:57:48.768856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.407 [2024-11-17 02:57:48.768917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.407 qpair failed and we were unable to recover it.
00:37:40.407 [2024-11-17 02:57:48.769055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.407 [2024-11-17 02:57:48.769094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.407 qpair failed and we were unable to recover it.
00:37:40.407 [2024-11-17 02:57:48.769246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.407 [2024-11-17 02:57:48.769280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.407 qpair failed and we were unable to recover it.
00:37:40.407 [2024-11-17 02:57:48.769442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.407 [2024-11-17 02:57:48.769475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.407 qpair failed and we were unable to recover it.
00:37:40.407 [2024-11-17 02:57:48.769643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.407 [2024-11-17 02:57:48.769712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.407 qpair failed and we were unable to recover it.
00:37:40.407 [2024-11-17 02:57:48.769832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.407 [2024-11-17 02:57:48.769870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.407 qpair failed and we were unable to recover it.
00:37:40.407 [2024-11-17 02:57:48.770018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.407 [2024-11-17 02:57:48.770056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.407 qpair failed and we were unable to recover it.
00:37:40.407 [2024-11-17 02:57:48.770245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.407 [2024-11-17 02:57:48.770295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.407 qpair failed and we were unable to recover it.
00:37:40.407 [2024-11-17 02:57:48.770430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.407 [2024-11-17 02:57:48.770485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.407 qpair failed and we were unable to recover it.
00:37:40.407 [2024-11-17 02:57:48.770742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.407 [2024-11-17 02:57:48.770786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.407 qpair failed and we were unable to recover it.
00:37:40.407 [2024-11-17 02:57:48.770965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.407 [2024-11-17 02:57:48.771005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.407 qpair failed and we were unable to recover it.
00:37:40.407 [2024-11-17 02:57:48.771190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.407 [2024-11-17 02:57:48.771226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.407 qpair failed and we were unable to recover it.
00:37:40.407 [2024-11-17 02:57:48.771351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.407 [2024-11-17 02:57:48.771401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.407 qpair failed and we were unable to recover it.
00:37:40.407 [2024-11-17 02:57:48.771594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.407 [2024-11-17 02:57:48.771659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.407 qpair failed and we were unable to recover it.
00:37:40.407 [2024-11-17 02:57:48.771880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.407 [2024-11-17 02:57:48.771939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.407 qpair failed and we were unable to recover it.
00:37:40.407 [2024-11-17 02:57:48.772085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.407 [2024-11-17 02:57:48.772127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.407 qpair failed and we were unable to recover it.
00:37:40.407 [2024-11-17 02:57:48.772246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.407 [2024-11-17 02:57:48.772280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.407 qpair failed and we were unable to recover it.
00:37:40.407 [2024-11-17 02:57:48.772448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.407 [2024-11-17 02:57:48.772516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.407 qpair failed and we were unable to recover it.
00:37:40.407 [2024-11-17 02:57:48.772781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.407 [2024-11-17 02:57:48.772824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.407 qpair failed and we were unable to recover it.
00:37:40.407 [2024-11-17 02:57:48.772977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.407 [2024-11-17 02:57:48.773029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.407 qpair failed and we were unable to recover it.
00:37:40.407 [2024-11-17 02:57:48.773171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.407 [2024-11-17 02:57:48.773208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.407 qpair failed and we were unable to recover it.
00:37:40.407 [2024-11-17 02:57:48.773318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.407 [2024-11-17 02:57:48.773352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.408 qpair failed and we were unable to recover it.
00:37:40.408 [2024-11-17 02:57:48.773488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.408 [2024-11-17 02:57:48.773555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.408 qpair failed and we were unable to recover it.
00:37:40.408 [2024-11-17 02:57:48.773718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.408 [2024-11-17 02:57:48.773760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.408 qpair failed and we were unable to recover it.
00:37:40.408 [2024-11-17 02:57:48.773922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.408 [2024-11-17 02:57:48.773985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.408 qpair failed and we were unable to recover it.
00:37:40.408 [2024-11-17 02:57:48.774195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.408 [2024-11-17 02:57:48.774231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.408 qpair failed and we were unable to recover it.
00:37:40.408 [2024-11-17 02:57:48.774346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.408 [2024-11-17 02:57:48.774403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.408 qpair failed and we were unable to recover it.
00:37:40.408 [2024-11-17 02:57:48.774562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.408 [2024-11-17 02:57:48.774601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.408 qpair failed and we were unable to recover it.
00:37:40.408 [2024-11-17 02:57:48.774763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.408 [2024-11-17 02:57:48.774802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.408 qpair failed and we were unable to recover it.
00:37:40.408 [2024-11-17 02:57:48.774983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.408 [2024-11-17 02:57:48.775027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.408 qpair failed and we were unable to recover it.
00:37:40.408 [2024-11-17 02:57:48.775235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.408 [2024-11-17 02:57:48.775271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.408 qpair failed and we were unable to recover it.
00:37:40.408 [2024-11-17 02:57:48.775373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.408 [2024-11-17 02:57:48.775405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.408 qpair failed and we were unable to recover it.
00:37:40.408 [2024-11-17 02:57:48.775547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.408 [2024-11-17 02:57:48.775585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.408 qpair failed and we were unable to recover it.
00:37:40.408 [2024-11-17 02:57:48.775757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.408 [2024-11-17 02:57:48.775796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.408 qpair failed and we were unable to recover it.
00:37:40.408 [2024-11-17 02:57:48.775968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.408 [2024-11-17 02:57:48.776006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.408 qpair failed and we were unable to recover it.
00:37:40.408 [2024-11-17 02:57:48.776183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.408 [2024-11-17 02:57:48.776225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.408 qpair failed and we were unable to recover it.
00:37:40.408 [2024-11-17 02:57:48.776390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.408 [2024-11-17 02:57:48.776444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.408 qpair failed and we were unable to recover it.
00:37:40.408 [2024-11-17 02:57:48.776569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.408 [2024-11-17 02:57:48.776607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.408 qpair failed and we were unable to recover it.
00:37:40.408 [2024-11-17 02:57:48.776760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.408 [2024-11-17 02:57:48.776797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.408 qpair failed and we were unable to recover it.
00:37:40.408 [2024-11-17 02:57:48.776972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.408 [2024-11-17 02:57:48.777011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.408 qpair failed and we were unable to recover it.
00:37:40.408 [2024-11-17 02:57:48.777122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.408 [2024-11-17 02:57:48.777174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.408 qpair failed and we were unable to recover it.
00:37:40.408 [2024-11-17 02:57:48.777308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.408 [2024-11-17 02:57:48.777347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.408 qpair failed and we were unable to recover it.
00:37:40.408 [2024-11-17 02:57:48.777474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.408 [2024-11-17 02:57:48.777509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.408 qpair failed and we were unable to recover it.
00:37:40.408 [2024-11-17 02:57:48.777621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.408 [2024-11-17 02:57:48.777655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.408 qpair failed and we were unable to recover it.
00:37:40.408 [2024-11-17 02:57:48.777761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.409 [2024-11-17 02:57:48.777798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.409 qpair failed and we were unable to recover it.
00:37:40.409 [2024-11-17 02:57:48.777951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.409 [2024-11-17 02:57:48.777992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.409 qpair failed and we were unable to recover it.
00:37:40.409 [2024-11-17 02:57:48.778132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.409 [2024-11-17 02:57:48.778168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.409 qpair failed and we were unable to recover it.
00:37:40.409 [2024-11-17 02:57:48.778331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.409 [2024-11-17 02:57:48.778369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.409 qpair failed and we were unable to recover it.
00:37:40.409 [2024-11-17 02:57:48.778506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.409 [2024-11-17 02:57:48.778570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.409 qpair failed and we were unable to recover it.
00:37:40.409 [2024-11-17 02:57:48.778733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.409 [2024-11-17 02:57:48.778772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.409 qpair failed and we were unable to recover it.
00:37:40.409 [2024-11-17 02:57:48.778924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.409 [2024-11-17 02:57:48.778964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.409 qpair failed and we were unable to recover it.
00:37:40.409 [2024-11-17 02:57:48.779127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.409 [2024-11-17 02:57:48.779163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.409 qpair failed and we were unable to recover it.
00:37:40.409 [2024-11-17 02:57:48.779298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.409 [2024-11-17 02:57:48.779341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.409 qpair failed and we were unable to recover it.
00:37:40.409 [2024-11-17 02:57:48.779476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.409 [2024-11-17 02:57:48.779509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.409 qpair failed and we were unable to recover it.
00:37:40.409 [2024-11-17 02:57:48.779665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.409 [2024-11-17 02:57:48.779703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.409 qpair failed and we were unable to recover it.
00:37:40.409 [2024-11-17 02:57:48.779881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.409 [2024-11-17 02:57:48.779936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.409 qpair failed and we were unable to recover it.
00:37:40.409 [2024-11-17 02:57:48.780141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.409 [2024-11-17 02:57:48.780178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.409 qpair failed and we were unable to recover it. 00:37:40.409 [2024-11-17 02:57:48.780285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.409 [2024-11-17 02:57:48.780318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.409 qpair failed and we were unable to recover it. 00:37:40.409 [2024-11-17 02:57:48.780455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.409 [2024-11-17 02:57:48.780490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.409 qpair failed and we were unable to recover it. 00:37:40.409 [2024-11-17 02:57:48.780654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.409 [2024-11-17 02:57:48.780689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.409 qpair failed and we were unable to recover it. 00:37:40.409 [2024-11-17 02:57:48.780854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.409 [2024-11-17 02:57:48.780893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.409 qpair failed and we were unable to recover it. 
00:37:40.409 [2024-11-17 02:57:48.781056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.409 [2024-11-17 02:57:48.781107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.409 qpair failed and we were unable to recover it. 00:37:40.409 [2024-11-17 02:57:48.781292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.409 [2024-11-17 02:57:48.781342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.409 qpair failed and we were unable to recover it. 00:37:40.409 [2024-11-17 02:57:48.781496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.409 [2024-11-17 02:57:48.781534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.409 qpair failed and we were unable to recover it. 00:37:40.409 [2024-11-17 02:57:48.781649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.409 [2024-11-17 02:57:48.781686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.409 qpair failed and we were unable to recover it. 00:37:40.409 [2024-11-17 02:57:48.781864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.409 [2024-11-17 02:57:48.781901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.409 qpair failed and we were unable to recover it. 
00:37:40.409 [2024-11-17 02:57:48.782079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.409 [2024-11-17 02:57:48.782144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.409 qpair failed and we were unable to recover it. 00:37:40.409 [2024-11-17 02:57:48.782309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.409 [2024-11-17 02:57:48.782348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.409 qpair failed and we were unable to recover it. 00:37:40.409 [2024-11-17 02:57:48.782503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.409 [2024-11-17 02:57:48.782542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.409 qpair failed and we were unable to recover it. 00:37:40.409 [2024-11-17 02:57:48.782703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.409 [2024-11-17 02:57:48.782738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.409 qpair failed and we were unable to recover it. 00:37:40.409 [2024-11-17 02:57:48.782893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.409 [2024-11-17 02:57:48.782932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.409 qpair failed and we were unable to recover it. 
00:37:40.409 [2024-11-17 02:57:48.783082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.409 [2024-11-17 02:57:48.783134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.409 qpair failed and we were unable to recover it. 00:37:40.409 [2024-11-17 02:57:48.783269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.409 [2024-11-17 02:57:48.783306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.409 qpair failed and we were unable to recover it. 00:37:40.409 [2024-11-17 02:57:48.783438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.409 [2024-11-17 02:57:48.783477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.409 qpair failed and we were unable to recover it. 00:37:40.409 [2024-11-17 02:57:48.783619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.409 [2024-11-17 02:57:48.783658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.409 qpair failed and we were unable to recover it. 00:37:40.409 [2024-11-17 02:57:48.783848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.409 [2024-11-17 02:57:48.783887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.409 qpair failed and we were unable to recover it. 
00:37:40.409 [2024-11-17 02:57:48.784010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.410 [2024-11-17 02:57:48.784053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.410 qpair failed and we were unable to recover it. 00:37:40.410 [2024-11-17 02:57:48.784225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.410 [2024-11-17 02:57:48.784265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.410 qpair failed and we were unable to recover it. 00:37:40.410 [2024-11-17 02:57:48.784375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.410 [2024-11-17 02:57:48.784410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.410 qpair failed and we were unable to recover it. 00:37:40.410 [2024-11-17 02:57:48.784553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.410 [2024-11-17 02:57:48.784588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.410 qpair failed and we were unable to recover it. 00:37:40.410 [2024-11-17 02:57:48.784733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.410 [2024-11-17 02:57:48.784768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.410 qpair failed and we were unable to recover it. 
00:37:40.410 [2024-11-17 02:57:48.784956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.410 [2024-11-17 02:57:48.785005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.410 qpair failed and we were unable to recover it. 00:37:40.410 [2024-11-17 02:57:48.785158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.410 [2024-11-17 02:57:48.785194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.410 qpair failed and we were unable to recover it. 00:37:40.410 [2024-11-17 02:57:48.785304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.410 [2024-11-17 02:57:48.785336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.410 qpair failed and we were unable to recover it. 00:37:40.410 [2024-11-17 02:57:48.785445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.410 [2024-11-17 02:57:48.785479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.410 qpair failed and we were unable to recover it. 00:37:40.410 [2024-11-17 02:57:48.785586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.410 [2024-11-17 02:57:48.785620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.410 qpair failed and we were unable to recover it. 
00:37:40.410 [2024-11-17 02:57:48.785823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.410 [2024-11-17 02:57:48.785892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.410 qpair failed and we were unable to recover it. 00:37:40.410 [2024-11-17 02:57:48.786083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.410 [2024-11-17 02:57:48.786142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.410 qpair failed and we were unable to recover it. 00:37:40.410 [2024-11-17 02:57:48.786293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.410 [2024-11-17 02:57:48.786334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.410 qpair failed and we were unable to recover it. 00:37:40.410 [2024-11-17 02:57:48.786496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.410 [2024-11-17 02:57:48.786547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.410 qpair failed and we were unable to recover it. 00:37:40.410 [2024-11-17 02:57:48.786805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.410 [2024-11-17 02:57:48.786863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.410 qpair failed and we were unable to recover it. 
00:37:40.410 [2024-11-17 02:57:48.787011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.410 [2024-11-17 02:57:48.787049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.410 qpair failed and we were unable to recover it. 00:37:40.410 [2024-11-17 02:57:48.787247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.410 [2024-11-17 02:57:48.787284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.410 qpair failed and we were unable to recover it. 00:37:40.410 [2024-11-17 02:57:48.787425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.410 [2024-11-17 02:57:48.787461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.410 qpair failed and we were unable to recover it. 00:37:40.410 [2024-11-17 02:57:48.787644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.410 [2024-11-17 02:57:48.787693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.410 qpair failed and we were unable to recover it. 00:37:40.410 [2024-11-17 02:57:48.787849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.410 [2024-11-17 02:57:48.787886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.410 qpair failed and we were unable to recover it. 
00:37:40.410 [2024-11-17 02:57:48.788036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.410 [2024-11-17 02:57:48.788073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.410 qpair failed and we were unable to recover it. 00:37:40.410 [2024-11-17 02:57:48.788216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.410 [2024-11-17 02:57:48.788252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.410 qpair failed and we were unable to recover it. 00:37:40.410 [2024-11-17 02:57:48.788391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.410 [2024-11-17 02:57:48.788427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.410 qpair failed and we were unable to recover it. 00:37:40.410 [2024-11-17 02:57:48.788560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.410 [2024-11-17 02:57:48.788596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.410 qpair failed and we were unable to recover it. 00:37:40.410 [2024-11-17 02:57:48.788702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.410 [2024-11-17 02:57:48.788757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.410 qpair failed and we were unable to recover it. 
00:37:40.410 [2024-11-17 02:57:48.788908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.410 [2024-11-17 02:57:48.788944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.410 qpair failed and we were unable to recover it. 00:37:40.410 [2024-11-17 02:57:48.789128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.410 [2024-11-17 02:57:48.789165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.410 qpair failed and we were unable to recover it. 00:37:40.410 [2024-11-17 02:57:48.789348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.410 [2024-11-17 02:57:48.789397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.410 qpair failed and we were unable to recover it. 00:37:40.410 [2024-11-17 02:57:48.789522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.410 [2024-11-17 02:57:48.789562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.410 qpair failed and we were unable to recover it. 00:37:40.410 [2024-11-17 02:57:48.789703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.410 [2024-11-17 02:57:48.789740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.411 qpair failed and we were unable to recover it. 
00:37:40.411 [2024-11-17 02:57:48.789902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.411 [2024-11-17 02:57:48.789941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.411 qpair failed and we were unable to recover it. 00:37:40.411 [2024-11-17 02:57:48.790086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.411 [2024-11-17 02:57:48.790167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.411 qpair failed and we were unable to recover it. 00:37:40.411 [2024-11-17 02:57:48.790277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.411 [2024-11-17 02:57:48.790311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.411 qpair failed and we were unable to recover it. 00:37:40.411 [2024-11-17 02:57:48.790414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.411 [2024-11-17 02:57:48.790446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.411 qpair failed and we were unable to recover it. 00:37:40.411 [2024-11-17 02:57:48.790578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.411 [2024-11-17 02:57:48.790615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.411 qpair failed and we were unable to recover it. 
00:37:40.411 [2024-11-17 02:57:48.790823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.411 [2024-11-17 02:57:48.790864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.411 qpair failed and we were unable to recover it. 00:37:40.411 [2024-11-17 02:57:48.791024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.411 [2024-11-17 02:57:48.791063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.411 qpair failed and we were unable to recover it. 00:37:40.411 [2024-11-17 02:57:48.791215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.411 [2024-11-17 02:57:48.791254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.411 qpair failed and we were unable to recover it. 00:37:40.411 [2024-11-17 02:57:48.791396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.411 [2024-11-17 02:57:48.791434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.411 qpair failed and we were unable to recover it. 00:37:40.411 [2024-11-17 02:57:48.791570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.411 [2024-11-17 02:57:48.791613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.411 qpair failed and we were unable to recover it. 
00:37:40.411 [2024-11-17 02:57:48.791738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.411 [2024-11-17 02:57:48.791773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.411 qpair failed and we were unable to recover it. 00:37:40.411 [2024-11-17 02:57:48.791907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.411 [2024-11-17 02:57:48.791941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.411 qpair failed and we were unable to recover it. 00:37:40.411 [2024-11-17 02:57:48.792080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.411 [2024-11-17 02:57:48.792129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.411 qpair failed and we were unable to recover it. 00:37:40.411 [2024-11-17 02:57:48.792273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.411 [2024-11-17 02:57:48.792307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.411 qpair failed and we were unable to recover it. 00:37:40.411 [2024-11-17 02:57:48.792471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.411 [2024-11-17 02:57:48.792507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.411 qpair failed and we were unable to recover it. 
00:37:40.411 [2024-11-17 02:57:48.792690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.411 [2024-11-17 02:57:48.792731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.411 qpair failed and we were unable to recover it. 00:37:40.411 [2024-11-17 02:57:48.792924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.411 [2024-11-17 02:57:48.792960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.411 qpair failed and we were unable to recover it. 00:37:40.411 [2024-11-17 02:57:48.793074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.411 [2024-11-17 02:57:48.793118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.411 qpair failed and we were unable to recover it. 00:37:40.411 [2024-11-17 02:57:48.793261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.411 [2024-11-17 02:57:48.793297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.411 qpair failed and we were unable to recover it. 00:37:40.411 [2024-11-17 02:57:48.793407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.411 [2024-11-17 02:57:48.793443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.411 qpair failed and we were unable to recover it. 
00:37:40.411 [2024-11-17 02:57:48.793580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.411 [2024-11-17 02:57:48.793615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.411 qpair failed and we were unable to recover it. 00:37:40.411 [2024-11-17 02:57:48.793779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.411 [2024-11-17 02:57:48.793815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.411 qpair failed and we were unable to recover it. 00:37:40.411 [2024-11-17 02:57:48.793922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.411 [2024-11-17 02:57:48.793958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.411 qpair failed and we were unable to recover it. 00:37:40.411 [2024-11-17 02:57:48.794112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.411 [2024-11-17 02:57:48.794149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.411 qpair failed and we were unable to recover it. 00:37:40.411 [2024-11-17 02:57:48.794324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.411 [2024-11-17 02:57:48.794360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.411 qpair failed and we were unable to recover it. 
00:37:40.411 [2024-11-17 02:57:48.794472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.411 [2024-11-17 02:57:48.794507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.411 qpair failed and we were unable to recover it. 00:37:40.411 [2024-11-17 02:57:48.794643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.411 [2024-11-17 02:57:48.794678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.411 qpair failed and we were unable to recover it. 00:37:40.411 [2024-11-17 02:57:48.794808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.411 [2024-11-17 02:57:48.794848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.411 qpair failed and we were unable to recover it. 00:37:40.411 [2024-11-17 02:57:48.794989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.411 [2024-11-17 02:57:48.795023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.411 qpair failed and we were unable to recover it. 00:37:40.411 [2024-11-17 02:57:48.795187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.411 [2024-11-17 02:57:48.795241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.411 qpair failed and we were unable to recover it. 
00:37:40.411 [2024-11-17 02:57:48.795385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.411 [2024-11-17 02:57:48.795423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.411 qpair failed and we were unable to recover it. 00:37:40.411 [2024-11-17 02:57:48.795586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.411 [2024-11-17 02:57:48.795622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.411 qpair failed and we were unable to recover it. 00:37:40.411 [2024-11-17 02:57:48.795740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.411 [2024-11-17 02:57:48.795780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.411 qpair failed and we were unable to recover it. 00:37:40.412 [2024-11-17 02:57:48.795927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.412 [2024-11-17 02:57:48.795963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.412 qpair failed and we were unable to recover it. 00:37:40.412 [2024-11-17 02:57:48.796105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.412 [2024-11-17 02:57:48.796140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.412 qpair failed and we were unable to recover it. 
00:37:40.412 [2024-11-17 02:57:48.796283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.412 [2024-11-17 02:57:48.796317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.412 qpair failed and we were unable to recover it.
00:37:40.412 [2024-11-17 02:57:48.796462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.412 [2024-11-17 02:57:48.796496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.412 qpair failed and we were unable to recover it.
00:37:40.412 [2024-11-17 02:57:48.796600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.412 [2024-11-17 02:57:48.796650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.412 qpair failed and we were unable to recover it.
00:37:40.412 [2024-11-17 02:57:48.796812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.412 [2024-11-17 02:57:48.796863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.412 qpair failed and we were unable to recover it.
00:37:40.412 [2024-11-17 02:57:48.797029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.412 [2024-11-17 02:57:48.797063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.412 qpair failed and we were unable to recover it.
00:37:40.412 [2024-11-17 02:57:48.797185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.412 [2024-11-17 02:57:48.797218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.412 qpair failed and we were unable to recover it.
00:37:40.412 [2024-11-17 02:57:48.797362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.412 [2024-11-17 02:57:48.797396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.412 qpair failed and we were unable to recover it.
00:37:40.412 [2024-11-17 02:57:48.797530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.412 [2024-11-17 02:57:48.797563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.412 qpair failed and we were unable to recover it.
00:37:40.412 [2024-11-17 02:57:48.797728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.412 [2024-11-17 02:57:48.797763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.412 qpair failed and we were unable to recover it.
00:37:40.412 [2024-11-17 02:57:48.797872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.412 [2024-11-17 02:57:48.797905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.412 qpair failed and we were unable to recover it.
00:37:40.412 [2024-11-17 02:57:48.798052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.412 [2024-11-17 02:57:48.798114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.412 qpair failed and we were unable to recover it.
00:37:40.412 [2024-11-17 02:57:48.798267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.412 [2024-11-17 02:57:48.798306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.412 qpair failed and we were unable to recover it.
00:37:40.412 [2024-11-17 02:57:48.798443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.412 [2024-11-17 02:57:48.798480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.412 qpair failed and we were unable to recover it.
00:37:40.412 [2024-11-17 02:57:48.798601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.412 [2024-11-17 02:57:48.798637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.412 qpair failed and we were unable to recover it.
00:37:40.412 [2024-11-17 02:57:48.798798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.412 [2024-11-17 02:57:48.798838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.412 qpair failed and we were unable to recover it.
00:37:40.412 [2024-11-17 02:57:48.798949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.412 [2024-11-17 02:57:48.798985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.412 qpair failed and we were unable to recover it.
00:37:40.412 [2024-11-17 02:57:48.799113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.412 [2024-11-17 02:57:48.799149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.412 qpair failed and we were unable to recover it.
00:37:40.412 [2024-11-17 02:57:48.799299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.412 [2024-11-17 02:57:48.799335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.412 qpair failed and we were unable to recover it.
00:37:40.412 [2024-11-17 02:57:48.799495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.412 [2024-11-17 02:57:48.799534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.412 qpair failed and we were unable to recover it.
00:37:40.412 [2024-11-17 02:57:48.799655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.412 [2024-11-17 02:57:48.799690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.412 qpair failed and we were unable to recover it.
00:37:40.412 [2024-11-17 02:57:48.799856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.412 [2024-11-17 02:57:48.799891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.412 qpair failed and we were unable to recover it.
00:37:40.412 [2024-11-17 02:57:48.800023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.412 [2024-11-17 02:57:48.800059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.412 qpair failed and we were unable to recover it.
00:37:40.412 [2024-11-17 02:57:48.800233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.412 [2024-11-17 02:57:48.800270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.412 qpair failed and we were unable to recover it.
00:37:40.412 [2024-11-17 02:57:48.800422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.412 [2024-11-17 02:57:48.800460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.412 qpair failed and we were unable to recover it.
00:37:40.412 [2024-11-17 02:57:48.800574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.412 [2024-11-17 02:57:48.800608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.412 qpair failed and we were unable to recover it.
00:37:40.412 [2024-11-17 02:57:48.800769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.412 [2024-11-17 02:57:48.800802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.412 qpair failed and we were unable to recover it.
00:37:40.412 [2024-11-17 02:57:48.800902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.412 [2024-11-17 02:57:48.800934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.412 qpair failed and we were unable to recover it.
00:37:40.412 [2024-11-17 02:57:48.801046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.412 [2024-11-17 02:57:48.801082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.412 qpair failed and we were unable to recover it.
00:37:40.412 [2024-11-17 02:57:48.801263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.412 [2024-11-17 02:57:48.801297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.412 qpair failed and we were unable to recover it.
00:37:40.412 [2024-11-17 02:57:48.801453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.412 [2024-11-17 02:57:48.801487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.412 qpair failed and we were unable to recover it.
00:37:40.412 [2024-11-17 02:57:48.801619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.412 [2024-11-17 02:57:48.801654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.412 qpair failed and we were unable to recover it.
00:37:40.412 [2024-11-17 02:57:48.801721] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2780 (9): Bad file descriptor
00:37:40.412 [2024-11-17 02:57:48.801937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.413 [2024-11-17 02:57:48.801988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.413 qpair failed and we were unable to recover it.
00:37:40.413 [2024-11-17 02:57:48.802138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.413 [2024-11-17 02:57:48.802179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.413 qpair failed and we were unable to recover it.
00:37:40.413 [2024-11-17 02:57:48.802299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.413 [2024-11-17 02:57:48.802336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.413 qpair failed and we were unable to recover it.
00:37:40.413 [2024-11-17 02:57:48.802447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.413 [2024-11-17 02:57:48.802483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.413 qpair failed and we were unable to recover it.
00:37:40.413 [2024-11-17 02:57:48.802623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.413 [2024-11-17 02:57:48.802661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.413 qpair failed and we were unable to recover it.
00:37:40.413 [2024-11-17 02:57:48.802802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.413 [2024-11-17 02:57:48.802837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.413 qpair failed and we were unable to recover it.
00:37:40.413 [2024-11-17 02:57:48.802949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.413 [2024-11-17 02:57:48.802984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.413 qpair failed and we were unable to recover it.
00:37:40.413 [2024-11-17 02:57:48.803089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.413 [2024-11-17 02:57:48.803139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.413 qpair failed and we were unable to recover it.
00:37:40.413 [2024-11-17 02:57:48.803270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.413 [2024-11-17 02:57:48.803304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.413 qpair failed and we were unable to recover it.
00:37:40.413 [2024-11-17 02:57:48.803438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.413 [2024-11-17 02:57:48.803480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.413 qpair failed and we were unable to recover it.
00:37:40.413 [2024-11-17 02:57:48.803615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.413 [2024-11-17 02:57:48.803651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.413 qpair failed and we were unable to recover it.
00:37:40.413 [2024-11-17 02:57:48.803794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.413 [2024-11-17 02:57:48.803832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.413 qpair failed and we were unable to recover it.
00:37:40.413 [2024-11-17 02:57:48.803999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.413 [2024-11-17 02:57:48.804035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.413 qpair failed and we were unable to recover it.
00:37:40.413 [2024-11-17 02:57:48.804194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.413 [2024-11-17 02:57:48.804230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.413 qpair failed and we were unable to recover it.
00:37:40.413 [2024-11-17 02:57:48.804364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.413 [2024-11-17 02:57:48.804398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.413 qpair failed and we were unable to recover it.
00:37:40.413 [2024-11-17 02:57:48.804506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.413 [2024-11-17 02:57:48.804544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.413 qpair failed and we were unable to recover it.
00:37:40.413 [2024-11-17 02:57:48.804715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.413 [2024-11-17 02:57:48.804750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.413 qpair failed and we were unable to recover it.
00:37:40.413 [2024-11-17 02:57:48.804862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.413 [2024-11-17 02:57:48.804895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.413 qpair failed and we were unable to recover it.
00:37:40.413 [2024-11-17 02:57:48.805033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.413 [2024-11-17 02:57:48.805076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.413 qpair failed and we were unable to recover it.
00:37:40.413 [2024-11-17 02:57:48.805232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.413 [2024-11-17 02:57:48.805267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.413 qpair failed and we were unable to recover it.
00:37:40.413 [2024-11-17 02:57:48.805409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.413 [2024-11-17 02:57:48.805446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.413 qpair failed and we were unable to recover it.
00:37:40.413 [2024-11-17 02:57:48.805593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.413 [2024-11-17 02:57:48.805642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.413 qpair failed and we were unable to recover it.
00:37:40.413 [2024-11-17 02:57:48.805757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.413 [2024-11-17 02:57:48.805795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.413 qpair failed and we were unable to recover it.
00:37:40.413 [2024-11-17 02:57:48.805947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.413 [2024-11-17 02:57:48.805984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.413 qpair failed and we were unable to recover it.
00:37:40.413 [2024-11-17 02:57:48.806123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.413 [2024-11-17 02:57:48.806159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.413 qpair failed and we were unable to recover it.
00:37:40.413 [2024-11-17 02:57:48.806318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.413 [2024-11-17 02:57:48.806352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.413 qpair failed and we were unable to recover it.
00:37:40.413 [2024-11-17 02:57:48.806506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.413 [2024-11-17 02:57:48.806542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.413 qpair failed and we were unable to recover it.
00:37:40.413 [2024-11-17 02:57:48.806676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.413 [2024-11-17 02:57:48.806710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.413 qpair failed and we were unable to recover it.
00:37:40.413 [2024-11-17 02:57:48.806849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.413 [2024-11-17 02:57:48.806888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.413 qpair failed and we were unable to recover it.
00:37:40.413 [2024-11-17 02:57:48.807028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.413 [2024-11-17 02:57:48.807062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.413 qpair failed and we were unable to recover it.
00:37:40.413 [2024-11-17 02:57:48.807212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.414 [2024-11-17 02:57:48.807247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.414 qpair failed and we were unable to recover it.
00:37:40.414 [2024-11-17 02:57:48.807364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.414 [2024-11-17 02:57:48.807402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.414 qpair failed and we were unable to recover it.
00:37:40.414 [2024-11-17 02:57:48.807538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.414 [2024-11-17 02:57:48.807572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.414 qpair failed and we were unable to recover it.
00:37:40.414 [2024-11-17 02:57:48.807676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.414 [2024-11-17 02:57:48.807709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.414 qpair failed and we were unable to recover it.
00:37:40.414 [2024-11-17 02:57:48.807816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.414 [2024-11-17 02:57:48.807856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.414 qpair failed and we were unable to recover it.
00:37:40.414 [2024-11-17 02:57:48.808001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.414 [2024-11-17 02:57:48.808036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.414 qpair failed and we were unable to recover it.
00:37:40.414 [2024-11-17 02:57:48.808196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.414 [2024-11-17 02:57:48.808247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.414 qpair failed and we were unable to recover it.
00:37:40.414 [2024-11-17 02:57:48.808397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.414 [2024-11-17 02:57:48.808435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.414 qpair failed and we were unable to recover it.
00:37:40.414 [2024-11-17 02:57:48.808603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.414 [2024-11-17 02:57:48.808640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.414 qpair failed and we were unable to recover it.
00:37:40.414 [2024-11-17 02:57:48.808742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.414 [2024-11-17 02:57:48.808778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.414 qpair failed and we were unable to recover it.
00:37:40.414 [2024-11-17 02:57:48.808895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.414 [2024-11-17 02:57:48.808931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.414 qpair failed and we were unable to recover it.
00:37:40.414 [2024-11-17 02:57:48.809037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.414 [2024-11-17 02:57:48.809072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.414 qpair failed and we were unable to recover it.
00:37:40.414 [2024-11-17 02:57:48.809304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.414 [2024-11-17 02:57:48.809341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.414 qpair failed and we were unable to recover it.
00:37:40.414 [2024-11-17 02:57:48.809535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.414 [2024-11-17 02:57:48.809569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.414 qpair failed and we were unable to recover it.
00:37:40.414 [2024-11-17 02:57:48.809689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.414 [2024-11-17 02:57:48.809735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.414 qpair failed and we were unable to recover it.
00:37:40.414 [2024-11-17 02:57:48.809881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.414 [2024-11-17 02:57:48.809917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.414 qpair failed and we were unable to recover it.
00:37:40.414 [2024-11-17 02:57:48.810052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.414 [2024-11-17 02:57:48.810086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.414 qpair failed and we were unable to recover it.
00:37:40.414 [2024-11-17 02:57:48.810209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.414 [2024-11-17 02:57:48.810241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.695 qpair failed and we were unable to recover it.
00:37:40.695 [2024-11-17 02:57:48.810408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.695 [2024-11-17 02:57:48.810446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.695 qpair failed and we were unable to recover it.
00:37:40.695 [2024-11-17 02:57:48.810560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.695 [2024-11-17 02:57:48.810604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.695 qpair failed and we were unable to recover it.
00:37:40.695 [2024-11-17 02:57:48.810772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.695 [2024-11-17 02:57:48.810822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.695 qpair failed and we were unable to recover it.
00:37:40.695 [2024-11-17 02:57:48.810989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.695 [2024-11-17 02:57:48.811030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.695 qpair failed and we were unable to recover it.
00:37:40.695 [2024-11-17 02:57:48.811190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.695 [2024-11-17 02:57:48.811227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.695 qpair failed and we were unable to recover it.
00:37:40.695 [2024-11-17 02:57:48.811342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.695 [2024-11-17 02:57:48.811377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.695 qpair failed and we were unable to recover it.
00:37:40.695 [2024-11-17 02:57:48.811480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.695 [2024-11-17 02:57:48.811512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.695 qpair failed and we were unable to recover it.
00:37:40.695 [2024-11-17 02:57:48.811622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.695 [2024-11-17 02:57:48.811656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.695 qpair failed and we were unable to recover it.
00:37:40.695 [2024-11-17 02:57:48.811759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.695 [2024-11-17 02:57:48.811795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.695 qpair failed and we were unable to recover it.
00:37:40.695 [2024-11-17 02:57:48.811937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.695 [2024-11-17 02:57:48.811973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.695 qpair failed and we were unable to recover it.
00:37:40.695 [2024-11-17 02:57:48.812084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.695 [2024-11-17 02:57:48.812128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.696 qpair failed and we were unable to recover it.
00:37:40.696 [2024-11-17 02:57:48.812242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.696 [2024-11-17 02:57:48.812278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.696 qpair failed and we were unable to recover it.
00:37:40.696 [2024-11-17 02:57:48.812377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.696 [2024-11-17 02:57:48.812411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.696 qpair failed and we were unable to recover it.
00:37:40.696 [2024-11-17 02:57:48.812550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.696 [2024-11-17 02:57:48.812586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.696 qpair failed and we were unable to recover it.
00:37:40.696 [2024-11-17 02:57:48.812715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.696 [2024-11-17 02:57:48.812752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.696 qpair failed and we were unable to recover it.
00:37:40.696 [2024-11-17 02:57:48.812863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.696 [2024-11-17 02:57:48.812897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.696 qpair failed and we were unable to recover it.
00:37:40.696 [2024-11-17 02:57:48.813031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.696 [2024-11-17 02:57:48.813075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.696 qpair failed and we were unable to recover it.
00:37:40.696 [2024-11-17 02:57:48.813224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.696 [2024-11-17 02:57:48.813261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.696 qpair failed and we were unable to recover it.
00:37:40.696 [2024-11-17 02:57:48.813391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.696 [2024-11-17 02:57:48.813426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.696 qpair failed and we were unable to recover it.
00:37:40.696 [2024-11-17 02:57:48.813565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.696 [2024-11-17 02:57:48.813600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.696 qpair failed and we were unable to recover it.
00:37:40.696 [2024-11-17 02:57:48.813716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.696 [2024-11-17 02:57:48.813751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.696 qpair failed and we were unable to recover it.
00:37:40.696 [2024-11-17 02:57:48.813904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.696 [2024-11-17 02:57:48.813946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.696 qpair failed and we were unable to recover it.
00:37:40.696 [2024-11-17 02:57:48.814087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.696 [2024-11-17 02:57:48.814132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.696 qpair failed and we were unable to recover it.
00:37:40.696 [2024-11-17 02:57:48.814276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.696 [2024-11-17 02:57:48.814313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.696 qpair failed and we were unable to recover it.
00:37:40.696 [2024-11-17 02:57:48.814425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.696 [2024-11-17 02:57:48.814467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.696 qpair failed and we were unable to recover it.
00:37:40.696 [2024-11-17 02:57:48.814640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.696 [2024-11-17 02:57:48.814677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.696 qpair failed and we were unable to recover it.
00:37:40.696 [2024-11-17 02:57:48.814816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.696 [2024-11-17 02:57:48.814852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.696 qpair failed and we were unable to recover it.
00:37:40.696 [2024-11-17 02:57:48.814994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.696 [2024-11-17 02:57:48.815031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.696 qpair failed and we were unable to recover it.
00:37:40.696 [2024-11-17 02:57:48.815200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.696 [2024-11-17 02:57:48.815249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.696 qpair failed and we were unable to recover it.
00:37:40.696 [2024-11-17 02:57:48.815371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.696 [2024-11-17 02:57:48.815413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.696 qpair failed and we were unable to recover it.
00:37:40.696 [2024-11-17 02:57:48.815599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.696 [2024-11-17 02:57:48.815675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.696 qpair failed and we were unable to recover it.
00:37:40.696 [2024-11-17 02:57:48.815891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.696 [2024-11-17 02:57:48.815929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.696 qpair failed and we were unable to recover it.
00:37:40.696 [2024-11-17 02:57:48.816076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.696 [2024-11-17 02:57:48.816122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.696 qpair failed and we were unable to recover it.
00:37:40.696 [2024-11-17 02:57:48.816291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.696 [2024-11-17 02:57:48.816329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.696 qpair failed and we were unable to recover it.
00:37:40.696 [2024-11-17 02:57:48.816480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.696 [2024-11-17 02:57:48.816531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.696 qpair failed and we were unable to recover it.
00:37:40.696 [2024-11-17 02:57:48.816647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.696 [2024-11-17 02:57:48.816683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.696 qpair failed and we were unable to recover it.
00:37:40.696 [2024-11-17 02:57:48.816790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.696 [2024-11-17 02:57:48.816834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.696 qpair failed and we were unable to recover it. 00:37:40.696 [2024-11-17 02:57:48.816992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.696 [2024-11-17 02:57:48.817027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.696 qpair failed and we were unable to recover it. 00:37:40.696 [2024-11-17 02:57:48.817179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.696 [2024-11-17 02:57:48.817218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.696 qpair failed and we were unable to recover it. 00:37:40.696 [2024-11-17 02:57:48.817405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.696 [2024-11-17 02:57:48.817448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.696 qpair failed and we were unable to recover it. 00:37:40.696 [2024-11-17 02:57:48.817573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.696 [2024-11-17 02:57:48.817613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.696 qpair failed and we were unable to recover it. 
00:37:40.696 [2024-11-17 02:57:48.817798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.696 [2024-11-17 02:57:48.817842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.696 qpair failed and we were unable to recover it. 00:37:40.696 [2024-11-17 02:57:48.817991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.696 [2024-11-17 02:57:48.818027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.696 qpair failed and we were unable to recover it. 00:37:40.696 [2024-11-17 02:57:48.818162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.696 [2024-11-17 02:57:48.818198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.696 qpair failed and we were unable to recover it. 00:37:40.696 [2024-11-17 02:57:48.818335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.696 [2024-11-17 02:57:48.818373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.696 qpair failed and we were unable to recover it. 00:37:40.696 [2024-11-17 02:57:48.818537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.696 [2024-11-17 02:57:48.818593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.696 qpair failed and we were unable to recover it. 
00:37:40.696 [2024-11-17 02:57:48.818799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.696 [2024-11-17 02:57:48.818855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.696 qpair failed and we were unable to recover it. 00:37:40.697 [2024-11-17 02:57:48.818987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.697 [2024-11-17 02:57:48.819040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.697 qpair failed and we were unable to recover it. 00:37:40.697 [2024-11-17 02:57:48.819237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.697 [2024-11-17 02:57:48.819273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.697 qpair failed and we were unable to recover it. 00:37:40.697 [2024-11-17 02:57:48.819434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.697 [2024-11-17 02:57:48.819478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.697 qpair failed and we were unable to recover it. 00:37:40.697 [2024-11-17 02:57:48.819593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.697 [2024-11-17 02:57:48.819628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.697 qpair failed and we were unable to recover it. 
00:37:40.697 [2024-11-17 02:57:48.819775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.697 [2024-11-17 02:57:48.819850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.697 qpair failed and we were unable to recover it. 00:37:40.697 [2024-11-17 02:57:48.819984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.697 [2024-11-17 02:57:48.820030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.697 qpair failed and we were unable to recover it. 00:37:40.697 [2024-11-17 02:57:48.820232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.697 [2024-11-17 02:57:48.820292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.697 qpair failed and we were unable to recover it. 00:37:40.697 [2024-11-17 02:57:48.820416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.697 [2024-11-17 02:57:48.820454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.697 qpair failed and we were unable to recover it. 00:37:40.697 [2024-11-17 02:57:48.820605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.697 [2024-11-17 02:57:48.820641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.697 qpair failed and we were unable to recover it. 
00:37:40.697 [2024-11-17 02:57:48.820776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.697 [2024-11-17 02:57:48.820812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.697 qpair failed and we were unable to recover it. 00:37:40.697 [2024-11-17 02:57:48.820976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.697 [2024-11-17 02:57:48.821019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.697 qpair failed and we were unable to recover it. 00:37:40.697 [2024-11-17 02:57:48.821146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.697 [2024-11-17 02:57:48.821200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.697 qpair failed and we were unable to recover it. 00:37:40.697 [2024-11-17 02:57:48.821325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.697 [2024-11-17 02:57:48.821362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.697 qpair failed and we were unable to recover it. 00:37:40.697 [2024-11-17 02:57:48.821498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.697 [2024-11-17 02:57:48.821541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.697 qpair failed and we were unable to recover it. 
00:37:40.697 [2024-11-17 02:57:48.821674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.697 [2024-11-17 02:57:48.821707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.697 qpair failed and we were unable to recover it. 00:37:40.697 [2024-11-17 02:57:48.821820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.697 [2024-11-17 02:57:48.821870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.697 qpair failed and we were unable to recover it. 00:37:40.697 [2024-11-17 02:57:48.822054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.697 [2024-11-17 02:57:48.822089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.697 qpair failed and we were unable to recover it. 00:37:40.697 [2024-11-17 02:57:48.822241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.697 [2024-11-17 02:57:48.822275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.697 qpair failed and we were unable to recover it. 00:37:40.697 [2024-11-17 02:57:48.822407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.697 [2024-11-17 02:57:48.822444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.697 qpair failed and we were unable to recover it. 
00:37:40.697 [2024-11-17 02:57:48.822583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.697 [2024-11-17 02:57:48.822636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.697 qpair failed and we were unable to recover it. 00:37:40.697 [2024-11-17 02:57:48.822811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.697 [2024-11-17 02:57:48.822871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.697 qpair failed and we were unable to recover it. 00:37:40.697 [2024-11-17 02:57:48.823033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.697 [2024-11-17 02:57:48.823094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.697 qpair failed and we were unable to recover it. 00:37:40.697 [2024-11-17 02:57:48.823224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.697 [2024-11-17 02:57:48.823258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.697 qpair failed and we were unable to recover it. 00:37:40.697 [2024-11-17 02:57:48.823391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.697 [2024-11-17 02:57:48.823425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.697 qpair failed and we were unable to recover it. 
00:37:40.697 [2024-11-17 02:57:48.823567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.697 [2024-11-17 02:57:48.823603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.697 qpair failed and we were unable to recover it. 00:37:40.697 [2024-11-17 02:57:48.823736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.697 [2024-11-17 02:57:48.823787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.697 qpair failed and we were unable to recover it. 00:37:40.697 [2024-11-17 02:57:48.823951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.697 [2024-11-17 02:57:48.823986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.697 qpair failed and we were unable to recover it. 00:37:40.697 [2024-11-17 02:57:48.824127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.697 [2024-11-17 02:57:48.824174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.697 qpair failed and we were unable to recover it. 00:37:40.697 [2024-11-17 02:57:48.824336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.697 [2024-11-17 02:57:48.824371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.697 qpair failed and we were unable to recover it. 
00:37:40.697 [2024-11-17 02:57:48.824485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.697 [2024-11-17 02:57:48.824520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.697 qpair failed and we were unable to recover it. 00:37:40.697 [2024-11-17 02:57:48.824636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.697 [2024-11-17 02:57:48.824671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.697 qpair failed and we were unable to recover it. 00:37:40.697 [2024-11-17 02:57:48.824829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.697 [2024-11-17 02:57:48.824868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.697 qpair failed and we were unable to recover it. 00:37:40.697 [2024-11-17 02:57:48.825014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.697 [2024-11-17 02:57:48.825053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.697 qpair failed and we were unable to recover it. 00:37:40.697 [2024-11-17 02:57:48.825247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.698 [2024-11-17 02:57:48.825284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.698 qpair failed and we were unable to recover it. 
00:37:40.698 [2024-11-17 02:57:48.825394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.698 [2024-11-17 02:57:48.825434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.698 qpair failed and we were unable to recover it. 00:37:40.698 [2024-11-17 02:57:48.825584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.698 [2024-11-17 02:57:48.825618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.698 qpair failed and we were unable to recover it. 00:37:40.698 [2024-11-17 02:57:48.825751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.698 [2024-11-17 02:57:48.825786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.698 qpair failed and we were unable to recover it. 00:37:40.698 [2024-11-17 02:57:48.825910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.698 [2024-11-17 02:57:48.825955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.698 qpair failed and we were unable to recover it. 00:37:40.698 [2024-11-17 02:57:48.826120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.698 [2024-11-17 02:57:48.826184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.698 qpair failed and we were unable to recover it. 
00:37:40.698 [2024-11-17 02:57:48.826322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.698 [2024-11-17 02:57:48.826357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.698 qpair failed and we were unable to recover it. 00:37:40.698 [2024-11-17 02:57:48.826475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.698 [2024-11-17 02:57:48.826511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.698 qpair failed and we were unable to recover it. 00:37:40.698 [2024-11-17 02:57:48.826644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.698 [2024-11-17 02:57:48.826678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.698 qpair failed and we were unable to recover it. 00:37:40.698 [2024-11-17 02:57:48.826839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.698 [2024-11-17 02:57:48.826885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.698 qpair failed and we were unable to recover it. 00:37:40.698 [2024-11-17 02:57:48.827043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.698 [2024-11-17 02:57:48.827078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.698 qpair failed and we were unable to recover it. 
00:37:40.698 [2024-11-17 02:57:48.827235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.698 [2024-11-17 02:57:48.827285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.698 qpair failed and we were unable to recover it. 00:37:40.698 [2024-11-17 02:57:48.827431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.698 [2024-11-17 02:57:48.827480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.698 qpair failed and we were unable to recover it. 00:37:40.698 [2024-11-17 02:57:48.827629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.698 [2024-11-17 02:57:48.827665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.698 qpair failed and we were unable to recover it. 00:37:40.698 [2024-11-17 02:57:48.827798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.698 [2024-11-17 02:57:48.827834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.698 qpair failed and we were unable to recover it. 00:37:40.698 [2024-11-17 02:57:48.827957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.698 [2024-11-17 02:57:48.827994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.698 qpair failed and we were unable to recover it. 
00:37:40.698 [2024-11-17 02:57:48.828157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.698 [2024-11-17 02:57:48.828194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.698 qpair failed and we were unable to recover it. 00:37:40.698 [2024-11-17 02:57:48.828310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.698 [2024-11-17 02:57:48.828347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.698 qpair failed and we were unable to recover it. 00:37:40.698 [2024-11-17 02:57:48.828458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.698 [2024-11-17 02:57:48.828491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.698 qpair failed and we were unable to recover it. 00:37:40.698 [2024-11-17 02:57:48.828621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.698 [2024-11-17 02:57:48.828655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.698 qpair failed and we were unable to recover it. 00:37:40.698 [2024-11-17 02:57:48.828801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.698 [2024-11-17 02:57:48.828848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.698 qpair failed and we were unable to recover it. 
00:37:40.698 [2024-11-17 02:57:48.828988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.698 [2024-11-17 02:57:48.829026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.698 qpair failed and we were unable to recover it. 00:37:40.698 [2024-11-17 02:57:48.829167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.698 [2024-11-17 02:57:48.829203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.698 qpair failed and we were unable to recover it. 00:37:40.698 [2024-11-17 02:57:48.829344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.698 [2024-11-17 02:57:48.829380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.698 qpair failed and we were unable to recover it. 00:37:40.698 [2024-11-17 02:57:48.829518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.698 [2024-11-17 02:57:48.829553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.698 qpair failed and we were unable to recover it. 00:37:40.698 [2024-11-17 02:57:48.829695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.698 [2024-11-17 02:57:48.829731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.698 qpair failed and we were unable to recover it. 
00:37:40.698 [2024-11-17 02:57:48.829896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.698 [2024-11-17 02:57:48.829933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.698 qpair failed and we were unable to recover it. 00:37:40.698 [2024-11-17 02:57:48.830086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.698 [2024-11-17 02:57:48.830145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.698 qpair failed and we were unable to recover it. 00:37:40.698 [2024-11-17 02:57:48.830311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.698 [2024-11-17 02:57:48.830360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.698 qpair failed and we were unable to recover it. 00:37:40.698 [2024-11-17 02:57:48.830536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.698 [2024-11-17 02:57:48.830572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.698 qpair failed and we were unable to recover it. 00:37:40.698 [2024-11-17 02:57:48.830704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.698 [2024-11-17 02:57:48.830740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.698 qpair failed and we were unable to recover it. 
00:37:40.698 [2024-11-17 02:57:48.830906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.698 [2024-11-17 02:57:48.830944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.698 qpair failed and we were unable to recover it.
[... same connect() failure (errno = 111, connection refused) repeated against addr=10.0.0.2, port=4420 for tqpairs 0x6150001f2f00, 0x61500021ff00 and 0x6150001ffe80, 2024-11-17 02:57:48.831074 through 02:57:48.852000 ...]
00:37:40.702 [2024-11-17 02:57:48.852184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.702 [2024-11-17 02:57:48.852220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.702 qpair failed and we were unable to recover it.
00:37:40.702 [2024-11-17 02:57:48.852364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.702 [2024-11-17 02:57:48.852400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.702 qpair failed and we were unable to recover it. 00:37:40.702 [2024-11-17 02:57:48.852503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.702 [2024-11-17 02:57:48.852537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.702 qpair failed and we were unable to recover it. 00:37:40.702 [2024-11-17 02:57:48.852675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.702 [2024-11-17 02:57:48.852714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.702 qpair failed and we were unable to recover it. 00:37:40.702 [2024-11-17 02:57:48.852865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.702 [2024-11-17 02:57:48.852901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.702 qpair failed and we were unable to recover it. 00:37:40.702 [2024-11-17 02:57:48.853015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.702 [2024-11-17 02:57:48.853051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.702 qpair failed and we were unable to recover it. 
00:37:40.702 [2024-11-17 02:57:48.853225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.702 [2024-11-17 02:57:48.853261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.702 qpair failed and we were unable to recover it. 00:37:40.702 [2024-11-17 02:57:48.853392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.702 [2024-11-17 02:57:48.853426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.702 qpair failed and we were unable to recover it. 00:37:40.702 [2024-11-17 02:57:48.853563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.702 [2024-11-17 02:57:48.853598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.702 qpair failed and we were unable to recover it. 00:37:40.702 [2024-11-17 02:57:48.853730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.702 [2024-11-17 02:57:48.853768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.702 qpair failed and we were unable to recover it. 00:37:40.702 [2024-11-17 02:57:48.853913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.702 [2024-11-17 02:57:48.853951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.702 qpair failed and we were unable to recover it. 
00:37:40.702 [2024-11-17 02:57:48.854061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.702 [2024-11-17 02:57:48.854107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.702 qpair failed and we were unable to recover it. 00:37:40.702 [2024-11-17 02:57:48.854244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.702 [2024-11-17 02:57:48.854278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.702 qpair failed and we were unable to recover it. 00:37:40.702 [2024-11-17 02:57:48.854385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.702 [2024-11-17 02:57:48.854418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.702 qpair failed and we were unable to recover it. 00:37:40.702 [2024-11-17 02:57:48.854519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.702 [2024-11-17 02:57:48.854555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.702 qpair failed and we were unable to recover it. 00:37:40.702 [2024-11-17 02:57:48.854694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.702 [2024-11-17 02:57:48.854728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.702 qpair failed and we were unable to recover it. 
00:37:40.702 [2024-11-17 02:57:48.854885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.702 [2024-11-17 02:57:48.854920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.702 qpair failed and we were unable to recover it. 00:37:40.702 [2024-11-17 02:57:48.855106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.702 [2024-11-17 02:57:48.855148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.702 qpair failed and we were unable to recover it. 00:37:40.702 [2024-11-17 02:57:48.855295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.702 [2024-11-17 02:57:48.855334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.702 qpair failed and we were unable to recover it. 00:37:40.702 [2024-11-17 02:57:48.855450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.702 [2024-11-17 02:57:48.855485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.702 qpair failed and we were unable to recover it. 00:37:40.702 [2024-11-17 02:57:48.855645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.702 [2024-11-17 02:57:48.855681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.702 qpair failed and we were unable to recover it. 
00:37:40.702 [2024-11-17 02:57:48.855817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.702 [2024-11-17 02:57:48.855852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.702 qpair failed and we were unable to recover it. 00:37:40.702 [2024-11-17 02:57:48.855970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.702 [2024-11-17 02:57:48.856007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.702 qpair failed and we were unable to recover it. 00:37:40.702 [2024-11-17 02:57:48.856145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.702 [2024-11-17 02:57:48.856194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.702 qpair failed and we were unable to recover it. 00:37:40.702 [2024-11-17 02:57:48.856347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.702 [2024-11-17 02:57:48.856397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.702 qpair failed and we were unable to recover it. 00:37:40.702 [2024-11-17 02:57:48.856523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.702 [2024-11-17 02:57:48.856562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.702 qpair failed and we were unable to recover it. 
00:37:40.702 [2024-11-17 02:57:48.856671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.702 [2024-11-17 02:57:48.856708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.702 qpair failed and we were unable to recover it. 00:37:40.702 [2024-11-17 02:57:48.856821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.702 [2024-11-17 02:57:48.856857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.702 qpair failed and we were unable to recover it. 00:37:40.702 [2024-11-17 02:57:48.857020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.702 [2024-11-17 02:57:48.857055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.702 qpair failed and we were unable to recover it. 00:37:40.702 [2024-11-17 02:57:48.857177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.702 [2024-11-17 02:57:48.857214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.703 qpair failed and we were unable to recover it. 00:37:40.703 [2024-11-17 02:57:48.857362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.703 [2024-11-17 02:57:48.857399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.703 qpair failed and we were unable to recover it. 
00:37:40.703 [2024-11-17 02:57:48.857550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.703 [2024-11-17 02:57:48.857585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.703 qpair failed and we were unable to recover it. 00:37:40.703 [2024-11-17 02:57:48.857713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.703 [2024-11-17 02:57:48.857748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.703 qpair failed and we were unable to recover it. 00:37:40.703 [2024-11-17 02:57:48.857910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.703 [2024-11-17 02:57:48.857946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.703 qpair failed and we were unable to recover it. 00:37:40.703 [2024-11-17 02:57:48.858047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.703 [2024-11-17 02:57:48.858082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.703 qpair failed and we were unable to recover it. 00:37:40.703 [2024-11-17 02:57:48.858239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.703 [2024-11-17 02:57:48.858275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.703 qpair failed and we were unable to recover it. 
00:37:40.703 [2024-11-17 02:57:48.858418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.703 [2024-11-17 02:57:48.858455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.703 qpair failed and we were unable to recover it. 00:37:40.703 [2024-11-17 02:57:48.858615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.703 [2024-11-17 02:57:48.858649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.703 qpair failed and we were unable to recover it. 00:37:40.703 [2024-11-17 02:57:48.858791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.703 [2024-11-17 02:57:48.858827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.703 qpair failed and we were unable to recover it. 00:37:40.703 [2024-11-17 02:57:48.858992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.703 [2024-11-17 02:57:48.859027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.703 qpair failed and we were unable to recover it. 00:37:40.703 [2024-11-17 02:57:48.859171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.703 [2024-11-17 02:57:48.859208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.703 qpair failed and we were unable to recover it. 
00:37:40.703 [2024-11-17 02:57:48.859313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.703 [2024-11-17 02:57:48.859349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.703 qpair failed and we were unable to recover it. 00:37:40.703 [2024-11-17 02:57:48.859457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.703 [2024-11-17 02:57:48.859493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.703 qpair failed and we were unable to recover it. 00:37:40.703 [2024-11-17 02:57:48.859653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.703 [2024-11-17 02:57:48.859694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.703 qpair failed and we were unable to recover it. 00:37:40.703 [2024-11-17 02:57:48.859831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.703 [2024-11-17 02:57:48.859867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.703 qpair failed and we were unable to recover it. 00:37:40.703 [2024-11-17 02:57:48.859972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.703 [2024-11-17 02:57:48.860004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.703 qpair failed and we were unable to recover it. 
00:37:40.703 [2024-11-17 02:57:48.860122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.703 [2024-11-17 02:57:48.860163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.703 qpair failed and we were unable to recover it. 00:37:40.703 [2024-11-17 02:57:48.860305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.703 [2024-11-17 02:57:48.860339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.703 qpair failed and we were unable to recover it. 00:37:40.703 [2024-11-17 02:57:48.860495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.703 [2024-11-17 02:57:48.860533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.703 qpair failed and we were unable to recover it. 00:37:40.703 [2024-11-17 02:57:48.860696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.703 [2024-11-17 02:57:48.860731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.703 qpair failed and we were unable to recover it. 00:37:40.703 [2024-11-17 02:57:48.860829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.703 [2024-11-17 02:57:48.860861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.703 qpair failed and we were unable to recover it. 
00:37:40.703 [2024-11-17 02:57:48.860978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.703 [2024-11-17 02:57:48.861028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.703 qpair failed and we were unable to recover it. 00:37:40.703 [2024-11-17 02:57:48.861122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.703 [2024-11-17 02:57:48.861156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.703 qpair failed and we were unable to recover it. 00:37:40.703 [2024-11-17 02:57:48.861316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.703 [2024-11-17 02:57:48.861353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.703 qpair failed and we were unable to recover it. 00:37:40.703 [2024-11-17 02:57:48.861521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.703 [2024-11-17 02:57:48.861556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.703 qpair failed and we were unable to recover it. 00:37:40.703 [2024-11-17 02:57:48.861670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.703 [2024-11-17 02:57:48.861705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.703 qpair failed and we were unable to recover it. 
00:37:40.703 [2024-11-17 02:57:48.861814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.703 [2024-11-17 02:57:48.861850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.703 qpair failed and we were unable to recover it. 00:37:40.703 [2024-11-17 02:57:48.861961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.703 [2024-11-17 02:57:48.861995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.703 qpair failed and we were unable to recover it. 00:37:40.703 [2024-11-17 02:57:48.862111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.703 [2024-11-17 02:57:48.862146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.703 qpair failed and we were unable to recover it. 00:37:40.703 [2024-11-17 02:57:48.862277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.703 [2024-11-17 02:57:48.862310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.703 qpair failed and we were unable to recover it. 00:37:40.703 [2024-11-17 02:57:48.862460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.703 [2024-11-17 02:57:48.862497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.703 qpair failed and we were unable to recover it. 
00:37:40.703 [2024-11-17 02:57:48.862660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.703 [2024-11-17 02:57:48.862696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.703 qpair failed and we were unable to recover it. 00:37:40.703 [2024-11-17 02:57:48.862860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.703 [2024-11-17 02:57:48.862895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.703 qpair failed and we were unable to recover it. 00:37:40.703 [2024-11-17 02:57:48.863030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.703 [2024-11-17 02:57:48.863065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.703 qpair failed and we were unable to recover it. 00:37:40.703 [2024-11-17 02:57:48.863199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.703 [2024-11-17 02:57:48.863233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.703 qpair failed and we were unable to recover it. 00:37:40.703 [2024-11-17 02:57:48.863349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.704 [2024-11-17 02:57:48.863386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.704 qpair failed and we were unable to recover it. 
00:37:40.704 [2024-11-17 02:57:48.863536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.704 [2024-11-17 02:57:48.863572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.704 qpair failed and we were unable to recover it. 00:37:40.704 [2024-11-17 02:57:48.863709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.704 [2024-11-17 02:57:48.863744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.704 qpair failed and we were unable to recover it. 00:37:40.704 [2024-11-17 02:57:48.863879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.704 [2024-11-17 02:57:48.863915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.704 qpair failed and we were unable to recover it. 00:37:40.704 [2024-11-17 02:57:48.864081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.704 [2024-11-17 02:57:48.864136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.704 qpair failed and we were unable to recover it. 00:37:40.704 [2024-11-17 02:57:48.864298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.704 [2024-11-17 02:57:48.864337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.704 qpair failed and we were unable to recover it. 
00:37:40.704 [2024-11-17 02:57:48.864511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.704 [2024-11-17 02:57:48.864548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.704 qpair failed and we were unable to recover it. 00:37:40.704 [2024-11-17 02:57:48.864686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.704 [2024-11-17 02:57:48.864721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.704 qpair failed and we were unable to recover it. 00:37:40.704 [2024-11-17 02:57:48.864856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.704 [2024-11-17 02:57:48.864891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.704 qpair failed and we were unable to recover it. 00:37:40.704 [2024-11-17 02:57:48.864990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.704 [2024-11-17 02:57:48.865024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.704 qpair failed and we were unable to recover it. 00:37:40.704 [2024-11-17 02:57:48.865165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.704 [2024-11-17 02:57:48.865200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.704 qpair failed and we were unable to recover it. 
00:37:40.704 [2024-11-17 02:57:48.865331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.704 [2024-11-17 02:57:48.865369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.704 qpair failed and we were unable to recover it. 00:37:40.704 [2024-11-17 02:57:48.865508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.704 [2024-11-17 02:57:48.865542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.704 qpair failed and we were unable to recover it. 00:37:40.704 [2024-11-17 02:57:48.865669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.704 [2024-11-17 02:57:48.865703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.704 qpair failed and we were unable to recover it. 00:37:40.704 [2024-11-17 02:57:48.865846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.704 [2024-11-17 02:57:48.865887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.704 qpair failed and we were unable to recover it. 00:37:40.704 [2024-11-17 02:57:48.866021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.704 [2024-11-17 02:57:48.866059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.704 qpair failed and we were unable to recover it. 
00:37:40.704 [2024-11-17 02:57:48.866210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.704 [2024-11-17 02:57:48.866248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.704 qpair failed and we were unable to recover it. 00:37:40.704 [2024-11-17 02:57:48.866420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.704 [2024-11-17 02:57:48.866455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.704 qpair failed and we were unable to recover it. 00:37:40.704 [2024-11-17 02:57:48.866564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.704 [2024-11-17 02:57:48.866605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.704 qpair failed and we were unable to recover it. 00:37:40.704 [2024-11-17 02:57:48.866742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.704 [2024-11-17 02:57:48.866776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.704 qpair failed and we were unable to recover it. 00:37:40.704 [2024-11-17 02:57:48.866936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.704 [2024-11-17 02:57:48.866970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.704 qpair failed and we were unable to recover it. 
00:37:40.704 [2024-11-17 02:57:48.867110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.704 [2024-11-17 02:57:48.867154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.704 qpair failed and we were unable to recover it. 00:37:40.704 [2024-11-17 02:57:48.867295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.704 [2024-11-17 02:57:48.867329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.704 qpair failed and we were unable to recover it. 00:37:40.704 [2024-11-17 02:57:48.867464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.704 [2024-11-17 02:57:48.867497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.704 qpair failed and we were unable to recover it. 00:37:40.704 [2024-11-17 02:57:48.867632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.704 [2024-11-17 02:57:48.867676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.704 qpair failed and we were unable to recover it. 00:37:40.704 [2024-11-17 02:57:48.867799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.704 [2024-11-17 02:57:48.867834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.704 qpair failed and we were unable to recover it. 
00:37:40.704 [2024-11-17 02:57:48.867969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.704 [2024-11-17 02:57:48.868004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.704 qpair failed and we were unable to recover it. 00:37:40.704 [2024-11-17 02:57:48.868140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.704 [2024-11-17 02:57:48.868185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.704 qpair failed and we were unable to recover it. 00:37:40.704 [2024-11-17 02:57:48.868327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.704 [2024-11-17 02:57:48.868362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.704 qpair failed and we were unable to recover it. 00:37:40.704 [2024-11-17 02:57:48.868471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.704 [2024-11-17 02:57:48.868505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.704 qpair failed and we were unable to recover it. 00:37:40.704 [2024-11-17 02:57:48.868668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.704 [2024-11-17 02:57:48.868703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.704 qpair failed and we were unable to recover it. 
00:37:40.704 [2024-11-17 02:57:48.868845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.704 [2024-11-17 02:57:48.868879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.704 qpair failed and we were unable to recover it. 00:37:40.704 [2024-11-17 02:57:48.869022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.705 [2024-11-17 02:57:48.869055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.705 qpair failed and we were unable to recover it. 00:37:40.705 [2024-11-17 02:57:48.869223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.705 [2024-11-17 02:57:48.869274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.705 qpair failed and we were unable to recover it. 00:37:40.705 [2024-11-17 02:57:48.869408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.705 [2024-11-17 02:57:48.869458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.705 qpair failed and we were unable to recover it. 00:37:40.705 [2024-11-17 02:57:48.869632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.705 [2024-11-17 02:57:48.869670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.705 qpair failed and we were unable to recover it. 
00:37:40.705 [2024-11-17 02:57:48.869832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.705 [2024-11-17 02:57:48.869891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.705 qpair failed and we were unable to recover it. 00:37:40.705 [2024-11-17 02:57:48.870066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.705 [2024-11-17 02:57:48.870121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.705 qpair failed and we were unable to recover it. 00:37:40.705 [2024-11-17 02:57:48.870250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.705 [2024-11-17 02:57:48.870284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.705 qpair failed and we were unable to recover it. 00:37:40.705 [2024-11-17 02:57:48.870407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.705 [2024-11-17 02:57:48.870442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.705 qpair failed and we were unable to recover it. 00:37:40.705 [2024-11-17 02:57:48.870577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.705 [2024-11-17 02:57:48.870612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.705 qpair failed and we were unable to recover it. 
00:37:40.705 [2024-11-17 02:57:48.870742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.705 [2024-11-17 02:57:48.870776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.705 qpair failed and we were unable to recover it. 00:37:40.705 [2024-11-17 02:57:48.870874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.705 [2024-11-17 02:57:48.870908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.705 qpair failed and we were unable to recover it. 00:37:40.705 [2024-11-17 02:57:48.871046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.705 [2024-11-17 02:57:48.871087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.705 qpair failed and we were unable to recover it. 00:37:40.705 [2024-11-17 02:57:48.871213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.705 [2024-11-17 02:57:48.871251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.705 qpair failed and we were unable to recover it. 00:37:40.705 [2024-11-17 02:57:48.871379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.705 [2024-11-17 02:57:48.871430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.705 qpair failed and we were unable to recover it. 
00:37:40.705 [2024-11-17 02:57:48.871548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.705 [2024-11-17 02:57:48.871585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.705 qpair failed and we were unable to recover it. 00:37:40.705 [2024-11-17 02:57:48.871723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.705 [2024-11-17 02:57:48.871759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.705 qpair failed and we were unable to recover it. 00:37:40.705 [2024-11-17 02:57:48.871898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.705 [2024-11-17 02:57:48.871935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.705 qpair failed and we were unable to recover it. 00:37:40.705 [2024-11-17 02:57:48.872074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.705 [2024-11-17 02:57:48.872121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.705 qpair failed and we were unable to recover it. 00:37:40.705 [2024-11-17 02:57:48.872265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.705 [2024-11-17 02:57:48.872299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.705 qpair failed and we were unable to recover it. 
00:37:40.705 [2024-11-17 02:57:48.872485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.705 [2024-11-17 02:57:48.872535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.705 qpair failed and we were unable to recover it. 00:37:40.705 [2024-11-17 02:57:48.872658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.705 [2024-11-17 02:57:48.872697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.705 qpair failed and we were unable to recover it. 00:37:40.705 [2024-11-17 02:57:48.872851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.705 [2024-11-17 02:57:48.872916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.705 qpair failed and we were unable to recover it. 00:37:40.705 [2024-11-17 02:57:48.873041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.705 [2024-11-17 02:57:48.873081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.705 qpair failed and we were unable to recover it. 00:37:40.705 [2024-11-17 02:57:48.873261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.705 [2024-11-17 02:57:48.873297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.705 qpair failed and we were unable to recover it. 
00:37:40.705 [2024-11-17 02:57:48.873406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.705 [2024-11-17 02:57:48.873441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.705 qpair failed and we were unable to recover it. 00:37:40.705 [2024-11-17 02:57:48.873600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.705 [2024-11-17 02:57:48.873636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.705 qpair failed and we were unable to recover it. 00:37:40.705 [2024-11-17 02:57:48.873747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.705 [2024-11-17 02:57:48.873786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.705 qpair failed and we were unable to recover it. 00:37:40.705 [2024-11-17 02:57:48.873946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.705 [2024-11-17 02:57:48.873981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.705 qpair failed and we were unable to recover it. 00:37:40.705 [2024-11-17 02:57:48.874178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.705 [2024-11-17 02:57:48.874214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.705 qpair failed and we were unable to recover it. 
00:37:40.705 [2024-11-17 02:57:48.874371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.705 [2024-11-17 02:57:48.874424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.705 qpair failed and we were unable to recover it. 00:37:40.705 [2024-11-17 02:57:48.874600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.705 [2024-11-17 02:57:48.874638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.705 qpair failed and we were unable to recover it. 00:37:40.705 [2024-11-17 02:57:48.874776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.706 [2024-11-17 02:57:48.874812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.706 qpair failed and we were unable to recover it. 00:37:40.706 [2024-11-17 02:57:48.874920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.706 [2024-11-17 02:57:48.874956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.706 qpair failed and we were unable to recover it. 00:37:40.706 [2024-11-17 02:57:48.875084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.706 [2024-11-17 02:57:48.875129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.706 qpair failed and we were unable to recover it. 
00:37:40.706 [2024-11-17 02:57:48.875287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.706 [2024-11-17 02:57:48.875329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.706 qpair failed and we were unable to recover it. 00:37:40.706 [2024-11-17 02:57:48.875476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.706 [2024-11-17 02:57:48.875511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.706 qpair failed and we were unable to recover it. 00:37:40.706 [2024-11-17 02:57:48.875612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.706 [2024-11-17 02:57:48.875646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.706 qpair failed and we were unable to recover it. 00:37:40.706 [2024-11-17 02:57:48.875743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.706 [2024-11-17 02:57:48.875779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.706 qpair failed and we were unable to recover it. 00:37:40.706 [2024-11-17 02:57:48.875888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.706 [2024-11-17 02:57:48.875942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.706 qpair failed and we were unable to recover it. 
00:37:40.706 [2024-11-17 02:57:48.876108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.706 [2024-11-17 02:57:48.876143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.706 qpair failed and we were unable to recover it. 00:37:40.706 [2024-11-17 02:57:48.876286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.706 [2024-11-17 02:57:48.876336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.706 qpair failed and we were unable to recover it. 00:37:40.706 [2024-11-17 02:57:48.876471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.706 [2024-11-17 02:57:48.876510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.706 qpair failed and we were unable to recover it. 00:37:40.706 [2024-11-17 02:57:48.876626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.706 [2024-11-17 02:57:48.876662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.706 qpair failed and we were unable to recover it. 00:37:40.706 [2024-11-17 02:57:48.876797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.706 [2024-11-17 02:57:48.876833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.706 qpair failed and we were unable to recover it. 
00:37:40.706 [2024-11-17 02:57:48.876971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.706 [2024-11-17 02:57:48.877006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.706 qpair failed and we were unable to recover it. 00:37:40.706 [2024-11-17 02:57:48.877147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.706 [2024-11-17 02:57:48.877184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.706 qpair failed and we were unable to recover it. 00:37:40.706 [2024-11-17 02:57:48.877330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.706 [2024-11-17 02:57:48.877367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.706 qpair failed and we were unable to recover it. 00:37:40.706 [2024-11-17 02:57:48.877485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.706 [2024-11-17 02:57:48.877525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.706 qpair failed and we were unable to recover it. 00:37:40.706 [2024-11-17 02:57:48.877668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.706 [2024-11-17 02:57:48.877704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.706 qpair failed and we were unable to recover it. 
00:37:40.706 [2024-11-17 02:57:48.877841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.706 [2024-11-17 02:57:48.877877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.706 qpair failed and we were unable to recover it. 00:37:40.706 [2024-11-17 02:57:48.878041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.706 [2024-11-17 02:57:48.878092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.706 qpair failed and we were unable to recover it. 00:37:40.706 [2024-11-17 02:57:48.878247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.706 [2024-11-17 02:57:48.878287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.706 qpair failed and we were unable to recover it. 00:37:40.706 [2024-11-17 02:57:48.878438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.706 [2024-11-17 02:57:48.878476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.706 qpair failed and we were unable to recover it. 00:37:40.706 [2024-11-17 02:57:48.878619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.706 [2024-11-17 02:57:48.878655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.706 qpair failed and we were unable to recover it. 
00:37:40.706 [2024-11-17 02:57:48.878816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.706 [2024-11-17 02:57:48.878851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.706 qpair failed and we were unable to recover it. 00:37:40.706 [2024-11-17 02:57:48.878975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.706 [2024-11-17 02:57:48.879012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.706 qpair failed and we were unable to recover it. 00:37:40.706 [2024-11-17 02:57:48.879147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.706 [2024-11-17 02:57:48.879195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.706 qpair failed and we were unable to recover it. 00:37:40.706 [2024-11-17 02:57:48.879303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.706 [2024-11-17 02:57:48.879339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.706 qpair failed and we were unable to recover it. 00:37:40.706 [2024-11-17 02:57:48.879493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.706 [2024-11-17 02:57:48.879531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.706 qpair failed and we were unable to recover it. 
00:37:40.706 [2024-11-17 02:57:48.879676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.706 [2024-11-17 02:57:48.879714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.706 qpair failed and we were unable to recover it. 00:37:40.706 [2024-11-17 02:57:48.879849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.706 [2024-11-17 02:57:48.879885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.706 qpair failed and we were unable to recover it. 00:37:40.706 [2024-11-17 02:57:48.880017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.706 [2024-11-17 02:57:48.880052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.706 qpair failed and we were unable to recover it. 00:37:40.706 [2024-11-17 02:57:48.880195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.706 [2024-11-17 02:57:48.880232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.706 qpair failed and we were unable to recover it. 00:37:40.706 [2024-11-17 02:57:48.880369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.706 [2024-11-17 02:57:48.880403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.706 qpair failed and we were unable to recover it. 
00:37:40.706 [2024-11-17 02:57:48.880536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.706 [2024-11-17 02:57:48.880578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.706 qpair failed and we were unable to recover it. 00:37:40.707 [2024-11-17 02:57:48.880720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.707 [2024-11-17 02:57:48.880757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.707 qpair failed and we were unable to recover it. 00:37:40.707 [2024-11-17 02:57:48.880892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.707 [2024-11-17 02:57:48.880933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.707 qpair failed and we were unable to recover it. 00:37:40.707 [2024-11-17 02:57:48.881073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.707 [2024-11-17 02:57:48.881120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.707 qpair failed and we were unable to recover it. 00:37:40.707 [2024-11-17 02:57:48.881289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.707 [2024-11-17 02:57:48.881325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.707 qpair failed and we were unable to recover it. 
00:37:40.707 [2024-11-17 02:57:48.881462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.707 [2024-11-17 02:57:48.881505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.707 qpair failed and we were unable to recover it. 00:37:40.707 [2024-11-17 02:57:48.881644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.707 [2024-11-17 02:57:48.881678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.707 qpair failed and we were unable to recover it. 00:37:40.707 [2024-11-17 02:57:48.881840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.707 [2024-11-17 02:57:48.881878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.707 qpair failed and we were unable to recover it. 00:37:40.707 [2024-11-17 02:57:48.882042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.707 [2024-11-17 02:57:48.882077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.707 qpair failed and we were unable to recover it. 00:37:40.707 [2024-11-17 02:57:48.882201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.707 [2024-11-17 02:57:48.882234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.707 qpair failed and we were unable to recover it. 
00:37:40.707 [2024-11-17 02:57:48.882396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.707 [2024-11-17 02:57:48.882430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.707 qpair failed and we were unable to recover it.
00:37:40.707 [2024-11-17 02:57:48.882577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.707 [2024-11-17 02:57:48.882611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.707 qpair failed and we were unable to recover it.
00:37:40.707 [2024-11-17 02:57:48.882747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.707 [2024-11-17 02:57:48.882783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.707 qpair failed and we were unable to recover it.
00:37:40.707 [2024-11-17 02:57:48.882961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.707 [2024-11-17 02:57:48.883010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.707 qpair failed and we were unable to recover it.
00:37:40.707 [2024-11-17 02:57:48.883167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.707 [2024-11-17 02:57:48.883206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.707 qpair failed and we were unable to recover it.
00:37:40.707 [2024-11-17 02:57:48.883345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.707 [2024-11-17 02:57:48.883381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.707 qpair failed and we were unable to recover it.
00:37:40.707 [2024-11-17 02:57:48.883520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.707 [2024-11-17 02:57:48.883556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.707 qpair failed and we were unable to recover it.
00:37:40.707 [2024-11-17 02:57:48.883690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.707 [2024-11-17 02:57:48.883726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.707 qpair failed and we were unable to recover it.
00:37:40.707 [2024-11-17 02:57:48.883862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.707 [2024-11-17 02:57:48.883901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.707 qpair failed and we were unable to recover it.
00:37:40.707 [2024-11-17 02:57:48.884025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.707 [2024-11-17 02:57:48.884061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.707 qpair failed and we were unable to recover it.
00:37:40.707 [2024-11-17 02:57:48.884184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.707 [2024-11-17 02:57:48.884219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.707 qpair failed and we were unable to recover it.
00:37:40.707 [2024-11-17 02:57:48.884349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.707 [2024-11-17 02:57:48.884384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.707 qpair failed and we were unable to recover it.
00:37:40.707 [2024-11-17 02:57:48.884643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.707 [2024-11-17 02:57:48.884682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.707 qpair failed and we were unable to recover it.
00:37:40.707 [2024-11-17 02:57:48.884871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.707 [2024-11-17 02:57:48.884907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.707 qpair failed and we were unable to recover it.
00:37:40.707 [2024-11-17 02:57:48.885019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.707 [2024-11-17 02:57:48.885052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.707 qpair failed and we were unable to recover it.
00:37:40.707 [2024-11-17 02:57:48.885202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.707 [2024-11-17 02:57:48.885238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.707 qpair failed and we were unable to recover it.
00:37:40.707 [2024-11-17 02:57:48.885386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.707 [2024-11-17 02:57:48.885421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.707 qpair failed and we were unable to recover it.
00:37:40.707 [2024-11-17 02:57:48.885581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.707 [2024-11-17 02:57:48.885615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.707 qpair failed and we were unable to recover it.
00:37:40.707 [2024-11-17 02:57:48.885777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.707 [2024-11-17 02:57:48.885815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.707 qpair failed and we were unable to recover it.
00:37:40.707 [2024-11-17 02:57:48.885996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.707 [2024-11-17 02:57:48.886066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.707 qpair failed and we were unable to recover it.
00:37:40.707 [2024-11-17 02:57:48.886235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.707 [2024-11-17 02:57:48.886295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.707 qpair failed and we were unable to recover it.
00:37:40.707 [2024-11-17 02:57:48.886476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.707 [2024-11-17 02:57:48.886512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.707 qpair failed and we were unable to recover it.
00:37:40.707 [2024-11-17 02:57:48.886768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.707 [2024-11-17 02:57:48.886828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.707 qpair failed and we were unable to recover it.
00:37:40.707 [2024-11-17 02:57:48.886982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.707 [2024-11-17 02:57:48.887020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.707 qpair failed and we were unable to recover it.
00:37:40.707 [2024-11-17 02:57:48.887180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.707 [2024-11-17 02:57:48.887217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.707 qpair failed and we were unable to recover it.
00:37:40.707 [2024-11-17 02:57:48.887357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.707 [2024-11-17 02:57:48.887392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.707 qpair failed and we were unable to recover it.
00:37:40.707 [2024-11-17 02:57:48.887491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.707 [2024-11-17 02:57:48.887524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.707 qpair failed and we were unable to recover it.
00:37:40.707 [2024-11-17 02:57:48.887656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.707 [2024-11-17 02:57:48.887708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.707 qpair failed and we were unable to recover it.
00:37:40.708 [2024-11-17 02:57:48.887874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.708 [2024-11-17 02:57:48.887930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.708 qpair failed and we were unable to recover it.
00:37:40.708 [2024-11-17 02:57:48.888107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.708 [2024-11-17 02:57:48.888146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.708 qpair failed and we were unable to recover it.
00:37:40.708 [2024-11-17 02:57:48.888294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.708 [2024-11-17 02:57:48.888331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.708 qpair failed and we were unable to recover it.
00:37:40.708 [2024-11-17 02:57:48.888437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.708 [2024-11-17 02:57:48.888473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.708 qpair failed and we were unable to recover it.
00:37:40.708 [2024-11-17 02:57:48.888603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.708 [2024-11-17 02:57:48.888645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.708 qpair failed and we were unable to recover it.
00:37:40.708 [2024-11-17 02:57:48.888784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.708 [2024-11-17 02:57:48.888819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.708 qpair failed and we were unable to recover it.
00:37:40.708 [2024-11-17 02:57:48.888960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.708 [2024-11-17 02:57:48.888997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.708 qpair failed and we were unable to recover it.
00:37:40.708 [2024-11-17 02:57:48.889162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.708 [2024-11-17 02:57:48.889197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.708 qpair failed and we were unable to recover it.
00:37:40.708 [2024-11-17 02:57:48.889338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.708 [2024-11-17 02:57:48.889373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.708 qpair failed and we were unable to recover it.
00:37:40.708 [2024-11-17 02:57:48.889486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.708 [2024-11-17 02:57:48.889521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.708 qpair failed and we were unable to recover it.
00:37:40.708 [2024-11-17 02:57:48.889688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.708 [2024-11-17 02:57:48.889751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.708 qpair failed and we were unable to recover it.
00:37:40.708 [2024-11-17 02:57:48.889874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.708 [2024-11-17 02:57:48.889912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.708 qpair failed and we were unable to recover it.
00:37:40.708 [2024-11-17 02:57:48.890063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.708 [2024-11-17 02:57:48.890106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.708 qpair failed and we were unable to recover it.
00:37:40.708 [2024-11-17 02:57:48.890248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.708 [2024-11-17 02:57:48.890283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.708 qpair failed and we were unable to recover it.
00:37:40.708 [2024-11-17 02:57:48.890464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.708 [2024-11-17 02:57:48.890520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.708 qpair failed and we were unable to recover it.
00:37:40.708 [2024-11-17 02:57:48.890673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.708 [2024-11-17 02:57:48.890730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.708 qpair failed and we were unable to recover it.
00:37:40.708 [2024-11-17 02:57:48.890918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.708 [2024-11-17 02:57:48.890958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.708 qpair failed and we were unable to recover it.
00:37:40.708 [2024-11-17 02:57:48.891076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.708 [2024-11-17 02:57:48.891152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.708 qpair failed and we were unable to recover it.
00:37:40.708 [2024-11-17 02:57:48.891273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.708 [2024-11-17 02:57:48.891314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.708 qpair failed and we were unable to recover it.
00:37:40.708 [2024-11-17 02:57:48.891486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.708 [2024-11-17 02:57:48.891525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.708 qpair failed and we were unable to recover it.
00:37:40.708 [2024-11-17 02:57:48.891672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.708 [2024-11-17 02:57:48.891710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.708 qpair failed and we were unable to recover it.
00:37:40.708 [2024-11-17 02:57:48.891933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.708 [2024-11-17 02:57:48.891972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.708 qpair failed and we were unable to recover it.
00:37:40.708 [2024-11-17 02:57:48.892128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.708 [2024-11-17 02:57:48.892163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.708 qpair failed and we were unable to recover it.
00:37:40.708 [2024-11-17 02:57:48.892304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.708 [2024-11-17 02:57:48.892346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.708 qpair failed and we were unable to recover it.
00:37:40.708 [2024-11-17 02:57:48.892488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.708 [2024-11-17 02:57:48.892523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.708 qpair failed and we were unable to recover it.
00:37:40.708 [2024-11-17 02:57:48.892676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.708 [2024-11-17 02:57:48.892714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.708 qpair failed and we were unable to recover it.
00:37:40.708 [2024-11-17 02:57:48.892889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.708 [2024-11-17 02:57:48.892928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.708 qpair failed and we were unable to recover it.
00:37:40.708 [2024-11-17 02:57:48.893140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.708 [2024-11-17 02:57:48.893192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.708 qpair failed and we were unable to recover it.
00:37:40.708 [2024-11-17 02:57:48.893313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.708 [2024-11-17 02:57:48.893352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.708 qpair failed and we were unable to recover it.
00:37:40.708 [2024-11-17 02:57:48.893488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.708 [2024-11-17 02:57:48.893558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.708 qpair failed and we were unable to recover it.
00:37:40.708 [2024-11-17 02:57:48.893676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.708 [2024-11-17 02:57:48.893716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.708 qpair failed and we were unable to recover it.
00:37:40.708 [2024-11-17 02:57:48.893976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.708 [2024-11-17 02:57:48.894054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.708 qpair failed and we were unable to recover it.
00:37:40.708 [2024-11-17 02:57:48.894219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.708 [2024-11-17 02:57:48.894256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.708 qpair failed and we were unable to recover it.
00:37:40.708 [2024-11-17 02:57:48.894395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.708 [2024-11-17 02:57:48.894432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.708 qpair failed and we were unable to recover it.
00:37:40.708 [2024-11-17 02:57:48.894573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.708 [2024-11-17 02:57:48.894608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.708 qpair failed and we were unable to recover it.
00:37:40.709 [2024-11-17 02:57:48.894763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.709 [2024-11-17 02:57:48.894803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.709 qpair failed and we were unable to recover it.
00:37:40.709 [2024-11-17 02:57:48.894931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.709 [2024-11-17 02:57:48.894972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.709 qpair failed and we were unable to recover it.
00:37:40.709 [2024-11-17 02:57:48.895145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.709 [2024-11-17 02:57:48.895181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.709 qpair failed and we were unable to recover it.
00:37:40.709 [2024-11-17 02:57:48.895308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.709 [2024-11-17 02:57:48.895345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.709 qpair failed and we were unable to recover it.
00:37:40.709 [2024-11-17 02:57:48.895529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.709 [2024-11-17 02:57:48.895570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.709 qpair failed and we were unable to recover it.
00:37:40.709 [2024-11-17 02:57:48.895724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.709 [2024-11-17 02:57:48.895764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.709 qpair failed and we were unable to recover it.
00:37:40.709 [2024-11-17 02:57:48.895886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.709 [2024-11-17 02:57:48.895941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.709 qpair failed and we were unable to recover it.
00:37:40.709 [2024-11-17 02:57:48.896057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.709 [2024-11-17 02:57:48.896106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.709 qpair failed and we were unable to recover it.
00:37:40.709 [2024-11-17 02:57:48.896262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.709 [2024-11-17 02:57:48.896297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.709 qpair failed and we were unable to recover it.
00:37:40.709 [2024-11-17 02:57:48.896455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.709 [2024-11-17 02:57:48.896502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.709 qpair failed and we were unable to recover it.
00:37:40.709 [2024-11-17 02:57:48.896655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.709 [2024-11-17 02:57:48.896695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.709 qpair failed and we were unable to recover it.
00:37:40.709 [2024-11-17 02:57:48.896849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.709 [2024-11-17 02:57:48.896889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.709 qpair failed and we were unable to recover it.
00:37:40.709 [2024-11-17 02:57:48.897071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.709 [2024-11-17 02:57:48.897114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.709 qpair failed and we were unable to recover it.
00:37:40.709 [2024-11-17 02:57:48.897275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.709 [2024-11-17 02:57:48.897310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.709 qpair failed and we were unable to recover it.
00:37:40.709 [2024-11-17 02:57:48.897439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.709 [2024-11-17 02:57:48.897474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.709 qpair failed and we were unable to recover it.
00:37:40.709 [2024-11-17 02:57:48.897616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.709 [2024-11-17 02:57:48.897655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.709 qpair failed and we were unable to recover it.
00:37:40.709 [2024-11-17 02:57:48.897778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.709 [2024-11-17 02:57:48.897817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.709 qpair failed and we were unable to recover it.
00:37:40.709 [2024-11-17 02:57:48.898000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.709 [2024-11-17 02:57:48.898039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.709 qpair failed and we were unable to recover it.
00:37:40.709 [2024-11-17 02:57:48.898186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.709 [2024-11-17 02:57:48.898222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.709 qpair failed and we were unable to recover it.
00:37:40.709 [2024-11-17 02:57:48.898355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.709 [2024-11-17 02:57:48.898403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.709 qpair failed and we were unable to recover it.
00:37:40.709 [2024-11-17 02:57:48.898532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.709 [2024-11-17 02:57:48.898580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.709 qpair failed and we were unable to recover it.
00:37:40.709 [2024-11-17 02:57:48.898790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.709 [2024-11-17 02:57:48.898849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.709 qpair failed and we were unable to recover it.
00:37:40.709 [2024-11-17 02:57:48.898978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.709 [2024-11-17 02:57:48.899016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.709 qpair failed and we were unable to recover it.
00:37:40.709 [2024-11-17 02:57:48.899177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.709 [2024-11-17 02:57:48.899220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.709 qpair failed and we were unable to recover it.
00:37:40.709 [2024-11-17 02:57:48.899383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.709 [2024-11-17 02:57:48.899418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.709 qpair failed and we were unable to recover it.
00:37:40.709 [2024-11-17 02:57:48.899559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.709 [2024-11-17 02:57:48.899612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.709 qpair failed and we were unable to recover it.
00:37:40.709 [2024-11-17 02:57:48.899752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.709 [2024-11-17 02:57:48.899790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.709 qpair failed and we were unable to recover it.
00:37:40.709 [2024-11-17 02:57:48.899934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.709 [2024-11-17 02:57:48.899972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.709 qpair failed and we were unable to recover it.
00:37:40.709 [2024-11-17 02:57:48.900113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.709 [2024-11-17 02:57:48.900165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.709 qpair failed and we were unable to recover it.
00:37:40.709 [2024-11-17 02:57:48.900263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.709 [2024-11-17 02:57:48.900298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.709 qpair failed and we were unable to recover it.
00:37:40.709 [2024-11-17 02:57:48.900472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.709 [2024-11-17 02:57:48.900509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.709 qpair failed and we were unable to recover it.
00:37:40.709 [2024-11-17 02:57:48.900651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.709 [2024-11-17 02:57:48.900686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.709 qpair failed and we were unable to recover it.
00:37:40.709 [2024-11-17 02:57:48.900833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.709 [2024-11-17 02:57:48.900872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.709 qpair failed and we were unable to recover it.
00:37:40.709 [2024-11-17 02:57:48.901008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.709 [2024-11-17 02:57:48.901049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.709 qpair failed and we were unable to recover it.
00:37:40.709 [2024-11-17 02:57:48.901174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.709 [2024-11-17 02:57:48.901209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.709 qpair failed and we were unable to recover it.
00:37:40.709 [2024-11-17 02:57:48.901316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.710 [2024-11-17 02:57:48.901351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.710 qpair failed and we were unable to recover it.
00:37:40.710 [2024-11-17 02:57:48.901532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.710 [2024-11-17 02:57:48.901600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.710 qpair failed and we were unable to recover it.
00:37:40.710 [2024-11-17 02:57:48.901785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.710 [2024-11-17 02:57:48.901841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.710 qpair failed and we were unable to recover it.
00:37:40.710 [2024-11-17 02:57:48.902028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.710 [2024-11-17 02:57:48.902070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.710 qpair failed and we were unable to recover it.
00:37:40.710 [2024-11-17 02:57:48.902212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.710 [2024-11-17 02:57:48.902249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.710 qpair failed and we were unable to recover it.
00:37:40.710 [2024-11-17 02:57:48.902386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.710 [2024-11-17 02:57:48.902424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.710 qpair failed and we were unable to recover it.
00:37:40.710 [2024-11-17 02:57:48.902598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.710 [2024-11-17 02:57:48.902637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.710 qpair failed and we were unable to recover it.
00:37:40.710 [2024-11-17 02:57:48.902752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.710 [2024-11-17 02:57:48.902790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.710 qpair failed and we were unable to recover it.
00:37:40.710 [2024-11-17 02:57:48.902936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.710 [2024-11-17 02:57:48.902975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.710 qpair failed and we were unable to recover it.
00:37:40.710 [2024-11-17 02:57:48.903187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.710 [2024-11-17 02:57:48.903247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.710 qpair failed and we were unable to recover it.
00:37:40.710 [2024-11-17 02:57:48.903445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.710 [2024-11-17 02:57:48.903494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.710 qpair failed and we were unable to recover it.
00:37:40.710 [2024-11-17 02:57:48.903662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.710 [2024-11-17 02:57:48.903704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.710 qpair failed and we were unable to recover it.
00:37:40.710 [2024-11-17 02:57:48.903821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.710 [2024-11-17 02:57:48.903862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.710 qpair failed and we were unable to recover it.
00:37:40.710 [2024-11-17 02:57:48.904024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.710 [2024-11-17 02:57:48.904063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.710 qpair failed and we were unable to recover it.
00:37:40.710 [2024-11-17 02:57:48.904233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.710 [2024-11-17 02:57:48.904278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.710 qpair failed and we were unable to recover it.
00:37:40.710 [2024-11-17 02:57:48.904456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.710 [2024-11-17 02:57:48.904498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.710 qpair failed and we were unable to recover it.
00:37:40.710 [2024-11-17 02:57:48.904672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.710 [2024-11-17 02:57:48.904729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.710 qpair failed and we were unable to recover it.
00:37:40.710 [2024-11-17 02:57:48.904849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.710 [2024-11-17 02:57:48.904891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.710 qpair failed and we were unable to recover it. 00:37:40.710 [2024-11-17 02:57:48.905064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.710 [2024-11-17 02:57:48.905107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.710 qpair failed and we were unable to recover it. 00:37:40.710 [2024-11-17 02:57:48.905272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.710 [2024-11-17 02:57:48.905308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.710 qpair failed and we were unable to recover it. 00:37:40.710 [2024-11-17 02:57:48.905411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.710 [2024-11-17 02:57:48.905447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.710 qpair failed and we were unable to recover it. 00:37:40.710 [2024-11-17 02:57:48.905581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.710 [2024-11-17 02:57:48.905622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.710 qpair failed and we were unable to recover it. 
00:37:40.710 [2024-11-17 02:57:48.905773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.710 [2024-11-17 02:57:48.905812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.710 qpair failed and we were unable to recover it. 00:37:40.710 [2024-11-17 02:57:48.905926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.710 [2024-11-17 02:57:48.905966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.710 qpair failed and we were unable to recover it. 00:37:40.710 [2024-11-17 02:57:48.906174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.710 [2024-11-17 02:57:48.906213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.710 qpair failed and we were unable to recover it. 00:37:40.710 [2024-11-17 02:57:48.906375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.710 [2024-11-17 02:57:48.906428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.710 qpair failed and we were unable to recover it. 00:37:40.710 [2024-11-17 02:57:48.906621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.710 [2024-11-17 02:57:48.906678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.710 qpair failed and we were unable to recover it. 
00:37:40.710 [2024-11-17 02:57:48.906824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.710 [2024-11-17 02:57:48.906896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.710 qpair failed and we were unable to recover it. 00:37:40.710 [2024-11-17 02:57:48.907054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.710 [2024-11-17 02:57:48.907090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.710 qpair failed and we were unable to recover it. 00:37:40.710 [2024-11-17 02:57:48.907272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.710 [2024-11-17 02:57:48.907325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.710 qpair failed and we were unable to recover it. 00:37:40.710 [2024-11-17 02:57:48.907503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.710 [2024-11-17 02:57:48.907564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.710 qpair failed and we were unable to recover it. 00:37:40.710 [2024-11-17 02:57:48.907772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.711 [2024-11-17 02:57:48.907834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.711 qpair failed and we were unable to recover it. 
00:37:40.711 [2024-11-17 02:57:48.907992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.711 [2024-11-17 02:57:48.908033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.711 qpair failed and we were unable to recover it. 00:37:40.711 [2024-11-17 02:57:48.908234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.711 [2024-11-17 02:57:48.908289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.711 qpair failed and we were unable to recover it. 00:37:40.711 [2024-11-17 02:57:48.908464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.711 [2024-11-17 02:57:48.908534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.711 qpair failed and we were unable to recover it. 00:37:40.711 [2024-11-17 02:57:48.908672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.711 [2024-11-17 02:57:48.908729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.711 qpair failed and we were unable to recover it. 00:37:40.711 [2024-11-17 02:57:48.908852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.711 [2024-11-17 02:57:48.908891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.711 qpair failed and we were unable to recover it. 
00:37:40.711 [2024-11-17 02:57:48.909046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.711 [2024-11-17 02:57:48.909086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.711 qpair failed and we were unable to recover it. 00:37:40.711 [2024-11-17 02:57:48.909228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.711 [2024-11-17 02:57:48.909264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.711 qpair failed and we were unable to recover it. 00:37:40.711 [2024-11-17 02:57:48.909380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.711 [2024-11-17 02:57:48.909416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.711 qpair failed and we were unable to recover it. 00:37:40.711 [2024-11-17 02:57:48.909544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.711 [2024-11-17 02:57:48.909597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.711 qpair failed and we were unable to recover it. 00:37:40.711 [2024-11-17 02:57:48.909846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.711 [2024-11-17 02:57:48.909923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.711 qpair failed and we were unable to recover it. 
00:37:40.711 [2024-11-17 02:57:48.910110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.711 [2024-11-17 02:57:48.910151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.711 qpair failed and we were unable to recover it. 00:37:40.711 [2024-11-17 02:57:48.910265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.711 [2024-11-17 02:57:48.910301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.711 qpair failed and we were unable to recover it. 00:37:40.711 [2024-11-17 02:57:48.910452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.711 [2024-11-17 02:57:48.910489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.711 qpair failed and we were unable to recover it. 00:37:40.711 [2024-11-17 02:57:48.910659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.711 [2024-11-17 02:57:48.910699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.711 qpair failed and we were unable to recover it. 00:37:40.711 [2024-11-17 02:57:48.910893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.711 [2024-11-17 02:57:48.910969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.711 qpair failed and we were unable to recover it. 
00:37:40.711 [2024-11-17 02:57:48.911173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.711 [2024-11-17 02:57:48.911212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.711 qpair failed and we were unable to recover it. 00:37:40.711 [2024-11-17 02:57:48.911375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.711 [2024-11-17 02:57:48.911410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.711 qpair failed and we were unable to recover it. 00:37:40.711 [2024-11-17 02:57:48.911559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.711 [2024-11-17 02:57:48.911627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.711 qpair failed and we were unable to recover it. 00:37:40.711 [2024-11-17 02:57:48.911759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.711 [2024-11-17 02:57:48.911797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.711 qpair failed and we were unable to recover it. 00:37:40.711 [2024-11-17 02:57:48.912026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.711 [2024-11-17 02:57:48.912064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.711 qpair failed and we were unable to recover it. 
00:37:40.711 [2024-11-17 02:57:48.912227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.711 [2024-11-17 02:57:48.912277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.711 qpair failed and we were unable to recover it. 00:37:40.711 [2024-11-17 02:57:48.912425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.711 [2024-11-17 02:57:48.912480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.711 qpair failed and we were unable to recover it. 00:37:40.711 [2024-11-17 02:57:48.912660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.711 [2024-11-17 02:57:48.912706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.711 qpair failed and we were unable to recover it. 00:37:40.711 [2024-11-17 02:57:48.912886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.711 [2024-11-17 02:57:48.912926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.711 qpair failed and we were unable to recover it. 00:37:40.711 [2024-11-17 02:57:48.913054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.711 [2024-11-17 02:57:48.913122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.711 qpair failed and we were unable to recover it. 
00:37:40.711 [2024-11-17 02:57:48.913259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.711 [2024-11-17 02:57:48.913316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.711 qpair failed and we were unable to recover it. 00:37:40.711 [2024-11-17 02:57:48.913433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.711 [2024-11-17 02:57:48.913472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.711 qpair failed and we were unable to recover it. 00:37:40.711 [2024-11-17 02:57:48.913591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.711 [2024-11-17 02:57:48.913631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.711 qpair failed and we were unable to recover it. 00:37:40.711 [2024-11-17 02:57:48.913755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.711 [2024-11-17 02:57:48.913795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.711 qpair failed and we were unable to recover it. 00:37:40.711 [2024-11-17 02:57:48.913953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.711 [2024-11-17 02:57:48.913993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.711 qpair failed and we were unable to recover it. 
00:37:40.711 [2024-11-17 02:57:48.914159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.711 [2024-11-17 02:57:48.914197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.711 qpair failed and we were unable to recover it. 00:37:40.711 [2024-11-17 02:57:48.914339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.711 [2024-11-17 02:57:48.914374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.711 qpair failed and we were unable to recover it. 00:37:40.711 [2024-11-17 02:57:48.914508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.711 [2024-11-17 02:57:48.914542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.711 qpair failed and we were unable to recover it. 00:37:40.711 [2024-11-17 02:57:48.914681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.712 [2024-11-17 02:57:48.914723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.712 qpair failed and we were unable to recover it. 00:37:40.712 [2024-11-17 02:57:48.914858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.712 [2024-11-17 02:57:48.914896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.712 qpair failed and we were unable to recover it. 
00:37:40.712 [2024-11-17 02:57:48.915045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.712 [2024-11-17 02:57:48.915084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.712 qpair failed and we were unable to recover it. 00:37:40.712 [2024-11-17 02:57:48.915262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.712 [2024-11-17 02:57:48.915299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.712 qpair failed and we were unable to recover it. 00:37:40.712 [2024-11-17 02:57:48.915424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.712 [2024-11-17 02:57:48.915463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.712 qpair failed and we were unable to recover it. 00:37:40.712 [2024-11-17 02:57:48.915614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.712 [2024-11-17 02:57:48.915652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.712 qpair failed and we were unable to recover it. 00:37:40.712 [2024-11-17 02:57:48.915788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.712 [2024-11-17 02:57:48.915827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.712 qpair failed and we were unable to recover it. 
00:37:40.712 [2024-11-17 02:57:48.916012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.712 [2024-11-17 02:57:48.916052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.712 qpair failed and we were unable to recover it. 00:37:40.712 [2024-11-17 02:57:48.916203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.712 [2024-11-17 02:57:48.916242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.712 qpair failed and we were unable to recover it. 00:37:40.712 [2024-11-17 02:57:48.916356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.712 [2024-11-17 02:57:48.916409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.712 qpair failed and we were unable to recover it. 00:37:40.712 [2024-11-17 02:57:48.916642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.712 [2024-11-17 02:57:48.916681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.712 qpair failed and we were unable to recover it. 00:37:40.712 [2024-11-17 02:57:48.916852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.712 [2024-11-17 02:57:48.916892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.712 qpair failed and we were unable to recover it. 
00:37:40.712 [2024-11-17 02:57:48.917064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.712 [2024-11-17 02:57:48.917109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.712 qpair failed and we were unable to recover it. 00:37:40.712 [2024-11-17 02:57:48.917286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.712 [2024-11-17 02:57:48.917326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.712 qpair failed and we were unable to recover it. 00:37:40.712 [2024-11-17 02:57:48.917494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.712 [2024-11-17 02:57:48.917528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.712 qpair failed and we were unable to recover it. 00:37:40.712 [2024-11-17 02:57:48.917659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.712 [2024-11-17 02:57:48.917692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.712 qpair failed and we were unable to recover it. 00:37:40.712 [2024-11-17 02:57:48.917837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.712 [2024-11-17 02:57:48.917876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.712 qpair failed and we were unable to recover it. 
00:37:40.712 [2024-11-17 02:57:48.918059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.712 [2024-11-17 02:57:48.918123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.712 qpair failed and we were unable to recover it. 00:37:40.712 [2024-11-17 02:57:48.918265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.712 [2024-11-17 02:57:48.918303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.712 qpair failed and we were unable to recover it. 00:37:40.712 [2024-11-17 02:57:48.918483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.712 [2024-11-17 02:57:48.918523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.712 qpair failed and we were unable to recover it. 00:37:40.712 [2024-11-17 02:57:48.918700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.712 [2024-11-17 02:57:48.918746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.712 qpair failed and we were unable to recover it. 00:37:40.712 [2024-11-17 02:57:48.918922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.712 [2024-11-17 02:57:48.918961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.712 qpair failed and we were unable to recover it. 
00:37:40.712 [2024-11-17 02:57:48.919121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.712 [2024-11-17 02:57:48.919157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.712 qpair failed and we were unable to recover it. 00:37:40.712 [2024-11-17 02:57:48.919290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.712 [2024-11-17 02:57:48.919325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.712 qpair failed and we were unable to recover it. 00:37:40.712 [2024-11-17 02:57:48.919427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.712 [2024-11-17 02:57:48.919479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.712 qpair failed and we were unable to recover it. 00:37:40.712 [2024-11-17 02:57:48.919608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.712 [2024-11-17 02:57:48.919665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.712 qpair failed and we were unable to recover it. 00:37:40.712 [2024-11-17 02:57:48.919819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.712 [2024-11-17 02:57:48.919859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.712 qpair failed and we were unable to recover it. 
00:37:40.712 [2024-11-17 02:57:48.920042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.712 [2024-11-17 02:57:48.920081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.712 qpair failed and we were unable to recover it. 00:37:40.712 [2024-11-17 02:57:48.920220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.712 [2024-11-17 02:57:48.920256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.712 qpair failed and we were unable to recover it. 00:37:40.712 [2024-11-17 02:57:48.920398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.712 [2024-11-17 02:57:48.920441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.712 qpair failed and we were unable to recover it. 00:37:40.712 [2024-11-17 02:57:48.920606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.712 [2024-11-17 02:57:48.920654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.712 qpair failed and we were unable to recover it. 00:37:40.712 [2024-11-17 02:57:48.920795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.712 [2024-11-17 02:57:48.920831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.712 qpair failed and we were unable to recover it. 
00:37:40.712 [2024-11-17 02:57:48.920997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.712 [2024-11-17 02:57:48.921032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.712 qpair failed and we were unable to recover it. 00:37:40.712 [2024-11-17 02:57:48.921173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.712 [2024-11-17 02:57:48.921223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.712 qpair failed and we were unable to recover it. 00:37:40.712 [2024-11-17 02:57:48.921350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.712 [2024-11-17 02:57:48.921389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.712 qpair failed and we were unable to recover it. 00:37:40.712 [2024-11-17 02:57:48.921531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.712 [2024-11-17 02:57:48.921568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.712 qpair failed and we were unable to recover it. 00:37:40.712 [2024-11-17 02:57:48.921735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.712 [2024-11-17 02:57:48.921801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.712 qpair failed and we were unable to recover it. 
00:37:40.712 [2024-11-17 02:57:48.921958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.712 [2024-11-17 02:57:48.921995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.712 qpair failed and we were unable to recover it. 00:37:40.712 [2024-11-17 02:57:48.922132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.713 [2024-11-17 02:57:48.922186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.713 qpair failed and we were unable to recover it. 00:37:40.713 [2024-11-17 02:57:48.922315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.713 [2024-11-17 02:57:48.922356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.713 qpair failed and we were unable to recover it. 00:37:40.713 [2024-11-17 02:57:48.922516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.713 [2024-11-17 02:57:48.922552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.713 qpair failed and we were unable to recover it. 00:37:40.713 [2024-11-17 02:57:48.922689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.713 [2024-11-17 02:57:48.922725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.713 qpair failed and we were unable to recover it. 
00:37:40.713 [2024-11-17 02:57:48.922830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.713 [2024-11-17 02:57:48.922867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.713 qpair failed and we were unable to recover it. 00:37:40.713 [2024-11-17 02:57:48.923009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.713 [2024-11-17 02:57:48.923043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.713 qpair failed and we were unable to recover it. 00:37:40.713 [2024-11-17 02:57:48.923190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.713 [2024-11-17 02:57:48.923227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.713 qpair failed and we were unable to recover it. 00:37:40.713 [2024-11-17 02:57:48.923372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.713 [2024-11-17 02:57:48.923407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.713 qpair failed and we were unable to recover it. 00:37:40.713 [2024-11-17 02:57:48.923537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.713 [2024-11-17 02:57:48.923572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.713 qpair failed and we were unable to recover it. 
00:37:40.713 [2024-11-17 02:57:48.923739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.713 [2024-11-17 02:57:48.923775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.713 qpair failed and we were unable to recover it. 00:37:40.713 [2024-11-17 02:57:48.923945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.713 [2024-11-17 02:57:48.923981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.713 qpair failed and we were unable to recover it. 00:37:40.713 [2024-11-17 02:57:48.924120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.713 [2024-11-17 02:57:48.924156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.713 qpair failed and we were unable to recover it. 00:37:40.713 [2024-11-17 02:57:48.924289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.713 [2024-11-17 02:57:48.924325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.713 qpair failed and we were unable to recover it. 00:37:40.713 [2024-11-17 02:57:48.924490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.713 [2024-11-17 02:57:48.924526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.713 qpair failed and we were unable to recover it. 
00:37:40.713 [2024-11-17 02:57:48.924640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.713 [2024-11-17 02:57:48.924676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.713 qpair failed and we were unable to recover it. 00:37:40.713 [2024-11-17 02:57:48.924796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.713 [2024-11-17 02:57:48.924836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.713 qpair failed and we were unable to recover it. 00:37:40.713 [2024-11-17 02:57:48.925013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.713 [2024-11-17 02:57:48.925057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.713 qpair failed and we were unable to recover it. 00:37:40.713 [2024-11-17 02:57:48.925217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.713 [2024-11-17 02:57:48.925253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.713 qpair failed and we were unable to recover it. 00:37:40.713 [2024-11-17 02:57:48.925395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.713 [2024-11-17 02:57:48.925432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.713 qpair failed and we were unable to recover it. 
00:37:40.713 [2024-11-17 02:57:48.925542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.713 [2024-11-17 02:57:48.925577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.713 qpair failed and we were unable to recover it. 00:37:40.713 [2024-11-17 02:57:48.925715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.713 [2024-11-17 02:57:48.925751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.713 qpair failed and we were unable to recover it. 00:37:40.713 [2024-11-17 02:57:48.925862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.713 [2024-11-17 02:57:48.925899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.713 qpair failed and we were unable to recover it. 00:37:40.713 [2024-11-17 02:57:48.926071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.713 [2024-11-17 02:57:48.926113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.713 qpair failed and we were unable to recover it. 00:37:40.713 [2024-11-17 02:57:48.926228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.713 [2024-11-17 02:57:48.926265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.713 qpair failed and we were unable to recover it. 
00:37:40.713 [2024-11-17 02:57:48.926415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.713 [2024-11-17 02:57:48.926450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.713 qpair failed and we were unable to recover it. 00:37:40.713 [2024-11-17 02:57:48.926558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.713 [2024-11-17 02:57:48.926594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.713 qpair failed and we were unable to recover it. 00:37:40.713 [2024-11-17 02:57:48.926733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.713 [2024-11-17 02:57:48.926768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.713 qpair failed and we were unable to recover it. 00:37:40.713 [2024-11-17 02:57:48.926917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.713 [2024-11-17 02:57:48.926953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.713 qpair failed and we were unable to recover it. 00:37:40.713 [2024-11-17 02:57:48.927114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.713 [2024-11-17 02:57:48.927163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.713 qpair failed and we were unable to recover it. 
00:37:40.713 [2024-11-17 02:57:48.927356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.713 [2024-11-17 02:57:48.927404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.713 qpair failed and we were unable to recover it. 00:37:40.713 [2024-11-17 02:57:48.927541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.713 [2024-11-17 02:57:48.927602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.713 qpair failed and we were unable to recover it. 00:37:40.713 [2024-11-17 02:57:48.927754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.713 [2024-11-17 02:57:48.927814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.713 qpair failed and we were unable to recover it. 00:37:40.713 [2024-11-17 02:57:48.928009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.713 [2024-11-17 02:57:48.928070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.713 qpair failed and we were unable to recover it. 00:37:40.713 [2024-11-17 02:57:48.928276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.714 [2024-11-17 02:57:48.928317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.714 qpair failed and we were unable to recover it. 
00:37:40.714 [2024-11-17 02:57:48.928460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.714 [2024-11-17 02:57:48.928497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.714 qpair failed and we were unable to recover it. 00:37:40.714 [2024-11-17 02:57:48.928679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.714 [2024-11-17 02:57:48.928733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.714 qpair failed and we were unable to recover it. 00:37:40.714 [2024-11-17 02:57:48.928894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.714 [2024-11-17 02:57:48.928951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.714 qpair failed and we were unable to recover it. 00:37:40.714 [2024-11-17 02:57:48.929108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.714 [2024-11-17 02:57:48.929163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.714 qpair failed and we were unable to recover it. 00:37:40.714 [2024-11-17 02:57:48.929269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.714 [2024-11-17 02:57:48.929305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.714 qpair failed and we were unable to recover it. 
00:37:40.714 [2024-11-17 02:57:48.929414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.714 [2024-11-17 02:57:48.929449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.714 qpair failed and we were unable to recover it. 00:37:40.714 [2024-11-17 02:57:48.929582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.714 [2024-11-17 02:57:48.929617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.714 qpair failed and we were unable to recover it. 00:37:40.714 [2024-11-17 02:57:48.929773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.714 [2024-11-17 02:57:48.929813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.714 qpair failed and we were unable to recover it. 00:37:40.714 [2024-11-17 02:57:48.930006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.714 [2024-11-17 02:57:48.930048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.714 qpair failed and we were unable to recover it. 00:37:40.714 [2024-11-17 02:57:48.930226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.714 [2024-11-17 02:57:48.930265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.714 qpair failed and we were unable to recover it. 
00:37:40.714 [2024-11-17 02:57:48.930432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.714 [2024-11-17 02:57:48.930469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.714 qpair failed and we were unable to recover it. 00:37:40.714 [2024-11-17 02:57:48.930608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.714 [2024-11-17 02:57:48.930646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.714 qpair failed and we were unable to recover it. 00:37:40.714 [2024-11-17 02:57:48.930805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.714 [2024-11-17 02:57:48.930844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.714 qpair failed and we were unable to recover it. 00:37:40.714 [2024-11-17 02:57:48.931011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.714 [2024-11-17 02:57:48.931052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.714 qpair failed and we were unable to recover it. 00:37:40.714 [2024-11-17 02:57:48.931225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.714 [2024-11-17 02:57:48.931262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.714 qpair failed and we were unable to recover it. 
00:37:40.714 [2024-11-17 02:57:48.931418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.714 [2024-11-17 02:57:48.931463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.714 qpair failed and we were unable to recover it. 00:37:40.714 [2024-11-17 02:57:48.931623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.714 [2024-11-17 02:57:48.931662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.714 qpair failed and we were unable to recover it. 00:37:40.714 [2024-11-17 02:57:48.931769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.714 [2024-11-17 02:57:48.931806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.714 qpair failed and we were unable to recover it. 00:37:40.714 [2024-11-17 02:57:48.931983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.714 [2024-11-17 02:57:48.932022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.714 qpair failed and we were unable to recover it. 00:37:40.714 [2024-11-17 02:57:48.932190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.714 [2024-11-17 02:57:48.932225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.714 qpair failed and we were unable to recover it. 
00:37:40.714 [2024-11-17 02:57:48.932365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.714 [2024-11-17 02:57:48.932416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.714 qpair failed and we were unable to recover it. 00:37:40.714 [2024-11-17 02:57:48.932537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.714 [2024-11-17 02:57:48.932592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.714 qpair failed and we were unable to recover it. 00:37:40.714 [2024-11-17 02:57:48.932757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.714 [2024-11-17 02:57:48.932809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.714 qpair failed and we were unable to recover it. 00:37:40.714 [2024-11-17 02:57:48.933002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.714 [2024-11-17 02:57:48.933042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.714 qpair failed and we were unable to recover it. 00:37:40.714 [2024-11-17 02:57:48.933183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.714 [2024-11-17 02:57:48.933225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.714 qpair failed and we were unable to recover it. 
00:37:40.714 [2024-11-17 02:57:48.933357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.714 [2024-11-17 02:57:48.933393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.714 qpair failed and we were unable to recover it. 00:37:40.714 [2024-11-17 02:57:48.933507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.714 [2024-11-17 02:57:48.933543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.714 qpair failed and we were unable to recover it. 00:37:40.714 [2024-11-17 02:57:48.933705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.714 [2024-11-17 02:57:48.933741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.714 qpair failed and we were unable to recover it. 00:37:40.714 [2024-11-17 02:57:48.933891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.714 [2024-11-17 02:57:48.933929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.714 qpair failed and we were unable to recover it. 00:37:40.714 [2024-11-17 02:57:48.934067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.714 [2024-11-17 02:57:48.934120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.714 qpair failed and we were unable to recover it. 
00:37:40.714 [2024-11-17 02:57:48.934308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.714 [2024-11-17 02:57:48.934358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.714 qpair failed and we were unable to recover it. 00:37:40.714 [2024-11-17 02:57:48.934525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.714 [2024-11-17 02:57:48.934566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.714 qpair failed and we were unable to recover it. 00:37:40.714 [2024-11-17 02:57:48.934713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.714 [2024-11-17 02:57:48.934753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.714 qpair failed and we were unable to recover it. 00:37:40.714 [2024-11-17 02:57:48.934932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.715 [2024-11-17 02:57:48.934971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.715 qpair failed and we were unable to recover it. 00:37:40.715 [2024-11-17 02:57:48.935100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.715 [2024-11-17 02:57:48.935135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.715 qpair failed and we were unable to recover it. 
00:37:40.715 [2024-11-17 02:57:48.935371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.715 [2024-11-17 02:57:48.935449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.715 qpair failed and we were unable to recover it. 00:37:40.715 [2024-11-17 02:57:48.935624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.715 [2024-11-17 02:57:48.935665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.715 qpair failed and we were unable to recover it. 00:37:40.715 [2024-11-17 02:57:48.935870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.715 [2024-11-17 02:57:48.935909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.715 qpair failed and we were unable to recover it. 00:37:40.715 [2024-11-17 02:57:48.936089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.715 [2024-11-17 02:57:48.936159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.715 qpair failed and we were unable to recover it. 00:37:40.715 [2024-11-17 02:57:48.936317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.715 [2024-11-17 02:57:48.936367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.715 qpair failed and we were unable to recover it. 
00:37:40.715 [2024-11-17 02:57:48.936507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.715 [2024-11-17 02:57:48.936545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.715 qpair failed and we were unable to recover it. 00:37:40.715 [2024-11-17 02:57:48.936675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.715 [2024-11-17 02:57:48.936711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.715 qpair failed and we were unable to recover it. 00:37:40.715 [2024-11-17 02:57:48.936851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.715 [2024-11-17 02:57:48.936886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.715 qpair failed and we were unable to recover it. 00:37:40.715 [2024-11-17 02:57:48.937030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.715 [2024-11-17 02:57:48.937065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.715 qpair failed and we were unable to recover it. 00:37:40.715 [2024-11-17 02:57:48.937253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.715 [2024-11-17 02:57:48.937303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.715 qpair failed and we were unable to recover it. 
00:37:40.715 [2024-11-17 02:57:48.937478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.715 [2024-11-17 02:57:48.937534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.715 qpair failed and we were unable to recover it. 00:37:40.715 [2024-11-17 02:57:48.937685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.715 [2024-11-17 02:57:48.937722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.715 qpair failed and we were unable to recover it. 00:37:40.715 [2024-11-17 02:57:48.937856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.715 [2024-11-17 02:57:48.937896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.715 qpair failed and we were unable to recover it. 00:37:40.715 [2024-11-17 02:57:48.938040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.715 [2024-11-17 02:57:48.938075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.715 qpair failed and we were unable to recover it. 00:37:40.715 [2024-11-17 02:57:48.938287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.715 [2024-11-17 02:57:48.938321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.715 qpair failed and we were unable to recover it. 
00:37:40.715 [2024-11-17 02:57:48.938437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.715 [2024-11-17 02:57:48.938474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.715 qpair failed and we were unable to recover it. 00:37:40.715 [2024-11-17 02:57:48.938597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.715 [2024-11-17 02:57:48.938633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.715 qpair failed and we were unable to recover it. 00:37:40.715 [2024-11-17 02:57:48.938772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.715 [2024-11-17 02:57:48.938807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.715 qpair failed and we were unable to recover it. 00:37:40.715 [2024-11-17 02:57:48.938915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.715 [2024-11-17 02:57:48.938951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.715 qpair failed and we were unable to recover it. 00:37:40.715 [2024-11-17 02:57:48.939091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.715 [2024-11-17 02:57:48.939137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.715 qpair failed and we were unable to recover it. 
00:37:40.715 [2024-11-17 02:57:48.939289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.715 [2024-11-17 02:57:48.939326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.715 qpair failed and we were unable to recover it. 00:37:40.715 [2024-11-17 02:57:48.939445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.715 [2024-11-17 02:57:48.939481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.715 qpair failed and we were unable to recover it. 00:37:40.715 [2024-11-17 02:57:48.939621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.715 [2024-11-17 02:57:48.939657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.715 qpair failed and we were unable to recover it. 00:37:40.715 [2024-11-17 02:57:48.939791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.715 [2024-11-17 02:57:48.939826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.715 qpair failed and we were unable to recover it. 00:37:40.715 [2024-11-17 02:57:48.939938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.715 [2024-11-17 02:57:48.939975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.715 qpair failed and we were unable to recover it. 
00:37:40.715 [2024-11-17 02:57:48.940113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.715 [2024-11-17 02:57:48.940176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.715 qpair failed and we were unable to recover it.
00:37:40.715 [2024-11-17 02:57:48.940307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.715 [2024-11-17 02:57:48.940343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.716 qpair failed and we were unable to recover it.
00:37:40.716 [2024-11-17 02:57:48.940478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.716 [2024-11-17 02:57:48.940514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.716 qpair failed and we were unable to recover it.
00:37:40.716 [2024-11-17 02:57:48.940643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.716 [2024-11-17 02:57:48.940678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.716 qpair failed and we were unable to recover it.
00:37:40.716 [2024-11-17 02:57:48.940818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.716 [2024-11-17 02:57:48.940861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.716 qpair failed and we were unable to recover it.
00:37:40.716 [2024-11-17 02:57:48.941025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.716 [2024-11-17 02:57:48.941061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.716 qpair failed and we were unable to recover it.
00:37:40.716 [2024-11-17 02:57:48.941237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.716 [2024-11-17 02:57:48.941274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.716 qpair failed and we were unable to recover it.
00:37:40.716 [2024-11-17 02:57:48.941381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.716 [2024-11-17 02:57:48.941415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.716 qpair failed and we were unable to recover it.
00:37:40.716 [2024-11-17 02:57:48.941550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.716 [2024-11-17 02:57:48.941586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.716 qpair failed and we were unable to recover it.
00:37:40.716 [2024-11-17 02:57:48.941730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.716 [2024-11-17 02:57:48.941766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.716 qpair failed and we were unable to recover it.
00:37:40.716 [2024-11-17 02:57:48.941908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.716 [2024-11-17 02:57:48.941943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.716 qpair failed and we were unable to recover it.
00:37:40.716 [2024-11-17 02:57:48.942052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.716 [2024-11-17 02:57:48.942090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.716 qpair failed and we were unable to recover it.
00:37:40.716 [2024-11-17 02:57:48.942290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.716 [2024-11-17 02:57:48.942326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.716 qpair failed and we were unable to recover it.
00:37:40.716 [2024-11-17 02:57:48.942438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.716 [2024-11-17 02:57:48.942473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.716 qpair failed and we were unable to recover it.
00:37:40.716 [2024-11-17 02:57:48.942604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.716 [2024-11-17 02:57:48.942639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.716 qpair failed and we were unable to recover it.
00:37:40.716 [2024-11-17 02:57:48.942770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.716 [2024-11-17 02:57:48.942804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.716 qpair failed and we were unable to recover it.
00:37:40.716 [2024-11-17 02:57:48.942963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.716 [2024-11-17 02:57:48.942998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.716 qpair failed and we were unable to recover it.
00:37:40.716 [2024-11-17 02:57:48.943136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.716 [2024-11-17 02:57:48.943181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.716 qpair failed and we were unable to recover it.
00:37:40.716 [2024-11-17 02:57:48.943324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.716 [2024-11-17 02:57:48.943363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.716 qpair failed and we were unable to recover it.
00:37:40.716 [2024-11-17 02:57:48.943499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.716 [2024-11-17 02:57:48.943534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.716 qpair failed and we were unable to recover it.
00:37:40.716 [2024-11-17 02:57:48.943707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.716 [2024-11-17 02:57:48.943746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.716 qpair failed and we were unable to recover it.
00:37:40.716 [2024-11-17 02:57:48.943918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.716 [2024-11-17 02:57:48.943956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.716 qpair failed and we were unable to recover it.
00:37:40.716 [2024-11-17 02:57:48.944126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.716 [2024-11-17 02:57:48.944162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.716 qpair failed and we were unable to recover it.
00:37:40.716 [2024-11-17 02:57:48.944295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.716 [2024-11-17 02:57:48.944331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.716 qpair failed and we were unable to recover it.
00:37:40.716 [2024-11-17 02:57:48.944462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.716 [2024-11-17 02:57:48.944497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.716 qpair failed and we were unable to recover it.
00:37:40.716 [2024-11-17 02:57:48.944635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.716 [2024-11-17 02:57:48.944670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.716 qpair failed and we were unable to recover it.
00:37:40.716 [2024-11-17 02:57:48.944810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.716 [2024-11-17 02:57:48.944847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.716 qpair failed and we were unable to recover it.
00:37:40.716 [2024-11-17 02:57:48.944991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.716 [2024-11-17 02:57:48.945035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.716 qpair failed and we were unable to recover it.
00:37:40.716 [2024-11-17 02:57:48.945163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.716 [2024-11-17 02:57:48.945200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.716 qpair failed and we were unable to recover it.
00:37:40.716 [2024-11-17 02:57:48.945342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.716 [2024-11-17 02:57:48.945376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.716 qpair failed and we were unable to recover it.
00:37:40.716 [2024-11-17 02:57:48.945513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.716 [2024-11-17 02:57:48.945548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.716 qpair failed and we were unable to recover it.
00:37:40.716 [2024-11-17 02:57:48.945651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.716 [2024-11-17 02:57:48.945686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.716 qpair failed and we were unable to recover it.
00:37:40.716 [2024-11-17 02:57:48.945791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.716 [2024-11-17 02:57:48.945828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.716 qpair failed and we were unable to recover it.
00:37:40.716 [2024-11-17 02:57:48.945947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.716 [2024-11-17 02:57:48.945985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.716 qpair failed and we were unable to recover it.
00:37:40.716 [2024-11-17 02:57:48.946126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.716 [2024-11-17 02:57:48.946163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.716 qpair failed and we were unable to recover it.
00:37:40.716 [2024-11-17 02:57:48.946277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.716 [2024-11-17 02:57:48.946312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.716 qpair failed and we were unable to recover it.
00:37:40.716 [2024-11-17 02:57:48.946445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.716 [2024-11-17 02:57:48.946479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.716 qpair failed and we were unable to recover it.
00:37:40.716 [2024-11-17 02:57:48.946644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.716 [2024-11-17 02:57:48.946679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.716 qpair failed and we were unable to recover it.
00:37:40.716 [2024-11-17 02:57:48.946793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.716 [2024-11-17 02:57:48.946830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.716 qpair failed and we were unable to recover it.
00:37:40.717 [2024-11-17 02:57:48.946985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.717 [2024-11-17 02:57:48.947022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.717 qpair failed and we were unable to recover it.
00:37:40.717 [2024-11-17 02:57:48.947145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.717 [2024-11-17 02:57:48.947192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.717 qpair failed and we were unable to recover it.
00:37:40.717 [2024-11-17 02:57:48.947308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.717 [2024-11-17 02:57:48.947343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.717 qpair failed and we were unable to recover it.
00:37:40.717 [2024-11-17 02:57:48.947503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.717 [2024-11-17 02:57:48.947538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.717 qpair failed and we were unable to recover it.
00:37:40.717 [2024-11-17 02:57:48.947643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.717 [2024-11-17 02:57:48.947678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.717 qpair failed and we were unable to recover it.
00:37:40.717 [2024-11-17 02:57:48.947816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.717 [2024-11-17 02:57:48.947862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.717 qpair failed and we were unable to recover it.
00:37:40.717 [2024-11-17 02:57:48.947998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.717 [2024-11-17 02:57:48.948047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.717 qpair failed and we were unable to recover it.
00:37:40.717 [2024-11-17 02:57:48.948213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.717 [2024-11-17 02:57:48.948251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.717 qpair failed and we were unable to recover it.
00:37:40.717 [2024-11-17 02:57:48.948398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.717 [2024-11-17 02:57:48.948433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.717 qpair failed and we were unable to recover it.
00:37:40.717 [2024-11-17 02:57:48.948541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.717 [2024-11-17 02:57:48.948574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.717 qpair failed and we were unable to recover it.
00:37:40.717 [2024-11-17 02:57:48.948705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.717 [2024-11-17 02:57:48.948740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.717 qpair failed and we were unable to recover it.
00:37:40.717 [2024-11-17 02:57:48.948906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.717 [2024-11-17 02:57:48.948943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.717 qpair failed and we were unable to recover it.
00:37:40.717 [2024-11-17 02:57:48.949040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.717 [2024-11-17 02:57:48.949076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.717 qpair failed and we were unable to recover it.
00:37:40.717 [2024-11-17 02:57:48.949196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.717 [2024-11-17 02:57:48.949231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.717 qpair failed and we were unable to recover it.
00:37:40.717 [2024-11-17 02:57:48.949372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.717 [2024-11-17 02:57:48.949414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.717 qpair failed and we were unable to recover it.
00:37:40.717 [2024-11-17 02:57:48.949542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.717 [2024-11-17 02:57:48.949577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.717 qpair failed and we were unable to recover it.
00:37:40.717 [2024-11-17 02:57:48.949704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.717 [2024-11-17 02:57:48.949739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.717 qpair failed and we were unable to recover it.
00:37:40.717 [2024-11-17 02:57:48.949901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.717 [2024-11-17 02:57:48.949942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.717 qpair failed and we were unable to recover it.
00:37:40.717 [2024-11-17 02:57:48.950124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.717 [2024-11-17 02:57:48.950191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.717 qpair failed and we were unable to recover it.
00:37:40.717 [2024-11-17 02:57:48.950345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.717 [2024-11-17 02:57:48.950383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.717 qpair failed and we were unable to recover it.
00:37:40.717 [2024-11-17 02:57:48.950498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.717 [2024-11-17 02:57:48.950536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.717 qpair failed and we were unable to recover it.
00:37:40.717 [2024-11-17 02:57:48.950708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.717 [2024-11-17 02:57:48.950744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.717 qpair failed and we were unable to recover it.
00:37:40.717 [2024-11-17 02:57:48.950839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.717 [2024-11-17 02:57:48.950875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.717 qpair failed and we were unable to recover it.
00:37:40.717 [2024-11-17 02:57:48.950990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.717 [2024-11-17 02:57:48.951028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.717 qpair failed and we were unable to recover it.
00:37:40.717 [2024-11-17 02:57:48.951209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.717 [2024-11-17 02:57:48.951248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.717 qpair failed and we were unable to recover it.
00:37:40.717 [2024-11-17 02:57:48.951390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.717 [2024-11-17 02:57:48.951426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.717 qpair failed and we were unable to recover it.
00:37:40.717 [2024-11-17 02:57:48.951520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.717 [2024-11-17 02:57:48.951552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.717 qpair failed and we were unable to recover it.
00:37:40.717 [2024-11-17 02:57:48.951690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.717 [2024-11-17 02:57:48.951753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.717 qpair failed and we were unable to recover it.
00:37:40.717 [2024-11-17 02:57:48.951890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.717 [2024-11-17 02:57:48.951926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.717 qpair failed and we were unable to recover it.
00:37:40.717 [2024-11-17 02:57:48.952086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.717 [2024-11-17 02:57:48.952144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.717 qpair failed and we were unable to recover it.
00:37:40.717 [2024-11-17 02:57:48.952307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.717 [2024-11-17 02:57:48.952344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.717 qpair failed and we were unable to recover it.
00:37:40.717 [2024-11-17 02:57:48.952483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.717 [2024-11-17 02:57:48.952518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.717 qpair failed and we were unable to recover it.
00:37:40.717 [2024-11-17 02:57:48.952687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.717 [2024-11-17 02:57:48.952723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.717 qpair failed and we were unable to recover it.
00:37:40.717 [2024-11-17 02:57:48.952876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.717 [2024-11-17 02:57:48.952912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.717 qpair failed and we were unable to recover it.
00:37:40.717 [2024-11-17 02:57:48.953072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.717 [2024-11-17 02:57:48.953113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.717 qpair failed and we were unable to recover it.
00:37:40.717 [2024-11-17 02:57:48.953247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.718 [2024-11-17 02:57:48.953282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.718 qpair failed and we were unable to recover it.
00:37:40.718 [2024-11-17 02:57:48.953414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.718 [2024-11-17 02:57:48.953465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.718 qpair failed and we were unable to recover it.
00:37:40.718 [2024-11-17 02:57:48.953585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.718 [2024-11-17 02:57:48.953624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.718 qpair failed and we were unable to recover it.
00:37:40.718 [2024-11-17 02:57:48.953796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.718 [2024-11-17 02:57:48.953833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.718 qpair failed and we were unable to recover it.
00:37:40.718 [2024-11-17 02:57:48.953989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.718 [2024-11-17 02:57:48.954028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.718 qpair failed and we were unable to recover it.
00:37:40.718 [2024-11-17 02:57:48.954225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.718 [2024-11-17 02:57:48.954261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.718 qpair failed and we were unable to recover it.
00:37:40.718 [2024-11-17 02:57:48.954372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.718 [2024-11-17 02:57:48.954413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.718 qpair failed and we were unable to recover it.
00:37:40.718 [2024-11-17 02:57:48.954623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.718 [2024-11-17 02:57:48.954659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.718 qpair failed and we were unable to recover it.
00:37:40.718 [2024-11-17 02:57:48.954823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.718 [2024-11-17 02:57:48.954858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.718 qpair failed and we were unable to recover it.
00:37:40.718 [2024-11-17 02:57:48.954986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.718 [2024-11-17 02:57:48.955021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.718 qpair failed and we were unable to recover it.
00:37:40.718 [2024-11-17 02:57:48.955131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.718 [2024-11-17 02:57:48.955171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.718 qpair failed and we were unable to recover it.
00:37:40.718 [2024-11-17 02:57:48.955335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.718 [2024-11-17 02:57:48.955374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.718 qpair failed and we were unable to recover it.
00:37:40.718 [2024-11-17 02:57:48.955538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.718 [2024-11-17 02:57:48.955576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.718 qpair failed and we were unable to recover it.
00:37:40.718 [2024-11-17 02:57:48.955743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.718 [2024-11-17 02:57:48.955789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.718 qpair failed and we were unable to recover it.
00:37:40.718 [2024-11-17 02:57:48.955949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.718 [2024-11-17 02:57:48.955984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.718 qpair failed and we were unable to recover it.
00:37:40.718 [2024-11-17 02:57:48.956147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.718 [2024-11-17 02:57:48.956182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.718 qpair failed and we were unable to recover it.
00:37:40.718 [2024-11-17 02:57:48.956288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.718 [2024-11-17 02:57:48.956327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.718 qpair failed and we were unable to recover it.
00:37:40.718 [2024-11-17 02:57:48.956469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.718 [2024-11-17 02:57:48.956504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.718 qpair failed and we were unable to recover it.
00:37:40.718 [2024-11-17 02:57:48.956612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.718 [2024-11-17 02:57:48.956647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.718 qpair failed and we were unable to recover it.
00:37:40.718 [2024-11-17 02:57:48.956792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.718 [2024-11-17 02:57:48.956829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.718 qpair failed and we were unable to recover it.
00:37:40.718 [2024-11-17 02:57:48.956980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.718 [2024-11-17 02:57:48.957015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.718 qpair failed and we were unable to recover it.
00:37:40.718 [2024-11-17 02:57:48.957203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.718 [2024-11-17 02:57:48.957254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.718 qpair failed and we were unable to recover it. 00:37:40.718 [2024-11-17 02:57:48.957402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.718 [2024-11-17 02:57:48.957440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.718 qpair failed and we were unable to recover it. 00:37:40.718 [2024-11-17 02:57:48.957585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.718 [2024-11-17 02:57:48.957621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.718 qpair failed and we were unable to recover it. 00:37:40.718 [2024-11-17 02:57:48.957758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.718 [2024-11-17 02:57:48.957794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.718 qpair failed and we were unable to recover it. 00:37:40.718 [2024-11-17 02:57:48.957934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.718 [2024-11-17 02:57:48.957970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.718 qpair failed and we were unable to recover it. 
00:37:40.718 [2024-11-17 02:57:48.958078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.718 [2024-11-17 02:57:48.958123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.718 qpair failed and we were unable to recover it. 00:37:40.718 [2024-11-17 02:57:48.958316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.718 [2024-11-17 02:57:48.958360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.718 qpair failed and we were unable to recover it. 00:37:40.718 [2024-11-17 02:57:48.958470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.718 [2024-11-17 02:57:48.958506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.718 qpair failed and we were unable to recover it. 00:37:40.718 [2024-11-17 02:57:48.958668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.718 [2024-11-17 02:57:48.958702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.718 qpair failed and we were unable to recover it. 00:37:40.718 [2024-11-17 02:57:48.958869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.718 [2024-11-17 02:57:48.958906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.718 qpair failed and we were unable to recover it. 
00:37:40.718 [2024-11-17 02:57:48.959064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.718 [2024-11-17 02:57:48.959122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.718 qpair failed and we were unable to recover it. 00:37:40.718 [2024-11-17 02:57:48.959295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.718 [2024-11-17 02:57:48.959333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.718 qpair failed and we were unable to recover it. 00:37:40.718 [2024-11-17 02:57:48.959500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.718 [2024-11-17 02:57:48.959536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.718 qpair failed and we were unable to recover it. 00:37:40.718 [2024-11-17 02:57:48.959650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.719 [2024-11-17 02:57:48.959686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.719 qpair failed and we were unable to recover it. 00:37:40.719 [2024-11-17 02:57:48.959796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.719 [2024-11-17 02:57:48.959832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.719 qpair failed and we were unable to recover it. 
00:37:40.719 [2024-11-17 02:57:48.959947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.719 [2024-11-17 02:57:48.959985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.719 qpair failed and we were unable to recover it. 00:37:40.719 [2024-11-17 02:57:48.960136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.719 [2024-11-17 02:57:48.960173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.719 qpair failed and we were unable to recover it. 00:37:40.719 [2024-11-17 02:57:48.960305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.719 [2024-11-17 02:57:48.960340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.719 qpair failed and we were unable to recover it. 00:37:40.719 [2024-11-17 02:57:48.960445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.719 [2024-11-17 02:57:48.960481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.719 qpair failed and we were unable to recover it. 00:37:40.719 [2024-11-17 02:57:48.960622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.719 [2024-11-17 02:57:48.960657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.719 qpair failed and we were unable to recover it. 
00:37:40.719 [2024-11-17 02:57:48.960788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.719 [2024-11-17 02:57:48.960823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.719 qpair failed and we were unable to recover it. 00:37:40.719 [2024-11-17 02:57:48.960947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.719 [2024-11-17 02:57:48.960985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.719 qpair failed and we were unable to recover it. 00:37:40.719 [2024-11-17 02:57:48.961094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.719 [2024-11-17 02:57:48.961136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.719 qpair failed and we were unable to recover it. 00:37:40.719 [2024-11-17 02:57:48.961277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.719 [2024-11-17 02:57:48.961312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.719 qpair failed and we were unable to recover it. 00:37:40.719 [2024-11-17 02:57:48.961448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.719 [2024-11-17 02:57:48.961482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.719 qpair failed and we were unable to recover it. 
00:37:40.719 [2024-11-17 02:57:48.961611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.719 [2024-11-17 02:57:48.961646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.719 qpair failed and we were unable to recover it. 00:37:40.719 [2024-11-17 02:57:48.961785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.719 [2024-11-17 02:57:48.961825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.719 qpair failed and we were unable to recover it. 00:37:40.719 [2024-11-17 02:57:48.961966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.719 [2024-11-17 02:57:48.962001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.719 qpair failed and we were unable to recover it. 00:37:40.719 [2024-11-17 02:57:48.962122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.719 [2024-11-17 02:57:48.962160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.719 qpair failed and we were unable to recover it. 00:37:40.719 [2024-11-17 02:57:48.962321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.719 [2024-11-17 02:57:48.962363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.719 qpair failed and we were unable to recover it. 
00:37:40.719 [2024-11-17 02:57:48.962475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.719 [2024-11-17 02:57:48.962511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.719 qpair failed and we were unable to recover it. 00:37:40.719 [2024-11-17 02:57:48.962671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.719 [2024-11-17 02:57:48.962707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.719 qpair failed and we were unable to recover it. 00:37:40.719 [2024-11-17 02:57:48.962811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.719 [2024-11-17 02:57:48.962847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.719 qpair failed and we were unable to recover it. 00:37:40.719 [2024-11-17 02:57:48.962972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.719 [2024-11-17 02:57:48.963022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.719 qpair failed and we were unable to recover it. 00:37:40.719 [2024-11-17 02:57:48.963165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.719 [2024-11-17 02:57:48.963205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.719 qpair failed and we were unable to recover it. 
00:37:40.719 [2024-11-17 02:57:48.963324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.719 [2024-11-17 02:57:48.963359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.719 qpair failed and we were unable to recover it. 00:37:40.719 [2024-11-17 02:57:48.963485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.719 [2024-11-17 02:57:48.963520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.719 qpair failed and we were unable to recover it. 00:37:40.719 [2024-11-17 02:57:48.963616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.719 [2024-11-17 02:57:48.963655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.719 qpair failed and we were unable to recover it. 00:37:40.719 [2024-11-17 02:57:48.963784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.719 [2024-11-17 02:57:48.963819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.719 qpair failed and we were unable to recover it. 00:37:40.719 [2024-11-17 02:57:48.963961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.719 [2024-11-17 02:57:48.963997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.719 qpair failed and we were unable to recover it. 
00:37:40.719 [2024-11-17 02:57:48.964147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.719 [2024-11-17 02:57:48.964188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.719 qpair failed and we were unable to recover it. 00:37:40.719 [2024-11-17 02:57:48.964326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.719 [2024-11-17 02:57:48.964362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.719 qpair failed and we were unable to recover it. 00:37:40.719 [2024-11-17 02:57:48.964544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.719 [2024-11-17 02:57:48.964587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.719 qpair failed and we were unable to recover it. 00:37:40.719 [2024-11-17 02:57:48.964768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.719 [2024-11-17 02:57:48.964830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.719 qpair failed and we were unable to recover it. 00:37:40.719 [2024-11-17 02:57:48.965024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.719 [2024-11-17 02:57:48.965059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.719 qpair failed and we were unable to recover it. 
00:37:40.719 [2024-11-17 02:57:48.965248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.719 [2024-11-17 02:57:48.965285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.719 qpair failed and we were unable to recover it. 00:37:40.719 [2024-11-17 02:57:48.965509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.720 [2024-11-17 02:57:48.965544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.720 qpair failed and we were unable to recover it. 00:37:40.720 [2024-11-17 02:57:48.965685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.720 [2024-11-17 02:57:48.965734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.720 qpair failed and we were unable to recover it. 00:37:40.720 [2024-11-17 02:57:48.965852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.720 [2024-11-17 02:57:48.965885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.720 qpair failed and we were unable to recover it. 00:37:40.720 [2024-11-17 02:57:48.966022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.720 [2024-11-17 02:57:48.966061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.720 qpair failed and we were unable to recover it. 
00:37:40.720 [2024-11-17 02:57:48.966191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.720 [2024-11-17 02:57:48.966225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.720 qpair failed and we were unable to recover it. 00:37:40.720 [2024-11-17 02:57:48.966358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.720 [2024-11-17 02:57:48.966395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.720 qpair failed and we were unable to recover it. 00:37:40.720 [2024-11-17 02:57:48.966580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.720 [2024-11-17 02:57:48.966622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.720 qpair failed and we were unable to recover it. 00:37:40.720 [2024-11-17 02:57:48.966780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.720 [2024-11-17 02:57:48.966814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.720 qpair failed and we were unable to recover it. 00:37:40.720 [2024-11-17 02:57:48.966917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.720 [2024-11-17 02:57:48.966950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.720 qpair failed and we were unable to recover it. 
00:37:40.720 [2024-11-17 02:57:48.967089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.720 [2024-11-17 02:57:48.967141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.720 qpair failed and we were unable to recover it. 00:37:40.720 [2024-11-17 02:57:48.967308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.720 [2024-11-17 02:57:48.967343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.720 qpair failed and we were unable to recover it. 00:37:40.720 [2024-11-17 02:57:48.967553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.720 [2024-11-17 02:57:48.967595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.720 qpair failed and we were unable to recover it. 00:37:40.720 [2024-11-17 02:57:48.967695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.720 [2024-11-17 02:57:48.967729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.720 qpair failed and we were unable to recover it. 00:37:40.720 [2024-11-17 02:57:48.967898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.720 [2024-11-17 02:57:48.967935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.720 qpair failed and we were unable to recover it. 
00:37:40.720 [2024-11-17 02:57:48.968130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.720 [2024-11-17 02:57:48.968181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.720 qpair failed and we were unable to recover it. 00:37:40.720 [2024-11-17 02:57:48.968354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.720 [2024-11-17 02:57:48.968403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.720 qpair failed and we were unable to recover it. 00:37:40.720 [2024-11-17 02:57:48.968587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.720 [2024-11-17 02:57:48.968624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.720 qpair failed and we were unable to recover it. 00:37:40.720 [2024-11-17 02:57:48.968765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.720 [2024-11-17 02:57:48.968800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.720 qpair failed and we were unable to recover it. 00:37:40.720 [2024-11-17 02:57:48.968950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.720 [2024-11-17 02:57:48.968988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.720 qpair failed and we were unable to recover it. 
00:37:40.720 [2024-11-17 02:57:48.969140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.720 [2024-11-17 02:57:48.969176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.720 qpair failed and we were unable to recover it. 00:37:40.720 [2024-11-17 02:57:48.969315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.720 [2024-11-17 02:57:48.969355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.720 qpair failed and we were unable to recover it. 00:37:40.720 [2024-11-17 02:57:48.969471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.720 [2024-11-17 02:57:48.969506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.720 qpair failed and we were unable to recover it. 00:37:40.720 [2024-11-17 02:57:48.969610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.720 [2024-11-17 02:57:48.969643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.720 qpair failed and we were unable to recover it. 00:37:40.720 [2024-11-17 02:57:48.969739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.720 [2024-11-17 02:57:48.969779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.720 qpair failed and we were unable to recover it. 
00:37:40.720 [2024-11-17 02:57:48.969936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.720 [2024-11-17 02:57:48.969986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.720 qpair failed and we were unable to recover it. 00:37:40.720 [2024-11-17 02:57:48.970112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.720 [2024-11-17 02:57:48.970151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.720 qpair failed and we were unable to recover it. 00:37:40.720 [2024-11-17 02:57:48.970318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.720 [2024-11-17 02:57:48.970355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.720 qpair failed and we were unable to recover it. 00:37:40.720 [2024-11-17 02:57:48.970497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.720 [2024-11-17 02:57:48.970534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.720 qpair failed and we were unable to recover it. 00:37:40.720 [2024-11-17 02:57:48.970642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.720 [2024-11-17 02:57:48.970679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.720 qpair failed and we were unable to recover it. 
00:37:40.720 [2024-11-17 02:57:48.970808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.720 [2024-11-17 02:57:48.970859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.720 qpair failed and we were unable to recover it. 00:37:40.720 [2024-11-17 02:57:48.971055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.720 [2024-11-17 02:57:48.971090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.720 qpair failed and we were unable to recover it. 00:37:40.720 [2024-11-17 02:57:48.971208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.720 [2024-11-17 02:57:48.971245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.720 qpair failed and we were unable to recover it. 00:37:40.720 [2024-11-17 02:57:48.971413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.721 [2024-11-17 02:57:48.971451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.721 qpair failed and we were unable to recover it. 00:37:40.721 [2024-11-17 02:57:48.971573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.721 [2024-11-17 02:57:48.971610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.721 qpair failed and we were unable to recover it. 
00:37:40.721 [2024-11-17 02:57:48.971777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.721 [2024-11-17 02:57:48.971813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.721 qpair failed and we were unable to recover it. 00:37:40.721 [2024-11-17 02:57:48.971952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.721 [2024-11-17 02:57:48.971987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.721 qpair failed and we were unable to recover it. 00:37:40.721 [2024-11-17 02:57:48.972131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.721 [2024-11-17 02:57:48.972167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.721 qpair failed and we were unable to recover it. 00:37:40.721 [2024-11-17 02:57:48.972314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.721 [2024-11-17 02:57:48.972350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.721 qpair failed and we were unable to recover it. 00:37:40.721 [2024-11-17 02:57:48.972568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.721 [2024-11-17 02:57:48.972602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.721 qpair failed and we were unable to recover it. 
00:37:40.721 [2024-11-17 02:57:48.972723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.721 [2024-11-17 02:57:48.972777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.721 qpair failed and we were unable to recover it. 00:37:40.721 [2024-11-17 02:57:48.972903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.721 [2024-11-17 02:57:48.972937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.721 qpair failed and we were unable to recover it. 00:37:40.721 [2024-11-17 02:57:48.973074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.721 [2024-11-17 02:57:48.973118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.721 qpair failed and we were unable to recover it. 00:37:40.721 [2024-11-17 02:57:48.973233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.721 [2024-11-17 02:57:48.973270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.721 qpair failed and we were unable to recover it. 00:37:40.721 [2024-11-17 02:57:48.973379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.721 [2024-11-17 02:57:48.973417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.721 qpair failed and we were unable to recover it. 
00:37:40.721 [2024-11-17 02:57:48.973557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.721 [2024-11-17 02:57:48.973593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.721 qpair failed and we were unable to recover it. 00:37:40.721 [2024-11-17 02:57:48.973704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.721 [2024-11-17 02:57:48.973743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.721 qpair failed and we were unable to recover it. 00:37:40.721 [2024-11-17 02:57:48.973883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.721 [2024-11-17 02:57:48.973918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.721 qpair failed and we were unable to recover it. 00:37:40.721 [2024-11-17 02:57:48.974055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.721 [2024-11-17 02:57:48.974090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.721 qpair failed and we were unable to recover it. 00:37:40.721 [2024-11-17 02:57:48.974228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.721 [2024-11-17 02:57:48.974263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.721 qpair failed and we were unable to recover it. 
00:37:40.721 [2024-11-17 02:57:48.974381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.721 [2024-11-17 02:57:48.974416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.721 qpair failed and we were unable to recover it. 00:37:40.721 [2024-11-17 02:57:48.974557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.721 [2024-11-17 02:57:48.974592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.721 qpair failed and we were unable to recover it. 00:37:40.721 [2024-11-17 02:57:48.974730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.721 [2024-11-17 02:57:48.974770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.721 qpair failed and we were unable to recover it. 00:37:40.721 [2024-11-17 02:57:48.974907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.721 [2024-11-17 02:57:48.974942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.721 qpair failed and we were unable to recover it. 00:37:40.721 [2024-11-17 02:57:48.975058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.721 [2024-11-17 02:57:48.975092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.721 qpair failed and we were unable to recover it. 
00:37:40.721 [2024-11-17 02:57:48.975272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.721 [2024-11-17 02:57:48.975310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.721 qpair failed and we were unable to recover it. 00:37:40.721 [2024-11-17 02:57:48.975447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.721 [2024-11-17 02:57:48.975483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.721 qpair failed and we were unable to recover it. 00:37:40.721 [2024-11-17 02:57:48.975610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.721 [2024-11-17 02:57:48.975645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.721 qpair failed and we were unable to recover it. 00:37:40.721 [2024-11-17 02:57:48.975808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.721 [2024-11-17 02:57:48.975844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.721 qpair failed and we were unable to recover it. 00:37:40.721 [2024-11-17 02:57:48.975983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.721 [2024-11-17 02:57:48.976020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.721 qpair failed and we were unable to recover it. 
00:37:40.721 [2024-11-17 02:57:48.976154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.721 [2024-11-17 02:57:48.976190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.721 qpair failed and we were unable to recover it. 00:37:40.721 [2024-11-17 02:57:48.976306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.721 [2024-11-17 02:57:48.976342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.721 qpair failed and we were unable to recover it. 00:37:40.721 [2024-11-17 02:57:48.976477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.721 [2024-11-17 02:57:48.976520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.721 qpair failed and we were unable to recover it. 00:37:40.721 [2024-11-17 02:57:48.976625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.721 [2024-11-17 02:57:48.976658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.721 qpair failed and we were unable to recover it. 00:37:40.721 [2024-11-17 02:57:48.976774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.721 [2024-11-17 02:57:48.976813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.721 qpair failed and we were unable to recover it. 
00:37:40.721 [2024-11-17 02:57:48.976919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.721 [2024-11-17 02:57:48.976954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.721 qpair failed and we were unable to recover it. 00:37:40.721 [2024-11-17 02:57:48.977109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.721 [2024-11-17 02:57:48.977146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.721 qpair failed and we were unable to recover it. 00:37:40.721 [2024-11-17 02:57:48.977365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.721 [2024-11-17 02:57:48.977401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.721 qpair failed and we were unable to recover it. 00:37:40.721 [2024-11-17 02:57:48.977623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.722 [2024-11-17 02:57:48.977659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.722 qpair failed and we were unable to recover it. 00:37:40.722 [2024-11-17 02:57:48.977822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.722 [2024-11-17 02:57:48.977858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.722 qpair failed and we were unable to recover it. 
00:37:40.722 [2024-11-17 02:57:48.978003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.722 [2024-11-17 02:57:48.978040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.722 qpair failed and we were unable to recover it. 00:37:40.722 [2024-11-17 02:57:48.978196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.722 [2024-11-17 02:57:48.978233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.722 qpair failed and we were unable to recover it. 00:37:40.722 [2024-11-17 02:57:48.978394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.722 [2024-11-17 02:57:48.978430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.722 qpair failed and we were unable to recover it. 00:37:40.722 [2024-11-17 02:57:48.978563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.722 [2024-11-17 02:57:48.978598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.722 qpair failed and we were unable to recover it. 00:37:40.722 [2024-11-17 02:57:48.978702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.722 [2024-11-17 02:57:48.978738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.722 qpair failed and we were unable to recover it. 
00:37:40.722 [2024-11-17 02:57:48.978856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.722 [2024-11-17 02:57:48.978892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.722 qpair failed and we were unable to recover it. 00:37:40.722 [2024-11-17 02:57:48.979059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.722 [2024-11-17 02:57:48.979107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.722 qpair failed and we were unable to recover it. 00:37:40.722 [2024-11-17 02:57:48.979224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.722 [2024-11-17 02:57:48.979258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.722 qpair failed and we were unable to recover it. 00:37:40.722 [2024-11-17 02:57:48.979378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.722 [2024-11-17 02:57:48.979413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.722 qpair failed and we were unable to recover it. 00:37:40.722 [2024-11-17 02:57:48.979560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.722 [2024-11-17 02:57:48.979615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.722 qpair failed and we were unable to recover it. 
00:37:40.722 [2024-11-17 02:57:48.979745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.722 [2024-11-17 02:57:48.979786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.722 qpair failed and we were unable to recover it. 00:37:40.722 [2024-11-17 02:57:48.979912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.722 [2024-11-17 02:57:48.979947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.722 qpair failed and we were unable to recover it. 00:37:40.722 [2024-11-17 02:57:48.980064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.722 [2024-11-17 02:57:48.980110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.722 qpair failed and we were unable to recover it. 00:37:40.722 [2024-11-17 02:57:48.980249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.722 [2024-11-17 02:57:48.980284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.722 qpair failed and we were unable to recover it. 00:37:40.722 [2024-11-17 02:57:48.980422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.722 [2024-11-17 02:57:48.980457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.722 qpair failed and we were unable to recover it. 
00:37:40.722 [2024-11-17 02:57:48.980591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.722 [2024-11-17 02:57:48.980627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.722 qpair failed and we were unable to recover it. 00:37:40.722 [2024-11-17 02:57:48.980787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.722 [2024-11-17 02:57:48.980822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.722 qpair failed and we were unable to recover it. 00:37:40.722 [2024-11-17 02:57:48.980926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.722 [2024-11-17 02:57:48.980961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.722 qpair failed and we were unable to recover it. 00:37:40.722 [2024-11-17 02:57:48.981093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.722 [2024-11-17 02:57:48.981138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.722 qpair failed and we were unable to recover it. 00:37:40.722 [2024-11-17 02:57:48.981273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.722 [2024-11-17 02:57:48.981308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.722 qpair failed and we were unable to recover it. 
00:37:40.722 [2024-11-17 02:57:48.981419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.722 [2024-11-17 02:57:48.981453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.722 qpair failed and we were unable to recover it. 00:37:40.722 [2024-11-17 02:57:48.981567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.722 [2024-11-17 02:57:48.981601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.722 qpair failed and we were unable to recover it. 00:37:40.722 [2024-11-17 02:57:48.981733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.722 [2024-11-17 02:57:48.981768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.722 qpair failed and we were unable to recover it. 00:37:40.722 [2024-11-17 02:57:48.981868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.722 [2024-11-17 02:57:48.981903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.722 qpair failed and we were unable to recover it. 00:37:40.722 [2024-11-17 02:57:48.982033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.722 [2024-11-17 02:57:48.982070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.722 qpair failed and we were unable to recover it. 
00:37:40.722 [2024-11-17 02:57:48.982213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.722 [2024-11-17 02:57:48.982248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.722 qpair failed and we were unable to recover it. 00:37:40.722 [2024-11-17 02:57:48.982353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.722 [2024-11-17 02:57:48.982388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.722 qpair failed and we were unable to recover it. 00:37:40.722 [2024-11-17 02:57:48.982549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.722 [2024-11-17 02:57:48.982584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.722 qpair failed and we were unable to recover it. 00:37:40.722 [2024-11-17 02:57:48.982749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.722 [2024-11-17 02:57:48.982785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.722 qpair failed and we were unable to recover it. 00:37:40.722 [2024-11-17 02:57:48.982926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.722 [2024-11-17 02:57:48.982961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.722 qpair failed and we were unable to recover it. 
00:37:40.722 [2024-11-17 02:57:48.983106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.722 [2024-11-17 02:57:48.983144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.722 qpair failed and we were unable to recover it. 00:37:40.722 [2024-11-17 02:57:48.983250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.722 [2024-11-17 02:57:48.983285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.722 qpair failed and we were unable to recover it. 00:37:40.722 [2024-11-17 02:57:48.983447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.722 [2024-11-17 02:57:48.983483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.722 qpair failed and we were unable to recover it. 00:37:40.722 [2024-11-17 02:57:48.983637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.722 [2024-11-17 02:57:48.983676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.722 qpair failed and we were unable to recover it. 00:37:40.722 [2024-11-17 02:57:48.983805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.722 [2024-11-17 02:57:48.983844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.722 qpair failed and we were unable to recover it. 
00:37:40.722 [2024-11-17 02:57:48.983976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.722 [2024-11-17 02:57:48.984010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.722 qpair failed and we were unable to recover it. 00:37:40.722 [2024-11-17 02:57:48.984146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.723 [2024-11-17 02:57:48.984183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.723 qpair failed and we were unable to recover it. 00:37:40.723 [2024-11-17 02:57:48.984312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.723 [2024-11-17 02:57:48.984348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.723 qpair failed and we were unable to recover it. 00:37:40.723 [2024-11-17 02:57:48.984479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.723 [2024-11-17 02:57:48.984515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.723 qpair failed and we were unable to recover it. 00:37:40.723 [2024-11-17 02:57:48.984643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.723 [2024-11-17 02:57:48.984678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.723 qpair failed and we were unable to recover it. 
00:37:40.723 [2024-11-17 02:57:48.984828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.723 [2024-11-17 02:57:48.984864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.723 qpair failed and we were unable to recover it. 00:37:40.723 [2024-11-17 02:57:48.985007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.723 [2024-11-17 02:57:48.985056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.723 qpair failed and we were unable to recover it. 00:37:40.723 [2024-11-17 02:57:48.985225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.723 [2024-11-17 02:57:48.985262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.723 qpair failed and we were unable to recover it. 00:37:40.723 [2024-11-17 02:57:48.985393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.723 [2024-11-17 02:57:48.985429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.723 qpair failed and we were unable to recover it. 00:37:40.723 [2024-11-17 02:57:48.985572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.723 [2024-11-17 02:57:48.985608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.723 qpair failed and we were unable to recover it. 
00:37:40.723 [2024-11-17 02:57:48.985710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.723 [2024-11-17 02:57:48.985744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.723 qpair failed and we were unable to recover it. 00:37:40.723 [2024-11-17 02:57:48.985876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.723 [2024-11-17 02:57:48.985912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.723 qpair failed and we were unable to recover it. 00:37:40.723 [2024-11-17 02:57:48.986044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.723 [2024-11-17 02:57:48.986080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.723 qpair failed and we were unable to recover it. 00:37:40.723 [2024-11-17 02:57:48.986232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.723 [2024-11-17 02:57:48.986268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.723 qpair failed and we were unable to recover it. 00:37:40.723 [2024-11-17 02:57:48.986377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.723 [2024-11-17 02:57:48.986412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.723 qpair failed and we were unable to recover it. 
00:37:40.723 [2024-11-17 02:57:48.986554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.723 [2024-11-17 02:57:48.986589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.723 qpair failed and we were unable to recover it. 00:37:40.723 [2024-11-17 02:57:48.986687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.723 [2024-11-17 02:57:48.986723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.723 qpair failed and we were unable to recover it. 00:37:40.723 [2024-11-17 02:57:48.986885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.723 [2024-11-17 02:57:48.986920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.723 qpair failed and we were unable to recover it. 00:37:40.723 [2024-11-17 02:57:48.987060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.723 [2024-11-17 02:57:48.987103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.723 qpair failed and we were unable to recover it. 00:37:40.723 [2024-11-17 02:57:48.987233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.723 [2024-11-17 02:57:48.987268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.723 qpair failed and we were unable to recover it. 
00:37:40.723 [2024-11-17 02:57:48.987430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.723 [2024-11-17 02:57:48.987465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.723 qpair failed and we were unable to recover it. 00:37:40.723 [2024-11-17 02:57:48.987610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.723 [2024-11-17 02:57:48.987645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.723 qpair failed and we were unable to recover it. 00:37:40.723 [2024-11-17 02:57:48.987776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.723 [2024-11-17 02:57:48.987811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.723 qpair failed and we were unable to recover it. 00:37:40.723 [2024-11-17 02:57:48.987943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.723 [2024-11-17 02:57:48.987997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.723 qpair failed and we were unable to recover it. 00:37:40.723 [2024-11-17 02:57:48.988141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.723 [2024-11-17 02:57:48.988182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.723 qpair failed and we were unable to recover it. 
00:37:40.723 [2024-11-17 02:57:48.988355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.723 [2024-11-17 02:57:48.988392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.723 qpair failed and we were unable to recover it. 00:37:40.723 [2024-11-17 02:57:48.988554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.723 [2024-11-17 02:57:48.988593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.723 qpair failed and we were unable to recover it. 00:37:40.723 [2024-11-17 02:57:48.988772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.723 [2024-11-17 02:57:48.988812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.723 qpair failed and we were unable to recover it. 00:37:40.723 [2024-11-17 02:57:48.988976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.723 [2024-11-17 02:57:48.989030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.723 qpair failed and we were unable to recover it. 00:37:40.723 [2024-11-17 02:57:48.989215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.723 [2024-11-17 02:57:48.989262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.723 qpair failed and we were unable to recover it. 
00:37:40.723 [2024-11-17 02:57:48.989408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.723 [2024-11-17 02:57:48.989458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.723 qpair failed and we were unable to recover it. 00:37:40.723 [2024-11-17 02:57:48.989596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.723 [2024-11-17 02:57:48.989662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.723 qpair failed and we were unable to recover it. 00:37:40.723 [2024-11-17 02:57:48.989780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.723 [2024-11-17 02:57:48.989827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.723 qpair failed and we were unable to recover it. 00:37:40.723 [2024-11-17 02:57:48.989963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.723 [2024-11-17 02:57:48.990002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.723 qpair failed and we were unable to recover it. 00:37:40.723 [2024-11-17 02:57:48.990184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.723 [2024-11-17 02:57:48.990219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.723 qpair failed and we were unable to recover it. 
00:37:40.723 [2024-11-17 02:57:48.990328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.723 [2024-11-17 02:57:48.990364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.723 qpair failed and we were unable to recover it. 00:37:40.723 [2024-11-17 02:57:48.990538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.723 [2024-11-17 02:57:48.990578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.723 qpair failed and we were unable to recover it. 00:37:40.724 [2024-11-17 02:57:48.990719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.724 [2024-11-17 02:57:48.990779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.724 qpair failed and we were unable to recover it. 00:37:40.724 [2024-11-17 02:57:48.990919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.724 [2024-11-17 02:57:48.990972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.724 qpair failed and we were unable to recover it. 00:37:40.724 [2024-11-17 02:57:48.991112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.724 [2024-11-17 02:57:48.991169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.724 qpair failed and we were unable to recover it. 
00:37:40.724 [2024-11-17 02:57:48.991331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.724 [2024-11-17 02:57:48.991384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.724 qpair failed and we were unable to recover it. 00:37:40.724 [2024-11-17 02:57:48.991540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.724 [2024-11-17 02:57:48.991574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.724 qpair failed and we were unable to recover it. 00:37:40.724 [2024-11-17 02:57:48.991756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.724 [2024-11-17 02:57:48.991797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.724 qpair failed and we were unable to recover it. 00:37:40.724 [2024-11-17 02:57:48.991922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.724 [2024-11-17 02:57:48.991960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.724 qpair failed and we were unable to recover it. 00:37:40.724 [2024-11-17 02:57:48.992122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.724 [2024-11-17 02:57:48.992158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.724 qpair failed and we were unable to recover it. 
00:37:40.724 [2024-11-17 02:57:48.992294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.724 [2024-11-17 02:57:48.992329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.724 qpair failed and we were unable to recover it. 00:37:40.724 [2024-11-17 02:57:48.992466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.724 [2024-11-17 02:57:48.992518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.724 qpair failed and we were unable to recover it. 00:37:40.724 [2024-11-17 02:57:48.992693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.724 [2024-11-17 02:57:48.992740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.724 qpair failed and we were unable to recover it. 00:37:40.724 [2024-11-17 02:57:48.992900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.724 [2024-11-17 02:57:48.992939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.724 qpair failed and we were unable to recover it. 00:37:40.724 [2024-11-17 02:57:48.993052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.724 [2024-11-17 02:57:48.993091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.724 qpair failed and we were unable to recover it. 
00:37:40.724 [2024-11-17 02:57:48.993237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.724 [2024-11-17 02:57:48.993274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.724 qpair failed and we were unable to recover it. 00:37:40.724 [2024-11-17 02:57:48.993451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.724 [2024-11-17 02:57:48.993486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.724 qpair failed and we were unable to recover it. 00:37:40.724 [2024-11-17 02:57:48.993588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.724 [2024-11-17 02:57:48.993641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.724 qpair failed and we were unable to recover it. 00:37:40.724 [2024-11-17 02:57:48.993761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.724 [2024-11-17 02:57:48.993803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.724 qpair failed and we were unable to recover it. 00:37:40.724 [2024-11-17 02:57:48.993940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.724 [2024-11-17 02:57:48.993979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.724 qpair failed and we were unable to recover it. 
00:37:40.724 [2024-11-17 02:57:48.994151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.724 [2024-11-17 02:57:48.994188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.724 qpair failed and we were unable to recover it. 00:37:40.724 [2024-11-17 02:57:48.994302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.724 [2024-11-17 02:57:48.994345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.724 qpair failed and we were unable to recover it. 00:37:40.724 [2024-11-17 02:57:48.994483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.724 [2024-11-17 02:57:48.994518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.724 qpair failed and we were unable to recover it. 00:37:40.724 [2024-11-17 02:57:48.994702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.724 [2024-11-17 02:57:48.994741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.724 qpair failed and we were unable to recover it. 00:37:40.724 [2024-11-17 02:57:48.994897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.724 [2024-11-17 02:57:48.994951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.724 qpair failed and we were unable to recover it. 
00:37:40.724 [2024-11-17 02:57:48.995087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.724 [2024-11-17 02:57:48.995134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.724 qpair failed and we were unable to recover it. 00:37:40.724 [2024-11-17 02:57:48.995277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.724 [2024-11-17 02:57:48.995312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.724 qpair failed and we were unable to recover it. 00:37:40.724 [2024-11-17 02:57:48.995510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.724 [2024-11-17 02:57:48.995549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.724 qpair failed and we were unable to recover it. 00:37:40.724 [2024-11-17 02:57:48.995690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.724 [2024-11-17 02:57:48.995728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.724 qpair failed and we were unable to recover it. 00:37:40.724 [2024-11-17 02:57:48.995853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.724 [2024-11-17 02:57:48.995892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.724 qpair failed and we were unable to recover it. 
00:37:40.724 [2024-11-17 02:57:48.996042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.724 [2024-11-17 02:57:48.996093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.724 qpair failed and we were unable to recover it. 00:37:40.724 [2024-11-17 02:57:48.996309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.724 [2024-11-17 02:57:48.996371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.724 qpair failed and we were unable to recover it. 00:37:40.724 [2024-11-17 02:57:48.996543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.724 [2024-11-17 02:57:48.996600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.724 qpair failed and we were unable to recover it. 00:37:40.725 [2024-11-17 02:57:48.996769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.725 [2024-11-17 02:57:48.996809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.725 qpair failed and we were unable to recover it. 00:37:40.725 [2024-11-17 02:57:48.996994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.725 [2024-11-17 02:57:48.997039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.725 qpair failed and we were unable to recover it. 
00:37:40.725 [2024-11-17 02:57:48.997241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.725 [2024-11-17 02:57:48.997291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.725 qpair failed and we were unable to recover it. 00:37:40.725 [2024-11-17 02:57:48.997443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.725 [2024-11-17 02:57:48.997481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.725 qpair failed and we were unable to recover it. 00:37:40.725 [2024-11-17 02:57:48.997628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.725 [2024-11-17 02:57:48.997664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.725 qpair failed and we were unable to recover it. 00:37:40.725 [2024-11-17 02:57:48.997792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.725 [2024-11-17 02:57:48.997831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.725 qpair failed and we were unable to recover it. 00:37:40.725 [2024-11-17 02:57:48.997998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.725 [2024-11-17 02:57:48.998035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.725 qpair failed and we were unable to recover it. 
00:37:40.725 [2024-11-17 02:57:48.998169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.725 [2024-11-17 02:57:48.998205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.725 qpair failed and we were unable to recover it. 00:37:40.725 [2024-11-17 02:57:48.998342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.725 [2024-11-17 02:57:48.998378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.725 qpair failed and we were unable to recover it. 00:37:40.725 [2024-11-17 02:57:48.998529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.725 [2024-11-17 02:57:48.998569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.725 qpair failed and we were unable to recover it. 00:37:40.725 [2024-11-17 02:57:48.998742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.725 [2024-11-17 02:57:48.998780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.725 qpair failed and we were unable to recover it. 00:37:40.725 [2024-11-17 02:57:48.998888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.725 [2024-11-17 02:57:48.998934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.725 qpair failed and we were unable to recover it. 
00:37:40.725 [2024-11-17 02:57:48.999074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.725 [2024-11-17 02:57:48.999125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.725 qpair failed and we were unable to recover it. 00:37:40.725 [2024-11-17 02:57:48.999262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.725 [2024-11-17 02:57:48.999298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.725 qpair failed and we were unable to recover it. 00:37:40.725 [2024-11-17 02:57:48.999429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.725 [2024-11-17 02:57:48.999464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.725 qpair failed and we were unable to recover it. 00:37:40.725 [2024-11-17 02:57:48.999582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.725 [2024-11-17 02:57:48.999637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.725 qpair failed and we were unable to recover it. 00:37:40.725 [2024-11-17 02:57:48.999865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.725 [2024-11-17 02:57:48.999903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.725 qpair failed and we were unable to recover it. 
00:37:40.725 [2024-11-17 02:57:49.000141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.725 [2024-11-17 02:57:49.000178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.725 qpair failed and we were unable to recover it. 00:37:40.725 [2024-11-17 02:57:49.000299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.725 [2024-11-17 02:57:49.000335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.725 qpair failed and we were unable to recover it. 00:37:40.725 [2024-11-17 02:57:49.000480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.725 [2024-11-17 02:57:49.000534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.725 qpair failed and we were unable to recover it. 00:37:40.725 [2024-11-17 02:57:49.000670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.725 [2024-11-17 02:57:49.000708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.725 qpair failed and we were unable to recover it. 00:37:40.725 [2024-11-17 02:57:49.000881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.725 [2024-11-17 02:57:49.000926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.725 qpair failed and we were unable to recover it. 
00:37:40.725 [2024-11-17 02:57:49.001078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.725 [2024-11-17 02:57:49.001145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.725 qpair failed and we were unable to recover it. 00:37:40.725 [2024-11-17 02:57:49.001309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.725 [2024-11-17 02:57:49.001360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.725 qpair failed and we were unable to recover it. 00:37:40.725 [2024-11-17 02:57:49.001524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.725 [2024-11-17 02:57:49.001567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.725 qpair failed and we were unable to recover it. 00:37:40.725 [2024-11-17 02:57:49.001755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.725 [2024-11-17 02:57:49.001795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.725 qpair failed and we were unable to recover it. 00:37:40.725 [2024-11-17 02:57:49.001947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.725 [2024-11-17 02:57:49.001986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.725 qpair failed and we were unable to recover it. 
00:37:40.725 [2024-11-17 02:57:49.002150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.725 [2024-11-17 02:57:49.002186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.725 qpair failed and we were unable to recover it. 00:37:40.725 [2024-11-17 02:57:49.002325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.725 [2024-11-17 02:57:49.002361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.725 qpair failed and we were unable to recover it. 00:37:40.725 [2024-11-17 02:57:49.002492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.725 [2024-11-17 02:57:49.002528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.725 qpair failed and we were unable to recover it. 00:37:40.725 [2024-11-17 02:57:49.002665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.725 [2024-11-17 02:57:49.002700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.726 qpair failed and we were unable to recover it. 00:37:40.726 [2024-11-17 02:57:49.002835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.726 [2024-11-17 02:57:49.002902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.726 qpair failed and we were unable to recover it. 
00:37:40.726 [2024-11-17 02:57:49.003126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.726 [2024-11-17 02:57:49.003166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.726 qpair failed and we were unable to recover it. 00:37:40.726 [2024-11-17 02:57:49.003304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.726 [2024-11-17 02:57:49.003339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.726 qpair failed and we were unable to recover it. 00:37:40.726 [2024-11-17 02:57:49.003461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.726 [2024-11-17 02:57:49.003499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.726 qpair failed and we were unable to recover it. 00:37:40.726 [2024-11-17 02:57:49.003710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.726 [2024-11-17 02:57:49.003777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.726 qpair failed and we were unable to recover it. 00:37:40.726 [2024-11-17 02:57:49.003947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.726 [2024-11-17 02:57:49.004005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.726 qpair failed and we were unable to recover it. 
00:37:40.726 [2024-11-17 02:57:49.004131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.726 [2024-11-17 02:57:49.004168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.726 qpair failed and we were unable to recover it. 00:37:40.726 [2024-11-17 02:57:49.004283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.726 [2024-11-17 02:57:49.004320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.726 qpair failed and we were unable to recover it. 00:37:40.726 [2024-11-17 02:57:49.004484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.726 [2024-11-17 02:57:49.004532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.726 qpair failed and we were unable to recover it. 00:37:40.726 [2024-11-17 02:57:49.004658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.726 [2024-11-17 02:57:49.004711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.726 qpair failed and we were unable to recover it. 00:37:40.726 [2024-11-17 02:57:49.004858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.726 [2024-11-17 02:57:49.004896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.726 qpair failed and we were unable to recover it. 
00:37:40.726 [2024-11-17 02:57:49.005047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.726 [2024-11-17 02:57:49.005086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.726 qpair failed and we were unable to recover it. 00:37:40.726 [2024-11-17 02:57:49.005338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.726 [2024-11-17 02:57:49.005373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.726 qpair failed and we were unable to recover it. 00:37:40.726 [2024-11-17 02:57:49.005567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.726 [2024-11-17 02:57:49.005606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.726 qpair failed and we were unable to recover it. 00:37:40.726 [2024-11-17 02:57:49.005730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.726 [2024-11-17 02:57:49.005768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.726 qpair failed and we were unable to recover it. 00:37:40.726 [2024-11-17 02:57:49.005898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.726 [2024-11-17 02:57:49.005932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.726 qpair failed and we were unable to recover it. 
00:37:40.726 [2024-11-17 02:57:49.006124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.726 [2024-11-17 02:57:49.006191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.726 qpair failed and we were unable to recover it. 00:37:40.726 [2024-11-17 02:57:49.006364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.726 [2024-11-17 02:57:49.006402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.726 qpair failed and we were unable to recover it. 00:37:40.726 [2024-11-17 02:57:49.006535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.726 [2024-11-17 02:57:49.006575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.726 qpair failed and we were unable to recover it. 00:37:40.726 [2024-11-17 02:57:49.006693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.726 [2024-11-17 02:57:49.006732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.726 qpair failed and we were unable to recover it. 00:37:40.726 [2024-11-17 02:57:49.006881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.726 [2024-11-17 02:57:49.006926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.726 qpair failed and we were unable to recover it. 
00:37:40.726 [2024-11-17 02:57:49.007114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.726 [2024-11-17 02:57:49.007151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.726 qpair failed and we were unable to recover it. 00:37:40.726 [2024-11-17 02:57:49.007252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.726 [2024-11-17 02:57:49.007286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.726 qpair failed and we were unable to recover it. 00:37:40.726 [2024-11-17 02:57:49.007426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.726 [2024-11-17 02:57:49.007461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.726 qpair failed and we were unable to recover it. 00:37:40.726 [2024-11-17 02:57:49.007597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.726 [2024-11-17 02:57:49.007637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.726 qpair failed and we were unable to recover it. 00:37:40.726 [2024-11-17 02:57:49.007753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.726 [2024-11-17 02:57:49.007794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.726 qpair failed and we were unable to recover it. 
00:37:40.726 [2024-11-17 02:57:49.007908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.726 [2024-11-17 02:57:49.007948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.726 qpair failed and we were unable to recover it. 00:37:40.726 [2024-11-17 02:57:49.008128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.726 [2024-11-17 02:57:49.008179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.726 qpair failed and we were unable to recover it. 00:37:40.726 [2024-11-17 02:57:49.008330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.726 [2024-11-17 02:57:49.008384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.726 qpair failed and we were unable to recover it. 00:37:40.726 [2024-11-17 02:57:49.008543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.726 [2024-11-17 02:57:49.008581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.726 qpair failed and we were unable to recover it. 00:37:40.726 [2024-11-17 02:57:49.008683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.726 [2024-11-17 02:57:49.008717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.726 qpair failed and we were unable to recover it. 
00:37:40.726 [2024-11-17 02:57:49.008841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.726 [2024-11-17 02:57:49.008891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.726 qpair failed and we were unable to recover it. 00:37:40.726 [2024-11-17 02:57:49.009070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.726 [2024-11-17 02:57:49.009120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.726 qpair failed and we were unable to recover it. 00:37:40.726 [2024-11-17 02:57:49.009283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.726 [2024-11-17 02:57:49.009320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.727 qpair failed and we were unable to recover it. 00:37:40.727 [2024-11-17 02:57:49.009457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.727 [2024-11-17 02:57:49.009497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.727 qpair failed and we were unable to recover it. 00:37:40.727 [2024-11-17 02:57:49.009640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.727 [2024-11-17 02:57:49.009679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.727 qpair failed and we were unable to recover it. 
00:37:40.727 [2024-11-17 02:57:49.009874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.727 [2024-11-17 02:57:49.009940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.727 qpair failed and we were unable to recover it.
00:37:40.727 [2024-11-17 02:57:49.010127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.727 [2024-11-17 02:57:49.010163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.727 qpair failed and we were unable to recover it.
00:37:40.727 [2024-11-17 02:57:49.010277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.727 [2024-11-17 02:57:49.010313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.727 qpair failed and we were unable to recover it.
00:37:40.727 [2024-11-17 02:57:49.010469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.727 [2024-11-17 02:57:49.010507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.727 qpair failed and we were unable to recover it.
00:37:40.727 [2024-11-17 02:57:49.010732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.727 [2024-11-17 02:57:49.010771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.727 qpair failed and we were unable to recover it.
00:37:40.727 [2024-11-17 02:57:49.010917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.727 [2024-11-17 02:57:49.010956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.727 qpair failed and we were unable to recover it.
00:37:40.727 [2024-11-17 02:57:49.011122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.727 [2024-11-17 02:57:49.011180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.727 qpair failed and we were unable to recover it.
00:37:40.727 [2024-11-17 02:57:49.011303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.727 [2024-11-17 02:57:49.011345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.727 qpair failed and we were unable to recover it.
00:37:40.727 [2024-11-17 02:57:49.011513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.727 [2024-11-17 02:57:49.011577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.727 qpair failed and we were unable to recover it.
00:37:40.727 [2024-11-17 02:57:49.011768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.727 [2024-11-17 02:57:49.011822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.727 qpair failed and we were unable to recover it.
00:37:40.727 [2024-11-17 02:57:49.011935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.727 [2024-11-17 02:57:49.011976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.727 qpair failed and we were unable to recover it.
00:37:40.727 [2024-11-17 02:57:49.012146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.727 [2024-11-17 02:57:49.012201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.727 qpair failed and we were unable to recover it.
00:37:40.727 [2024-11-17 02:57:49.012345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.727 [2024-11-17 02:57:49.012381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.727 qpair failed and we were unable to recover it.
00:37:40.727 [2024-11-17 02:57:49.012577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.727 [2024-11-17 02:57:49.012618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.727 qpair failed and we were unable to recover it.
00:37:40.727 [2024-11-17 02:57:49.012768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.727 [2024-11-17 02:57:49.012808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.727 qpair failed and we were unable to recover it.
00:37:40.727 [2024-11-17 02:57:49.012972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.727 [2024-11-17 02:57:49.013012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.727 qpair failed and we were unable to recover it.
00:37:40.727 [2024-11-17 02:57:49.013225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.727 [2024-11-17 02:57:49.013265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.727 qpair failed and we were unable to recover it.
00:37:40.727 [2024-11-17 02:57:49.013381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.727 [2024-11-17 02:57:49.013420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.727 qpair failed and we were unable to recover it.
00:37:40.727 [2024-11-17 02:57:49.013567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.727 [2024-11-17 02:57:49.013607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.727 qpair failed and we were unable to recover it.
00:37:40.727 [2024-11-17 02:57:49.013721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.727 [2024-11-17 02:57:49.013761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.727 qpair failed and we were unable to recover it.
00:37:40.727 [2024-11-17 02:57:49.013931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.727 [2024-11-17 02:57:49.013969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.727 qpair failed and we were unable to recover it.
00:37:40.727 [2024-11-17 02:57:49.014166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.727 [2024-11-17 02:57:49.014223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.727 qpair failed and we were unable to recover it.
00:37:40.727 [2024-11-17 02:57:49.014379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.727 [2024-11-17 02:57:49.014434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.727 qpair failed and we were unable to recover it.
00:37:40.727 [2024-11-17 02:57:49.014617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.727 [2024-11-17 02:57:49.014672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.727 qpair failed and we were unable to recover it.
00:37:40.727 [2024-11-17 02:57:49.014811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.727 [2024-11-17 02:57:49.014853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.727 qpair failed and we were unable to recover it.
00:37:40.727 [2024-11-17 02:57:49.014999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.727 [2024-11-17 02:57:49.015041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.727 qpair failed and we were unable to recover it.
00:37:40.727 [2024-11-17 02:57:49.015161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.727 [2024-11-17 02:57:49.015198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.727 qpair failed and we were unable to recover it.
00:37:40.727 [2024-11-17 02:57:49.015313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.727 [2024-11-17 02:57:49.015355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.727 qpair failed and we were unable to recover it.
00:37:40.727 [2024-11-17 02:57:49.015521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.727 [2024-11-17 02:57:49.015556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.727 qpair failed and we were unable to recover it.
00:37:40.727 [2024-11-17 02:57:49.015793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.727 [2024-11-17 02:57:49.015835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.727 qpair failed and we were unable to recover it.
00:37:40.727 [2024-11-17 02:57:49.015994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.727 [2024-11-17 02:57:49.016032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.727 qpair failed and we were unable to recover it.
00:37:40.727 [2024-11-17 02:57:49.016174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.728 [2024-11-17 02:57:49.016208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.728 qpair failed and we were unable to recover it.
00:37:40.728 [2024-11-17 02:57:49.016438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.728 [2024-11-17 02:57:49.016500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.728 qpair failed and we were unable to recover it.
00:37:40.728 [2024-11-17 02:57:49.016688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.728 [2024-11-17 02:57:49.016742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.728 qpair failed and we were unable to recover it.
00:37:40.728 [2024-11-17 02:57:49.016885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.728 [2024-11-17 02:57:49.016930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.728 qpair failed and we were unable to recover it.
00:37:40.728 [2024-11-17 02:57:49.017105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.728 [2024-11-17 02:57:49.017142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.728 qpair failed and we were unable to recover it.
00:37:40.728 [2024-11-17 02:57:49.017248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.728 [2024-11-17 02:57:49.017282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.728 qpair failed and we were unable to recover it.
00:37:40.728 [2024-11-17 02:57:49.017430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.728 [2024-11-17 02:57:49.017469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.728 qpair failed and we were unable to recover it.
00:37:40.728 [2024-11-17 02:57:49.017710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.728 [2024-11-17 02:57:49.017749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.728 qpair failed and we were unable to recover it.
00:37:40.728 [2024-11-17 02:57:49.017921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.728 [2024-11-17 02:57:49.017968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.728 qpair failed and we were unable to recover it.
00:37:40.728 [2024-11-17 02:57:49.018144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.728 [2024-11-17 02:57:49.018180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.728 qpair failed and we were unable to recover it.
00:37:40.728 [2024-11-17 02:57:49.018311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.728 [2024-11-17 02:57:49.018345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.728 qpair failed and we were unable to recover it.
00:37:40.728 [2024-11-17 02:57:49.018490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.728 [2024-11-17 02:57:49.018526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.728 qpair failed and we were unable to recover it.
00:37:40.728 [2024-11-17 02:57:49.018678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.728 [2024-11-17 02:57:49.018717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.728 qpair failed and we were unable to recover it.
00:37:40.728 [2024-11-17 02:57:49.018880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.728 [2024-11-17 02:57:49.018928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.728 qpair failed and we were unable to recover it.
00:37:40.728 [2024-11-17 02:57:49.019063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.728 [2024-11-17 02:57:49.019112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.728 qpair failed and we were unable to recover it.
00:37:40.728 [2024-11-17 02:57:49.019308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.728 [2024-11-17 02:57:49.019359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.728 qpair failed and we were unable to recover it.
00:37:40.728 [2024-11-17 02:57:49.019528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.728 [2024-11-17 02:57:49.019571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.728 qpair failed and we were unable to recover it.
00:37:40.728 [2024-11-17 02:57:49.019724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.728 [2024-11-17 02:57:49.019765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.728 qpair failed and we were unable to recover it.
00:37:40.728 [2024-11-17 02:57:49.019912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.728 [2024-11-17 02:57:49.019951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.728 qpair failed and we were unable to recover it.
00:37:40.728 [2024-11-17 02:57:49.020126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.728 [2024-11-17 02:57:49.020177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.728 qpair failed and we were unable to recover it.
00:37:40.728 [2024-11-17 02:57:49.020315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.728 [2024-11-17 02:57:49.020363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.728 qpair failed and we were unable to recover it.
00:37:40.728 [2024-11-17 02:57:49.020488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.728 [2024-11-17 02:57:49.020543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.728 qpair failed and we were unable to recover it.
00:37:40.728 [2024-11-17 02:57:49.020665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.728 [2024-11-17 02:57:49.020704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.728 qpair failed and we were unable to recover it.
00:37:40.728 [2024-11-17 02:57:49.020903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.728 [2024-11-17 02:57:49.020950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.728 qpair failed and we were unable to recover it.
00:37:40.728 [2024-11-17 02:57:49.021116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.728 [2024-11-17 02:57:49.021169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.728 qpair failed and we were unable to recover it.
00:37:40.728 [2024-11-17 02:57:49.021306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.728 [2024-11-17 02:57:49.021341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.728 qpair failed and we were unable to recover it.
00:37:40.728 [2024-11-17 02:57:49.021499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.728 [2024-11-17 02:57:49.021535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.728 qpair failed and we were unable to recover it.
00:37:40.728 [2024-11-17 02:57:49.021719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.728 [2024-11-17 02:57:49.021775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.728 qpair failed and we were unable to recover it.
00:37:40.728 [2024-11-17 02:57:49.021965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.728 [2024-11-17 02:57:49.022001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.728 qpair failed and we were unable to recover it.
00:37:40.728 [2024-11-17 02:57:49.022138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.728 [2024-11-17 02:57:49.022173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.728 qpair failed and we were unable to recover it.
00:37:40.728 [2024-11-17 02:57:49.022307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.728 [2024-11-17 02:57:49.022341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.728 qpair failed and we were unable to recover it.
00:37:40.728 [2024-11-17 02:57:49.022565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.728 [2024-11-17 02:57:49.022601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.728 qpair failed and we were unable to recover it.
00:37:40.728 [2024-11-17 02:57:49.022791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.728 [2024-11-17 02:57:49.022834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.728 qpair failed and we were unable to recover it.
00:37:40.728 [2024-11-17 02:57:49.022971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.728 [2024-11-17 02:57:49.023016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.728 qpair failed and we were unable to recover it.
00:37:40.728 [2024-11-17 02:57:49.023140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.728 [2024-11-17 02:57:49.023175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.728 qpair failed and we were unable to recover it.
00:37:40.728 [2024-11-17 02:57:49.023352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.728 [2024-11-17 02:57:49.023393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.728 qpair failed and we were unable to recover it.
00:37:40.728 [2024-11-17 02:57:49.023502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.728 [2024-11-17 02:57:49.023556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.728 qpair failed and we were unable to recover it.
00:37:40.728 [2024-11-17 02:57:49.023728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.728 [2024-11-17 02:57:49.023766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.728 qpair failed and we were unable to recover it.
00:37:40.728 [2024-11-17 02:57:49.023939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.729 [2024-11-17 02:57:49.024014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.729 qpair failed and we were unable to recover it.
00:37:40.729 [2024-11-17 02:57:49.024149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.729 [2024-11-17 02:57:49.024188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.729 qpair failed and we were unable to recover it.
00:37:40.729 [2024-11-17 02:57:49.024352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.729 [2024-11-17 02:57:49.024425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.729 qpair failed and we were unable to recover it.
00:37:40.729 [2024-11-17 02:57:49.024613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.729 [2024-11-17 02:57:49.024671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.729 qpair failed and we were unable to recover it.
00:37:40.729 [2024-11-17 02:57:49.024861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.729 [2024-11-17 02:57:49.024922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.729 qpair failed and we were unable to recover it.
00:37:40.729 [2024-11-17 02:57:49.025112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.729 [2024-11-17 02:57:49.025148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.729 qpair failed and we were unable to recover it.
00:37:40.729 [2024-11-17 02:57:49.025260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.729 [2024-11-17 02:57:49.025294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.729 qpair failed and we were unable to recover it.
00:37:40.729 [2024-11-17 02:57:49.025431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.729 [2024-11-17 02:57:49.025490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.729 qpair failed and we were unable to recover it.
00:37:40.729 [2024-11-17 02:57:49.025701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.729 [2024-11-17 02:57:49.025745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.729 qpair failed and we were unable to recover it.
00:37:40.729 [2024-11-17 02:57:49.025942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.729 [2024-11-17 02:57:49.025982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.729 qpair failed and we were unable to recover it.
00:37:40.729 [2024-11-17 02:57:49.026135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.729 [2024-11-17 02:57:49.026201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.729 qpair failed and we were unable to recover it.
00:37:40.729 [2024-11-17 02:57:49.026350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.729 [2024-11-17 02:57:49.026388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.729 qpair failed and we were unable to recover it.
00:37:40.729 [2024-11-17 02:57:49.026507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.729 [2024-11-17 02:57:49.026545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.729 qpair failed and we were unable to recover it.
00:37:40.729 [2024-11-17 02:57:49.026657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.729 [2024-11-17 02:57:49.026694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.729 qpair failed and we were unable to recover it.
00:37:40.729 [2024-11-17 02:57:49.026850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.729 [2024-11-17 02:57:49.026902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.729 qpair failed and we were unable to recover it.
00:37:40.729 [2024-11-17 02:57:49.027084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.729 [2024-11-17 02:57:49.027166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.729 qpair failed and we were unable to recover it.
00:37:40.729 [2024-11-17 02:57:49.027310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.729 [2024-11-17 02:57:49.027348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.729 qpair failed and we were unable to recover it.
00:37:40.729 [2024-11-17 02:57:49.027476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.729 [2024-11-17 02:57:49.027521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.729 qpair failed and we were unable to recover it.
00:37:40.729 [2024-11-17 02:57:49.027632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.729 [2024-11-17 02:57:49.027670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.729 qpair failed and we were unable to recover it.
00:37:40.729 [2024-11-17 02:57:49.027819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.729 [2024-11-17 02:57:49.027858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.729 qpair failed and we were unable to recover it.
00:37:40.729 [2024-11-17 02:57:49.028011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.729 [2024-11-17 02:57:49.028051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.729 qpair failed and we were unable to recover it.
00:37:40.729 [2024-11-17 02:57:49.028173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.729 [2024-11-17 02:57:49.028209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.729 qpair failed and we were unable to recover it.
00:37:40.729 [2024-11-17 02:57:49.028336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.729 [2024-11-17 02:57:49.028386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.729 qpair failed and we were unable to recover it.
00:37:40.729 [2024-11-17 02:57:49.028613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.729 [2024-11-17 02:57:49.028653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.729 qpair failed and we were unable to recover it.
00:37:40.729 [2024-11-17 02:57:49.028810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.729 [2024-11-17 02:57:49.028849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.729 qpair failed and we were unable to recover it.
00:37:40.729 [2024-11-17 02:57:49.028966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.729 [2024-11-17 02:57:49.029021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.729 qpair failed and we were unable to recover it.
00:37:40.729 [2024-11-17 02:57:49.029190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.729 [2024-11-17 02:57:49.029226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.729 qpair failed and we were unable to recover it. 00:37:40.729 [2024-11-17 02:57:49.029389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.729 [2024-11-17 02:57:49.029424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.729 qpair failed and we were unable to recover it. 00:37:40.729 [2024-11-17 02:57:49.029534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.729 [2024-11-17 02:57:49.029569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.729 qpair failed and we were unable to recover it. 00:37:40.729 [2024-11-17 02:57:49.029766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.729 [2024-11-17 02:57:49.029824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.729 qpair failed and we were unable to recover it. 00:37:40.729 [2024-11-17 02:57:49.029990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.729 [2024-11-17 02:57:49.030025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.729 qpair failed and we were unable to recover it. 
00:37:40.729 [2024-11-17 02:57:49.030186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.729 [2024-11-17 02:57:49.030222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.729 qpair failed and we were unable to recover it. 00:37:40.729 [2024-11-17 02:57:49.030383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.729 [2024-11-17 02:57:49.030433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.729 qpair failed and we were unable to recover it. 00:37:40.729 [2024-11-17 02:57:49.030634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.730 [2024-11-17 02:57:49.030699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.730 qpair failed and we were unable to recover it. 00:37:40.730 [2024-11-17 02:57:49.030808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.730 [2024-11-17 02:57:49.030845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.730 qpair failed and we were unable to recover it. 00:37:40.730 [2024-11-17 02:57:49.030984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.730 [2024-11-17 02:57:49.031030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.730 qpair failed and we were unable to recover it. 
00:37:40.730 [2024-11-17 02:57:49.031254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.730 [2024-11-17 02:57:49.031311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.730 qpair failed and we were unable to recover it. 00:37:40.730 [2024-11-17 02:57:49.031504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.730 [2024-11-17 02:57:49.031568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.730 qpair failed and we were unable to recover it. 00:37:40.730 [2024-11-17 02:57:49.031741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.730 [2024-11-17 02:57:49.031800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.730 qpair failed and we were unable to recover it. 00:37:40.730 [2024-11-17 02:57:49.031983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.730 [2024-11-17 02:57:49.032019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.730 qpair failed and we were unable to recover it. 00:37:40.730 [2024-11-17 02:57:49.032191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.730 [2024-11-17 02:57:49.032248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.730 qpair failed and we were unable to recover it. 
00:37:40.730 [2024-11-17 02:57:49.032417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.730 [2024-11-17 02:57:49.032469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.730 qpair failed and we were unable to recover it. 00:37:40.730 [2024-11-17 02:57:49.032626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.730 [2024-11-17 02:57:49.032664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.730 qpair failed and we were unable to recover it. 00:37:40.730 [2024-11-17 02:57:49.032795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.730 [2024-11-17 02:57:49.032836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.730 qpair failed and we were unable to recover it. 00:37:40.730 [2024-11-17 02:57:49.032958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.730 [2024-11-17 02:57:49.033005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.730 qpair failed and we were unable to recover it. 00:37:40.730 [2024-11-17 02:57:49.033210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.730 [2024-11-17 02:57:49.033247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.730 qpair failed and we were unable to recover it. 
00:37:40.730 [2024-11-17 02:57:49.033386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.730 [2024-11-17 02:57:49.033441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.730 qpair failed and we were unable to recover it. 00:37:40.730 [2024-11-17 02:57:49.033551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.730 [2024-11-17 02:57:49.033586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.730 qpair failed and we were unable to recover it. 00:37:40.730 [2024-11-17 02:57:49.033744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.730 [2024-11-17 02:57:49.033798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.730 qpair failed and we were unable to recover it. 00:37:40.730 [2024-11-17 02:57:49.033912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.730 [2024-11-17 02:57:49.033949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.730 qpair failed and we were unable to recover it. 00:37:40.730 [2024-11-17 02:57:49.034070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.730 [2024-11-17 02:57:49.034145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.730 qpair failed and we were unable to recover it. 
00:37:40.730 [2024-11-17 02:57:49.034267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.730 [2024-11-17 02:57:49.034303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.730 qpair failed and we were unable to recover it. 00:37:40.730 [2024-11-17 02:57:49.034443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.730 [2024-11-17 02:57:49.034483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.730 qpair failed and we were unable to recover it. 00:37:40.730 [2024-11-17 02:57:49.034608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.730 [2024-11-17 02:57:49.034644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.730 qpair failed and we were unable to recover it. 00:37:40.730 [2024-11-17 02:57:49.034809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.730 [2024-11-17 02:57:49.034844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.730 qpair failed and we were unable to recover it. 00:37:40.730 [2024-11-17 02:57:49.034984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.730 [2024-11-17 02:57:49.035021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.730 qpair failed and we were unable to recover it. 
00:37:40.730 [2024-11-17 02:57:49.035154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.730 [2024-11-17 02:57:49.035189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.730 qpair failed and we were unable to recover it. 00:37:40.730 [2024-11-17 02:57:49.035336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.730 [2024-11-17 02:57:49.035373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.730 qpair failed and we were unable to recover it. 00:37:40.730 [2024-11-17 02:57:49.035575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.730 [2024-11-17 02:57:49.035615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.730 qpair failed and we were unable to recover it. 00:37:40.730 [2024-11-17 02:57:49.035846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.730 [2024-11-17 02:57:49.035889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.730 qpair failed and we were unable to recover it. 00:37:40.730 [2024-11-17 02:57:49.036083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.730 [2024-11-17 02:57:49.036149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.730 qpair failed and we were unable to recover it. 
00:37:40.730 [2024-11-17 02:57:49.036331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.730 [2024-11-17 02:57:49.036382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.730 qpair failed and we were unable to recover it. 00:37:40.730 [2024-11-17 02:57:49.036585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.730 [2024-11-17 02:57:49.036634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.731 qpair failed and we were unable to recover it. 00:37:40.731 [2024-11-17 02:57:49.036825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.731 [2024-11-17 02:57:49.036866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.731 qpair failed and we were unable to recover it. 00:37:40.731 [2024-11-17 02:57:49.036997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.731 [2024-11-17 02:57:49.037049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.731 qpair failed and we were unable to recover it. 00:37:40.731 [2024-11-17 02:57:49.037221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.731 [2024-11-17 02:57:49.037258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.731 qpair failed and we were unable to recover it. 
00:37:40.731 [2024-11-17 02:57:49.037445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.731 [2024-11-17 02:57:49.037486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.731 qpair failed and we were unable to recover it. 00:37:40.731 [2024-11-17 02:57:49.037677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.731 [2024-11-17 02:57:49.037735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.731 qpair failed and we were unable to recover it. 00:37:40.731 [2024-11-17 02:57:49.037965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.731 [2024-11-17 02:57:49.038004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.731 qpair failed and we were unable to recover it. 00:37:40.731 [2024-11-17 02:57:49.038167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.731 [2024-11-17 02:57:49.038202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.731 qpair failed and we were unable to recover it. 00:37:40.731 [2024-11-17 02:57:49.038361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.731 [2024-11-17 02:57:49.038415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.731 qpair failed and we were unable to recover it. 
00:37:40.731 [2024-11-17 02:57:49.038570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.731 [2024-11-17 02:57:49.038605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.731 qpair failed and we were unable to recover it. 00:37:40.731 [2024-11-17 02:57:49.038796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.731 [2024-11-17 02:57:49.038864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.731 qpair failed and we were unable to recover it. 00:37:40.731 [2024-11-17 02:57:49.039007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.731 [2024-11-17 02:57:49.039058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.731 qpair failed and we were unable to recover it. 00:37:40.731 [2024-11-17 02:57:49.039210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.731 [2024-11-17 02:57:49.039246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.731 qpair failed and we were unable to recover it. 00:37:40.731 [2024-11-17 02:57:49.039348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.731 [2024-11-17 02:57:49.039398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.731 qpair failed and we were unable to recover it. 
00:37:40.731 [2024-11-17 02:57:49.039526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.731 [2024-11-17 02:57:49.039565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.731 qpair failed and we were unable to recover it. 00:37:40.731 [2024-11-17 02:57:49.039736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.731 [2024-11-17 02:57:49.039776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.731 qpair failed and we were unable to recover it. 00:37:40.731 [2024-11-17 02:57:49.039934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.731 [2024-11-17 02:57:49.039991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.731 qpair failed and we were unable to recover it. 00:37:40.731 [2024-11-17 02:57:49.040152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.731 [2024-11-17 02:57:49.040188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.731 qpair failed and we were unable to recover it. 00:37:40.731 [2024-11-17 02:57:49.040296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.731 [2024-11-17 02:57:49.040330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.731 qpair failed and we were unable to recover it. 
00:37:40.731 [2024-11-17 02:57:49.040464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.731 [2024-11-17 02:57:49.040517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.731 qpair failed and we were unable to recover it. 00:37:40.731 [2024-11-17 02:57:49.040664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.731 [2024-11-17 02:57:49.040703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.731 qpair failed and we were unable to recover it. 00:37:40.731 [2024-11-17 02:57:49.040953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.731 [2024-11-17 02:57:49.040991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.731 qpair failed and we were unable to recover it. 00:37:40.731 [2024-11-17 02:57:49.041129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.731 [2024-11-17 02:57:49.041196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.731 qpair failed and we were unable to recover it. 00:37:40.731 [2024-11-17 02:57:49.041356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.731 [2024-11-17 02:57:49.041407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.731 qpair failed and we were unable to recover it. 
00:37:40.731 [2024-11-17 02:57:49.041533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.731 [2024-11-17 02:57:49.041571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.731 qpair failed and we were unable to recover it. 00:37:40.731 [2024-11-17 02:57:49.041712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.731 [2024-11-17 02:57:49.041767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.731 qpair failed and we were unable to recover it. 00:37:40.731 [2024-11-17 02:57:49.041944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.731 [2024-11-17 02:57:49.041982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.731 qpair failed and we were unable to recover it. 00:37:40.731 [2024-11-17 02:57:49.042151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.731 [2024-11-17 02:57:49.042188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.731 qpair failed and we were unable to recover it. 00:37:40.731 [2024-11-17 02:57:49.042307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.731 [2024-11-17 02:57:49.042344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.731 qpair failed and we were unable to recover it. 
00:37:40.731 [2024-11-17 02:57:49.042479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.731 [2024-11-17 02:57:49.042514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.731 qpair failed and we were unable to recover it. 00:37:40.731 [2024-11-17 02:57:49.042736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.731 [2024-11-17 02:57:49.042774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.731 qpair failed and we were unable to recover it. 00:37:40.731 [2024-11-17 02:57:49.042950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.731 [2024-11-17 02:57:49.042988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.731 qpair failed and we were unable to recover it. 00:37:40.731 [2024-11-17 02:57:49.043178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.731 [2024-11-17 02:57:49.043228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.731 qpair failed and we were unable to recover it. 00:37:40.731 [2024-11-17 02:57:49.043374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.731 [2024-11-17 02:57:49.043412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.731 qpair failed and we were unable to recover it. 
00:37:40.731 [2024-11-17 02:57:49.043557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.731 [2024-11-17 02:57:49.043612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.731 qpair failed and we were unable to recover it. 00:37:40.731 [2024-11-17 02:57:49.043826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.732 [2024-11-17 02:57:49.043884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.732 qpair failed and we were unable to recover it. 00:37:40.732 [2024-11-17 02:57:49.044047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.732 [2024-11-17 02:57:49.044082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.732 qpair failed and we were unable to recover it. 00:37:40.732 [2024-11-17 02:57:49.044203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.732 [2024-11-17 02:57:49.044239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.732 qpair failed and we were unable to recover it. 00:37:40.732 [2024-11-17 02:57:49.044372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.732 [2024-11-17 02:57:49.044407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.732 qpair failed and we were unable to recover it. 
00:37:40.732 [2024-11-17 02:57:49.044590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.732 [2024-11-17 02:57:49.044630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.732 qpair failed and we were unable to recover it. 00:37:40.732 [2024-11-17 02:57:49.044779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.732 [2024-11-17 02:57:49.044825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.732 qpair failed and we were unable to recover it. 00:37:40.732 [2024-11-17 02:57:49.045020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.732 [2024-11-17 02:57:49.045056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.732 qpair failed and we were unable to recover it. 00:37:40.732 [2024-11-17 02:57:49.045174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.732 [2024-11-17 02:57:49.045218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.732 qpair failed and we were unable to recover it. 00:37:40.732 [2024-11-17 02:57:49.045354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.732 [2024-11-17 02:57:49.045389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.732 qpair failed and we were unable to recover it. 
00:37:40.732 [2024-11-17 02:57:49.045546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.732 [2024-11-17 02:57:49.045606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.732 qpair failed and we were unable to recover it. 00:37:40.732 [2024-11-17 02:57:49.045763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.732 [2024-11-17 02:57:49.045817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.732 qpair failed and we were unable to recover it. 00:37:40.732 [2024-11-17 02:57:49.045988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.732 [2024-11-17 02:57:49.046027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.732 qpair failed and we were unable to recover it. 00:37:40.732 [2024-11-17 02:57:49.046153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.732 [2024-11-17 02:57:49.046205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.732 qpair failed and we were unable to recover it. 00:37:40.732 [2024-11-17 02:57:49.046338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.732 [2024-11-17 02:57:49.046372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.732 qpair failed and we were unable to recover it. 
00:37:40.732 [2024-11-17 02:57:49.046509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.732 [2024-11-17 02:57:49.046564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.732 qpair failed and we were unable to recover it. 00:37:40.732 [2024-11-17 02:57:49.046790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.732 [2024-11-17 02:57:49.046827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.732 qpair failed and we were unable to recover it. 00:37:40.732 [2024-11-17 02:57:49.046987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.732 [2024-11-17 02:57:49.047021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.732 qpair failed and we were unable to recover it. 00:37:40.732 [2024-11-17 02:57:49.047177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.732 [2024-11-17 02:57:49.047213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.732 qpair failed and we were unable to recover it. 00:37:40.732 [2024-11-17 02:57:49.047322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.732 [2024-11-17 02:57:49.047365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.732 qpair failed and we were unable to recover it. 
00:37:40.732 [2024-11-17 02:57:49.047575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.732 [2024-11-17 02:57:49.047643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.732 qpair failed and we were unable to recover it. 00:37:40.732 [2024-11-17 02:57:49.047784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.732 [2024-11-17 02:57:49.047822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.732 qpair failed and we were unable to recover it. 00:37:40.732 [2024-11-17 02:57:49.047988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.732 [2024-11-17 02:57:49.048022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.732 qpair failed and we were unable to recover it. 00:37:40.732 [2024-11-17 02:57:49.048142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.732 [2024-11-17 02:57:49.048177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.732 qpair failed and we were unable to recover it. 00:37:40.732 [2024-11-17 02:57:49.048314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.732 [2024-11-17 02:57:49.048354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.732 qpair failed and we were unable to recover it. 
00:37:40.732 [2024-11-17 02:57:49.048510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.732 [2024-11-17 02:57:49.048548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.732 qpair failed and we were unable to recover it.
00:37:40.732 [2024-11-17 02:57:49.048750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.732 [2024-11-17 02:57:49.048847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.732 qpair failed and we were unable to recover it.
00:37:40.732 [2024-11-17 02:57:49.049014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.732 [2024-11-17 02:57:49.049069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.732 qpair failed and we were unable to recover it.
00:37:40.732 [2024-11-17 02:57:49.049226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.732 [2024-11-17 02:57:49.049265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.732 qpair failed and we were unable to recover it.
00:37:40.732 [2024-11-17 02:57:49.049468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.732 [2024-11-17 02:57:49.049518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.732 qpair failed and we were unable to recover it.
00:37:40.732 [2024-11-17 02:57:49.049758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.732 [2024-11-17 02:57:49.049796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.732 qpair failed and we were unable to recover it.
00:37:40.732 [2024-11-17 02:57:49.049971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.732 [2024-11-17 02:57:49.050009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.732 qpair failed and we were unable to recover it.
00:37:40.732 [2024-11-17 02:57:49.050179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.732 [2024-11-17 02:57:49.050215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.732 qpair failed and we were unable to recover it.
00:37:40.732 [2024-11-17 02:57:49.050352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.732 [2024-11-17 02:57:49.050391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.732 qpair failed and we were unable to recover it.
00:37:40.732 [2024-11-17 02:57:49.050623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.732 [2024-11-17 02:57:49.050662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.732 qpair failed and we were unable to recover it.
00:37:40.732 [2024-11-17 02:57:49.050771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.732 [2024-11-17 02:57:49.050810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.732 qpair failed and we were unable to recover it.
00:37:40.732 [2024-11-17 02:57:49.051004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.732 [2024-11-17 02:57:49.051043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.732 qpair failed and we were unable to recover it.
00:37:40.732 [2024-11-17 02:57:49.051210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.732 [2024-11-17 02:57:49.051260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.732 qpair failed and we were unable to recover it.
00:37:40.732 [2024-11-17 02:57:49.051414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.732 [2024-11-17 02:57:49.051469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.733 qpair failed and we were unable to recover it.
00:37:40.733 [2024-11-17 02:57:49.051639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.733 [2024-11-17 02:57:49.051701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.733 qpair failed and we were unable to recover it.
00:37:40.733 [2024-11-17 02:57:49.051947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.733 [2024-11-17 02:57:49.052016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.733 qpair failed and we were unable to recover it.
00:37:40.733 [2024-11-17 02:57:49.052210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.733 [2024-11-17 02:57:49.052247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.733 qpair failed and we were unable to recover it.
00:37:40.733 [2024-11-17 02:57:49.052384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.733 [2024-11-17 02:57:49.052420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.733 qpair failed and we were unable to recover it.
00:37:40.733 [2024-11-17 02:57:49.052591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.733 [2024-11-17 02:57:49.052693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.733 qpair failed and we were unable to recover it.
00:37:40.733 [2024-11-17 02:57:49.052867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.733 [2024-11-17 02:57:49.052907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.733 qpair failed and we were unable to recover it.
00:37:40.733 [2024-11-17 02:57:49.053038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.733 [2024-11-17 02:57:49.053078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.733 qpair failed and we were unable to recover it.
00:37:40.733 [2024-11-17 02:57:49.053216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.733 [2024-11-17 02:57:49.053257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.733 qpair failed and we were unable to recover it.
00:37:40.733 [2024-11-17 02:57:49.053409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.733 [2024-11-17 02:57:49.053445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.733 qpair failed and we were unable to recover it.
00:37:40.733 [2024-11-17 02:57:49.053585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.733 [2024-11-17 02:57:49.053620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.733 qpair failed and we were unable to recover it.
00:37:40.733 [2024-11-17 02:57:49.053735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.733 [2024-11-17 02:57:49.053786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.733 qpair failed and we were unable to recover it.
00:37:40.733 [2024-11-17 02:57:49.053930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.733 [2024-11-17 02:57:49.053967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.733 qpair failed and we were unable to recover it.
00:37:40.733 [2024-11-17 02:57:49.054152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.733 [2024-11-17 02:57:49.054207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.733 qpair failed and we were unable to recover it.
00:37:40.733 [2024-11-17 02:57:49.054368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.733 [2024-11-17 02:57:49.054409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.733 qpair failed and we were unable to recover it.
00:37:40.733 [2024-11-17 02:57:49.054553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.733 [2024-11-17 02:57:49.054594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.733 qpair failed and we were unable to recover it.
00:37:40.733 [2024-11-17 02:57:49.054803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.733 [2024-11-17 02:57:49.054839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.733 qpair failed and we were unable to recover it.
00:37:40.733 [2024-11-17 02:57:49.054965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.733 [2024-11-17 02:57:49.055006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.733 qpair failed and we were unable to recover it.
00:37:40.733 [2024-11-17 02:57:49.055170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.733 [2024-11-17 02:57:49.055206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.733 qpair failed and we were unable to recover it.
00:37:40.733 [2024-11-17 02:57:49.055342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.733 [2024-11-17 02:57:49.055389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.733 qpair failed and we were unable to recover it.
00:37:40.733 [2024-11-17 02:57:49.055509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.733 [2024-11-17 02:57:49.055550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.733 qpair failed and we were unable to recover it.
00:37:40.733 [2024-11-17 02:57:49.055697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.733 [2024-11-17 02:57:49.055737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.733 qpair failed and we were unable to recover it.
00:37:40.733 [2024-11-17 02:57:49.055957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.733 [2024-11-17 02:57:49.056012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.733 qpair failed and we were unable to recover it.
00:37:40.733 [2024-11-17 02:57:49.056199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.733 [2024-11-17 02:57:49.056253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.733 qpair failed and we were unable to recover it.
00:37:40.733 [2024-11-17 02:57:49.056414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.733 [2024-11-17 02:57:49.056466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.733 qpair failed and we were unable to recover it.
00:37:40.733 [2024-11-17 02:57:49.056589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.733 [2024-11-17 02:57:49.056646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.733 qpair failed and we were unable to recover it.
00:37:40.733 [2024-11-17 02:57:49.056747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.733 [2024-11-17 02:57:49.056784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.733 qpair failed and we were unable to recover it.
00:37:40.733 [2024-11-17 02:57:49.056918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.733 [2024-11-17 02:57:49.056960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.733 qpair failed and we were unable to recover it.
00:37:40.733 [2024-11-17 02:57:49.057104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.733 [2024-11-17 02:57:49.057157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.733 qpair failed and we were unable to recover it.
00:37:40.733 [2024-11-17 02:57:49.057291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.733 [2024-11-17 02:57:49.057345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.733 qpair failed and we were unable to recover it.
00:37:40.733 [2024-11-17 02:57:49.057616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.733 [2024-11-17 02:57:49.057678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.733 qpair failed and we were unable to recover it.
00:37:40.733 [2024-11-17 02:57:49.057872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.733 [2024-11-17 02:57:49.057909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.733 qpair failed and we were unable to recover it.
00:37:40.733 [2024-11-17 02:57:49.058053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.733 [2024-11-17 02:57:49.058088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.733 qpair failed and we were unable to recover it.
00:37:40.733 [2024-11-17 02:57:49.058266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.733 [2024-11-17 02:57:49.058321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.733 qpair failed and we were unable to recover it.
00:37:40.733 [2024-11-17 02:57:49.058556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.733 [2024-11-17 02:57:49.058613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.733 qpair failed and we were unable to recover it.
00:37:40.733 [2024-11-17 02:57:49.058777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.733 [2024-11-17 02:57:49.058835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.733 qpair failed and we were unable to recover it.
00:37:40.734 [2024-11-17 02:57:49.058972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.734 [2024-11-17 02:57:49.059016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.734 qpair failed and we were unable to recover it.
00:37:40.734 [2024-11-17 02:57:49.059198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.734 [2024-11-17 02:57:49.059251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.734 qpair failed and we were unable to recover it.
00:37:40.734 [2024-11-17 02:57:49.059405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.734 [2024-11-17 02:57:49.059468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.734 qpair failed and we were unable to recover it.
00:37:40.734 [2024-11-17 02:57:49.059613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.734 [2024-11-17 02:57:49.059666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.734 qpair failed and we were unable to recover it.
00:37:40.734 [2024-11-17 02:57:49.059805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.734 [2024-11-17 02:57:49.059845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.734 qpair failed and we were unable to recover it.
00:37:40.734 [2024-11-17 02:57:49.060007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.734 [2024-11-17 02:57:49.060044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.734 qpair failed and we were unable to recover it.
00:37:40.734 [2024-11-17 02:57:49.060179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.734 [2024-11-17 02:57:49.060248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.734 qpair failed and we were unable to recover it.
00:37:40.734 [2024-11-17 02:57:49.060416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.734 [2024-11-17 02:57:49.060457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.734 qpair failed and we were unable to recover it.
00:37:40.734 [2024-11-17 02:57:49.060628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.734 [2024-11-17 02:57:49.060687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.734 qpair failed and we were unable to recover it.
00:37:40.734 [2024-11-17 02:57:49.060804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.734 [2024-11-17 02:57:49.060843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.734 qpair failed and we were unable to recover it.
00:37:40.734 [2024-11-17 02:57:49.061005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.734 [2024-11-17 02:57:49.061041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.734 qpair failed and we were unable to recover it.
00:37:40.734 [2024-11-17 02:57:49.061188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.734 [2024-11-17 02:57:49.061243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.734 qpair failed and we were unable to recover it.
00:37:40.734 [2024-11-17 02:57:49.061437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.734 [2024-11-17 02:57:49.061499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.734 qpair failed and we were unable to recover it.
00:37:40.734 [2024-11-17 02:57:49.061720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.734 [2024-11-17 02:57:49.061777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.734 qpair failed and we were unable to recover it.
00:37:40.734 [2024-11-17 02:57:49.061976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.734 [2024-11-17 02:57:49.062033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.734 qpair failed and we were unable to recover it.
00:37:40.734 [2024-11-17 02:57:49.062227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.734 [2024-11-17 02:57:49.062267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.734 qpair failed and we were unable to recover it.
00:37:40.734 [2024-11-17 02:57:49.062474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.734 [2024-11-17 02:57:49.062514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.734 qpair failed and we were unable to recover it.
00:37:40.734 [2024-11-17 02:57:49.062741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.734 [2024-11-17 02:57:49.062809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.734 qpair failed and we were unable to recover it.
00:37:40.734 [2024-11-17 02:57:49.062960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.734 [2024-11-17 02:57:49.062997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.734 qpair failed and we were unable to recover it.
00:37:40.734 [2024-11-17 02:57:49.063172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.734 [2024-11-17 02:57:49.063225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.734 qpair failed and we were unable to recover it.
00:37:40.734 [2024-11-17 02:57:49.063374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.734 [2024-11-17 02:57:49.063426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.734 qpair failed and we were unable to recover it.
00:37:40.734 [2024-11-17 02:57:49.063602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.734 [2024-11-17 02:57:49.063637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.734 qpair failed and we were unable to recover it.
00:37:40.734 [2024-11-17 02:57:49.063788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.734 [2024-11-17 02:57:49.063835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.734 qpair failed and we were unable to recover it.
00:37:40.734 [2024-11-17 02:57:49.063982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.734 [2024-11-17 02:57:49.064028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.734 qpair failed and we were unable to recover it.
00:37:40.734 [2024-11-17 02:57:49.064149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.734 [2024-11-17 02:57:49.064184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.734 qpair failed and we were unable to recover it.
00:37:40.734 [2024-11-17 02:57:49.064361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.734 [2024-11-17 02:57:49.064412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.734 qpair failed and we were unable to recover it.
00:37:40.734 [2024-11-17 02:57:49.064571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.734 [2024-11-17 02:57:49.064618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.734 qpair failed and we were unable to recover it.
00:37:40.734 [2024-11-17 02:57:49.064786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.734 [2024-11-17 02:57:49.064826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.734 qpair failed and we were unable to recover it.
00:37:40.734 [2024-11-17 02:57:49.065002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.734 [2024-11-17 02:57:49.065041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.734 qpair failed and we were unable to recover it.
00:37:40.734 [2024-11-17 02:57:49.065232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.734 [2024-11-17 02:57:49.065267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.734 qpair failed and we were unable to recover it.
00:37:40.734 [2024-11-17 02:57:49.065421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.734 [2024-11-17 02:57:49.065471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.734 qpair failed and we were unable to recover it.
00:37:40.734 [2024-11-17 02:57:49.065647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.734 [2024-11-17 02:57:49.065686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.734 qpair failed and we were unable to recover it.
00:37:40.734 [2024-11-17 02:57:49.065856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.734 [2024-11-17 02:57:49.065896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.734 qpair failed and we were unable to recover it.
00:37:40.734 [2024-11-17 02:57:49.066055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.734 [2024-11-17 02:57:49.066092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.734 qpair failed and we were unable to recover it.
00:37:40.734 [2024-11-17 02:57:49.066248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.734 [2024-11-17 02:57:49.066284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.734 qpair failed and we were unable to recover it.
00:37:40.734 [2024-11-17 02:57:49.066453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.734 [2024-11-17 02:57:49.066491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.734 qpair failed and we were unable to recover it.
00:37:40.735 [2024-11-17 02:57:49.066655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.735 [2024-11-17 02:57:49.066709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.735 qpair failed and we were unable to recover it.
00:37:40.735 [2024-11-17 02:57:49.066861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.735 [2024-11-17 02:57:49.066914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.735 qpair failed and we were unable to recover it.
00:37:40.735 [2024-11-17 02:57:49.067045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.735 [2024-11-17 02:57:49.067080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.735 qpair failed and we were unable to recover it.
00:37:40.735 [2024-11-17 02:57:49.067278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.735 [2024-11-17 02:57:49.067314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.735 qpair failed and we were unable to recover it.
00:37:40.735 [2024-11-17 02:57:49.067518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.735 [2024-11-17 02:57:49.067580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.735 qpair failed and we were unable to recover it.
00:37:40.735 [2024-11-17 02:57:49.067706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.735 [2024-11-17 02:57:49.067766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.735 qpair failed and we were unable to recover it.
00:37:40.735 [2024-11-17 02:57:49.067940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.735 [2024-11-17 02:57:49.067976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.735 qpair failed and we were unable to recover it.
00:37:40.735 [2024-11-17 02:57:49.068093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.735 [2024-11-17 02:57:49.068140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.735 qpair failed and we were unable to recover it.
00:37:40.735 [2024-11-17 02:57:49.068321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.735 [2024-11-17 02:57:49.068387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.735 qpair failed and we were unable to recover it.
00:37:40.735 [2024-11-17 02:57:49.068562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.735 [2024-11-17 02:57:49.068614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.735 qpair failed and we were unable to recover it.
00:37:40.735 [2024-11-17 02:57:49.068742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.735 [2024-11-17 02:57:49.068785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.735 qpair failed and we were unable to recover it.
00:37:40.735 [2024-11-17 02:57:49.068938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.735 [2024-11-17 02:57:49.068996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.735 qpair failed and we were unable to recover it.
00:37:40.735 [2024-11-17 02:57:49.069143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.735 [2024-11-17 02:57:49.069192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.735 qpair failed and we were unable to recover it.
00:37:40.735 [2024-11-17 02:57:49.069341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.735 [2024-11-17 02:57:49.069395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.735 qpair failed and we were unable to recover it.
00:37:40.735 [2024-11-17 02:57:49.069566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.735 [2024-11-17 02:57:49.069614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.735 qpair failed and we were unable to recover it.
00:37:40.735 [2024-11-17 02:57:49.069810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.735 [2024-11-17 02:57:49.069849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.735 qpair failed and we were unable to recover it.
00:37:40.735 [2024-11-17 02:57:49.070046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.735 [2024-11-17 02:57:49.070090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.735 qpair failed and we were unable to recover it.
00:37:40.735 [2024-11-17 02:57:49.070279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.735 [2024-11-17 02:57:49.070337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.735 qpair failed and we were unable to recover it.
00:37:40.735 [2024-11-17 02:57:49.070450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.735 [2024-11-17 02:57:49.070488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.735 qpair failed and we were unable to recover it.
00:37:40.735 [2024-11-17 02:57:49.070650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.735 [2024-11-17 02:57:49.070705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.735 qpair failed and we were unable to recover it.
00:37:40.735 [2024-11-17 02:57:49.070889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.735 [2024-11-17 02:57:49.070925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.735 qpair failed and we were unable to recover it.
00:37:40.735 [2024-11-17 02:57:49.071116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.735 [2024-11-17 02:57:49.071159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.735 qpair failed and we were unable to recover it.
00:37:40.735 [2024-11-17 02:57:49.071297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.735 [2024-11-17 02:57:49.071352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.735 qpair failed and we were unable to recover it.
00:37:40.735 [2024-11-17 02:57:49.071485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.735 [2024-11-17 02:57:49.071525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.735 qpair failed and we were unable to recover it.
00:37:40.735 [2024-11-17 02:57:49.071701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.735 [2024-11-17 02:57:49.071756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.735 qpair failed and we were unable to recover it.
00:37:40.735 [2024-11-17 02:57:49.071931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.735 [2024-11-17 02:57:49.071981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.735 qpair failed and we were unable to recover it.
00:37:40.735 [2024-11-17 02:57:49.072114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.736 [2024-11-17 02:57:49.072192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.736 qpair failed and we were unable to recover it.
00:37:40.736 [2024-11-17 02:57:49.072347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.736 [2024-11-17 02:57:49.072410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.736 qpair failed and we were unable to recover it.
00:37:40.736 [2024-11-17 02:57:49.072628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.736 [2024-11-17 02:57:49.072687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.736 qpair failed and we were unable to recover it.
00:37:40.736 [2024-11-17 02:57:49.072865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.736 [2024-11-17 02:57:49.072901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.736 qpair failed and we were unable to recover it.
00:37:40.736 [2024-11-17 02:57:49.073017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.736 [2024-11-17 02:57:49.073055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.736 qpair failed and we were unable to recover it.
00:37:40.736 [2024-11-17 02:57:49.073234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.736 [2024-11-17 02:57:49.073271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.736 qpair failed and we were unable to recover it.
00:37:40.736 [2024-11-17 02:57:49.073449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.736 [2024-11-17 02:57:49.073504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.736 qpair failed and we were unable to recover it.
00:37:40.736 [2024-11-17 02:57:49.073690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.736 [2024-11-17 02:57:49.073752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.736 qpair failed and we were unable to recover it.
00:37:40.736 [2024-11-17 02:57:49.073862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.736 [2024-11-17 02:57:49.073903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.736 qpair failed and we were unable to recover it.
00:37:40.736 [2024-11-17 02:57:49.074068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.736 [2024-11-17 02:57:49.074110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.736 qpair failed and we were unable to recover it.
00:37:40.736 [2024-11-17 02:57:49.074238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.736 [2024-11-17 02:57:49.074273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.736 qpair failed and we were unable to recover it.
00:37:40.736 [2024-11-17 02:57:49.074411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.736 [2024-11-17 02:57:49.074447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.736 qpair failed and we were unable to recover it.
00:37:40.736 [2024-11-17 02:57:49.074655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.736 [2024-11-17 02:57:49.074713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.736 qpair failed and we were unable to recover it.
00:37:40.736 [2024-11-17 02:57:49.074911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.736 [2024-11-17 02:57:49.074951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.736 qpair failed and we were unable to recover it.
00:37:40.736 [2024-11-17 02:57:49.075111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.736 [2024-11-17 02:57:49.075165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.736 qpair failed and we were unable to recover it.
00:37:40.736 [2024-11-17 02:57:49.075306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.736 [2024-11-17 02:57:49.075343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.736 qpair failed and we were unable to recover it.
00:37:40.736 [2024-11-17 02:57:49.075480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.736 [2024-11-17 02:57:49.075516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.736 qpair failed and we were unable to recover it.
00:37:40.736 [2024-11-17 02:57:49.075626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.736 [2024-11-17 02:57:49.075681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.736 qpair failed and we were unable to recover it.
00:37:40.736 [2024-11-17 02:57:49.075833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.736 [2024-11-17 02:57:49.075872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.736 qpair failed and we were unable to recover it.
00:37:40.736 [2024-11-17 02:57:49.076021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.736 [2024-11-17 02:57:49.076061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.736 qpair failed and we were unable to recover it.
00:37:40.736 [2024-11-17 02:57:49.076218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.736 [2024-11-17 02:57:49.076268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.736 qpair failed and we were unable to recover it.
00:37:40.736 [2024-11-17 02:57:49.076413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.736 [2024-11-17 02:57:49.076470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.736 qpair failed and we were unable to recover it.
00:37:40.736 [2024-11-17 02:57:49.076662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.736 [2024-11-17 02:57:49.076716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.736 qpair failed and we were unable to recover it.
00:37:40.736 [2024-11-17 02:57:49.076858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.736 [2024-11-17 02:57:49.076915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.736 qpair failed and we were unable to recover it.
00:37:40.736 [2024-11-17 02:57:49.077042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.736 [2024-11-17 02:57:49.077077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.736 qpair failed and we were unable to recover it.
00:37:40.736 [2024-11-17 02:57:49.077285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.736 [2024-11-17 02:57:49.077338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.736 qpair failed and we were unable to recover it.
00:37:40.736 [2024-11-17 02:57:49.077480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.736 [2024-11-17 02:57:49.077536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.736 qpair failed and we were unable to recover it.
00:37:40.736 [2024-11-17 02:57:49.077730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.736 [2024-11-17 02:57:49.077787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.736 qpair failed and we were unable to recover it.
00:37:40.736 [2024-11-17 02:57:49.077905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.736 [2024-11-17 02:57:49.077939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.736 qpair failed and we were unable to recover it.
00:37:40.736 [2024-11-17 02:57:49.078106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.736 [2024-11-17 02:57:49.078151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.736 qpair failed and we were unable to recover it.
00:37:40.736 [2024-11-17 02:57:49.078285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.736 [2024-11-17 02:57:49.078336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.736 qpair failed and we were unable to recover it.
00:37:40.736 [2024-11-17 02:57:49.078489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.736 [2024-11-17 02:57:49.078529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.736 qpair failed and we were unable to recover it.
00:37:40.736 [2024-11-17 02:57:49.078701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.736 [2024-11-17 02:57:49.078764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.736 qpair failed and we were unable to recover it.
00:37:40.736 [2024-11-17 02:57:49.078974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.736 [2024-11-17 02:57:49.079035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.736 qpair failed and we were unable to recover it.
00:37:40.736 [2024-11-17 02:57:49.079204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.736 [2024-11-17 02:57:49.079240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.736 qpair failed and we were unable to recover it.
00:37:40.736 [2024-11-17 02:57:49.079386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.736 [2024-11-17 02:57:49.079445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.736 qpair failed and we were unable to recover it.
00:37:40.736 [2024-11-17 02:57:49.079589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.736 [2024-11-17 02:57:49.079643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.736 qpair failed and we were unable to recover it.
00:37:40.736 [2024-11-17 02:57:49.079813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.737 [2024-11-17 02:57:49.079881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.737 qpair failed and we were unable to recover it.
00:37:40.737 [2024-11-17 02:57:49.080108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.737 [2024-11-17 02:57:49.080181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.737 qpair failed and we were unable to recover it.
00:37:40.737 [2024-11-17 02:57:49.080292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.737 [2024-11-17 02:57:49.080330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.737 qpair failed and we were unable to recover it.
00:37:40.737 [2024-11-17 02:57:49.080502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.737 [2024-11-17 02:57:49.080538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.737 qpair failed and we were unable to recover it.
00:37:40.737 [2024-11-17 02:57:49.080641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.737 [2024-11-17 02:57:49.080694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.737 qpair failed and we were unable to recover it.
00:37:40.737 [2024-11-17 02:57:49.080829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.737 [2024-11-17 02:57:49.080869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.737 qpair failed and we were unable to recover it.
00:37:40.737 [2024-11-17 02:57:49.081026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.737 [2024-11-17 02:57:49.081066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.737 qpair failed and we were unable to recover it.
00:37:40.737 [2024-11-17 02:57:49.081262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.737 [2024-11-17 02:57:49.081312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.737 qpair failed and we were unable to recover it.
00:37:40.737 [2024-11-17 02:57:49.081494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.737 [2024-11-17 02:57:49.081535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.737 qpair failed and we were unable to recover it.
00:37:40.737 [2024-11-17 02:57:49.081722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.737 [2024-11-17 02:57:49.081784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.737 qpair failed and we were unable to recover it.
00:37:40.737 [2024-11-17 02:57:49.081972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.737 [2024-11-17 02:57:49.082011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.737 qpair failed and we were unable to recover it.
00:37:40.737 [2024-11-17 02:57:49.082221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.737 [2024-11-17 02:57:49.082271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.737 qpair failed and we were unable to recover it.
00:37:40.737 [2024-11-17 02:57:49.082386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.737 [2024-11-17 02:57:49.082435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.737 qpair failed and we were unable to recover it.
00:37:40.737 [2024-11-17 02:57:49.082622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.737 [2024-11-17 02:57:49.082687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.737 qpair failed and we were unable to recover it.
00:37:40.737 [2024-11-17 02:57:49.082958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.737 [2024-11-17 02:57:49.083017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.737 qpair failed and we were unable to recover it.
00:37:40.737 [2024-11-17 02:57:49.083190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.737 [2024-11-17 02:57:49.083226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.737 qpair failed and we were unable to recover it.
00:37:40.737 [2024-11-17 02:57:49.083366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.737 [2024-11-17 02:57:49.083402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.737 qpair failed and we were unable to recover it.
00:37:40.737 [2024-11-17 02:57:49.083540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.737 [2024-11-17 02:57:49.083576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.737 qpair failed and we were unable to recover it.
00:37:40.737 [2024-11-17 02:57:49.083800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.737 [2024-11-17 02:57:49.083859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.737 qpair failed and we were unable to recover it.
00:37:40.737 [2024-11-17 02:57:49.084001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.737 [2024-11-17 02:57:49.084038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.737 qpair failed and we were unable to recover it.
00:37:40.737 [2024-11-17 02:57:49.084208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.737 [2024-11-17 02:57:49.084258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.737 qpair failed and we were unable to recover it.
00:37:40.737 [2024-11-17 02:57:49.084408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.737 [2024-11-17 02:57:49.084444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.737 qpair failed and we were unable to recover it.
00:37:40.737 [2024-11-17 02:57:49.084582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.737 [2024-11-17 02:57:49.084617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.737 qpair failed and we were unable to recover it.
00:37:40.737 [2024-11-17 02:57:49.084777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.737 [2024-11-17 02:57:49.084816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.737 qpair failed and we were unable to recover it.
00:37:40.737 [2024-11-17 02:57:49.084940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.737 [2024-11-17 02:57:49.084979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.737 qpair failed and we were unable to recover it.
00:37:40.737 [2024-11-17 02:57:49.085160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.737 [2024-11-17 02:57:49.085210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.737 qpair failed and we were unable to recover it.
00:37:40.737 [2024-11-17 02:57:49.085383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.737 [2024-11-17 02:57:49.085421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.737 qpair failed and we were unable to recover it.
00:37:40.737 [2024-11-17 02:57:49.085575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.737 [2024-11-17 02:57:49.085629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.737 qpair failed and we were unable to recover it.
00:37:40.737 [2024-11-17 02:57:49.085821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.737 [2024-11-17 02:57:49.085874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.737 qpair failed and we were unable to recover it.
00:37:40.737 [2024-11-17 02:57:49.086020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.737 [2024-11-17 02:57:49.086056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.737 qpair failed and we were unable to recover it.
00:37:40.737 [2024-11-17 02:57:49.086225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.737 [2024-11-17 02:57:49.086275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.737 qpair failed and we were unable to recover it.
00:37:40.737 [2024-11-17 02:57:49.086434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.737 [2024-11-17 02:57:49.086475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.737 qpair failed and we were unable to recover it.
00:37:40.737 [2024-11-17 02:57:49.086715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.737 [2024-11-17 02:57:49.086789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.737 qpair failed and we were unable to recover it.
00:37:40.737 [2024-11-17 02:57:49.086994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.737 [2024-11-17 02:57:49.087040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.737 qpair failed and we were unable to recover it.
00:37:40.737 [2024-11-17 02:57:49.087178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.737 [2024-11-17 02:57:49.087216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.737 qpair failed and we were unable to recover it.
00:37:40.737 [2024-11-17 02:57:49.087384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.737 [2024-11-17 02:57:49.087420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.737 qpair failed and we were unable to recover it.
00:37:40.737 [2024-11-17 02:57:49.087657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.737 [2024-11-17 02:57:49.087697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.738 qpair failed and we were unable to recover it.
00:37:40.738 [2024-11-17 02:57:49.087840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.738 [2024-11-17 02:57:49.087904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.738 qpair failed and we were unable to recover it.
00:37:40.738 [2024-11-17 02:57:49.088031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.738 [2024-11-17 02:57:49.088084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.738 qpair failed and we were unable to recover it.
00:37:40.738 [2024-11-17 02:57:49.088256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.738 [2024-11-17 02:57:49.088293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.738 qpair failed and we were unable to recover it.
00:37:40.738 [2024-11-17 02:57:49.088484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.738 [2024-11-17 02:57:49.088553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.738 qpair failed and we were unable to recover it.
00:37:40.738 [2024-11-17 02:57:49.088787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.738 [2024-11-17 02:57:49.088848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.738 qpair failed and we were unable to recover it.
00:37:40.738 [2024-11-17 02:57:49.089035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.738 [2024-11-17 02:57:49.089071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.738 qpair failed and we were unable to recover it.
00:37:40.738 [2024-11-17 02:57:49.089222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.738 [2024-11-17 02:57:49.089257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.738 qpair failed and we were unable to recover it.
00:37:40.738 [2024-11-17 02:57:49.089414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.738 [2024-11-17 02:57:49.089453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.738 qpair failed and we were unable to recover it.
00:37:40.738 [2024-11-17 02:57:49.089660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.738 [2024-11-17 02:57:49.089758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.738 qpair failed and we were unable to recover it.
00:37:40.738 [2024-11-17 02:57:49.089875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.738 [2024-11-17 02:57:49.089922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.738 qpair failed and we were unable to recover it.
00:37:40.738 [2024-11-17 02:57:49.090135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.738 [2024-11-17 02:57:49.090186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.738 qpair failed and we were unable to recover it.
00:37:40.738 [2024-11-17 02:57:49.090317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.738 [2024-11-17 02:57:49.090366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.738 qpair failed and we were unable to recover it.
00:37:40.738 [2024-11-17 02:57:49.090555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.738 [2024-11-17 02:57:49.090611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.738 qpair failed and we were unable to recover it.
00:37:40.738 [2024-11-17 02:57:49.090852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.738 [2024-11-17 02:57:49.090910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.738 qpair failed and we were unable to recover it.
00:37:40.738 [2024-11-17 02:57:49.091069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.738 [2024-11-17 02:57:49.091133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.738 qpair failed and we were unable to recover it.
00:37:40.738 [2024-11-17 02:57:49.091267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.738 [2024-11-17 02:57:49.091304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.738 qpair failed and we were unable to recover it.
00:37:40.738 [2024-11-17 02:57:49.091419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.738 [2024-11-17 02:57:49.091471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.738 qpair failed and we were unable to recover it.
00:37:40.738 [2024-11-17 02:57:49.091656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.738 [2024-11-17 02:57:49.091694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.738 qpair failed and we were unable to recover it.
00:37:40.738 [2024-11-17 02:57:49.091877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.738 [2024-11-17 02:57:49.091946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.738 qpair failed and we were unable to recover it.
00:37:40.738 [2024-11-17 02:57:49.092093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.738 [2024-11-17 02:57:49.092156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.738 qpair failed and we were unable to recover it.
00:37:40.738 [2024-11-17 02:57:49.092295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.738 [2024-11-17 02:57:49.092330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.738 qpair failed and we were unable to recover it.
00:37:40.738 [2024-11-17 02:57:49.092511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.738 [2024-11-17 02:57:49.092578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.738 qpair failed and we were unable to recover it.
00:37:40.738 [2024-11-17 02:57:49.092715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.738 [2024-11-17 02:57:49.092758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.738 qpair failed and we were unable to recover it.
00:37:40.738 [2024-11-17 02:57:49.092947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.738 [2024-11-17 02:57:49.092984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.738 qpair failed and we were unable to recover it.
00:37:40.738 [2024-11-17 02:57:49.093127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.738 [2024-11-17 02:57:49.093163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.738 qpair failed and we were unable to recover it.
00:37:40.738 [2024-11-17 02:57:49.093343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.738 [2024-11-17 02:57:49.093399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.738 qpair failed and we were unable to recover it.
00:37:40.738 [2024-11-17 02:57:49.093550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.738 [2024-11-17 02:57:49.093603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.738 qpair failed and we were unable to recover it.
00:37:40.738 [2024-11-17 02:57:49.093798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.738 [2024-11-17 02:57:49.093838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.738 qpair failed and we were unable to recover it.
00:37:40.738 [2024-11-17 02:57:49.093990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.738 [2024-11-17 02:57:49.094036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.738 qpair failed and we were unable to recover it.
00:37:40.738 [2024-11-17 02:57:49.094180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.738 [2024-11-17 02:57:49.094215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.738 qpair failed and we were unable to recover it.
00:37:40.738 [2024-11-17 02:57:49.094352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.738 [2024-11-17 02:57:49.094388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.738 qpair failed and we were unable to recover it.
00:37:40.738 [2024-11-17 02:57:49.094537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.738 [2024-11-17 02:57:49.094573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.738 qpair failed and we were unable to recover it.
00:37:40.738 [2024-11-17 02:57:49.094708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.738 [2024-11-17 02:57:49.094742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.738 qpair failed and we were unable to recover it.
00:37:40.738 [2024-11-17 02:57:49.094906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.738 [2024-11-17 02:57:49.094940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.738 qpair failed and we were unable to recover it.
00:37:40.738 [2024-11-17 02:57:49.095109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.738 [2024-11-17 02:57:49.095160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.738 qpair failed and we were unable to recover it.
00:37:40.738 [2024-11-17 02:57:49.095292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.739 [2024-11-17 02:57:49.095342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.739 qpair failed and we were unable to recover it.
00:37:40.739 [2024-11-17 02:57:49.095515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.739 [2024-11-17 02:57:49.095559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.739 qpair failed and we were unable to recover it.
00:37:40.739 [2024-11-17 02:57:49.095720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.739 [2024-11-17 02:57:49.095756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.739 qpair failed and we were unable to recover it.
00:37:40.739 [2024-11-17 02:57:49.095887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.739 [2024-11-17 02:57:49.095923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.739 qpair failed and we were unable to recover it.
00:37:40.739 [2024-11-17 02:57:49.096059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.739 [2024-11-17 02:57:49.096101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.739 qpair failed and we were unable to recover it.
00:37:40.739 [2024-11-17 02:57:49.096238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.739 [2024-11-17 02:57:49.096275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.739 qpair failed and we were unable to recover it.
00:37:40.739 [2024-11-17 02:57:49.096431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.739 [2024-11-17 02:57:49.096485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.739 qpair failed and we were unable to recover it.
00:37:40.739 [2024-11-17 02:57:49.096617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.739 [2024-11-17 02:57:49.096669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.739 qpair failed and we were unable to recover it.
00:37:40.739 [2024-11-17 02:57:49.096860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.739 [2024-11-17 02:57:49.096894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.739 qpair failed and we were unable to recover it.
00:37:40.739 [2024-11-17 02:57:49.097023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.739 [2024-11-17 02:57:49.097058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.739 qpair failed and we were unable to recover it.
00:37:40.739 [2024-11-17 02:57:49.097223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.739 [2024-11-17 02:57:49.097273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.739 qpair failed and we were unable to recover it.
00:37:40.739 [2024-11-17 02:57:49.097456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.739 [2024-11-17 02:57:49.097494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.739 qpair failed and we were unable to recover it.
00:37:40.739 [2024-11-17 02:57:49.097624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.739 [2024-11-17 02:57:49.097671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.739 qpair failed and we were unable to recover it.
00:37:40.739 [2024-11-17 02:57:49.097788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.739 [2024-11-17 02:57:49.097824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.739 qpair failed and we were unable to recover it.
00:37:40.739 [2024-11-17 02:57:49.097961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.739 [2024-11-17 02:57:49.098003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.739 qpair failed and we were unable to recover it.
00:37:40.739 [2024-11-17 02:57:49.098166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.739 [2024-11-17 02:57:49.098217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.739 qpair failed and we were unable to recover it.
00:37:40.739 [2024-11-17 02:57:49.098418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.739 [2024-11-17 02:57:49.098473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.739 qpair failed and we were unable to recover it.
00:37:40.739 [2024-11-17 02:57:49.098602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.739 [2024-11-17 02:57:49.098669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.739 qpair failed and we were unable to recover it.
00:37:40.739 [2024-11-17 02:57:49.098769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.739 [2024-11-17 02:57:49.098804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.739 qpair failed and we were unable to recover it.
00:37:40.739 [2024-11-17 02:57:49.098905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.739 [2024-11-17 02:57:49.098939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.739 qpair failed and we were unable to recover it.
00:37:40.739 [2024-11-17 02:57:49.099111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.739 [2024-11-17 02:57:49.099161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.739 qpair failed and we were unable to recover it.
00:37:40.739 [2024-11-17 02:57:49.099308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.739 [2024-11-17 02:57:49.099346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.739 qpair failed and we were unable to recover it.
00:37:40.739 [2024-11-17 02:57:49.099507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.739 [2024-11-17 02:57:49.099545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.739 qpair failed and we were unable to recover it.
00:37:40.739 [2024-11-17 02:57:49.099793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.739 [2024-11-17 02:57:49.099855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.739 qpair failed and we were unable to recover it.
00:37:40.739 [2024-11-17 02:57:49.100011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.739 [2024-11-17 02:57:49.100050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.739 qpair failed and we were unable to recover it.
00:37:40.739 [2024-11-17 02:57:49.100225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.739 [2024-11-17 02:57:49.100261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.739 qpair failed and we were unable to recover it.
00:37:40.739 [2024-11-17 02:57:49.100433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.739 [2024-11-17 02:57:49.100500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.739 qpair failed and we were unable to recover it.
00:37:40.739 [2024-11-17 02:57:49.100659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.739 [2024-11-17 02:57:49.100716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.739 qpair failed and we were unable to recover it.
00:37:40.739 [2024-11-17 02:57:49.100862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.739 [2024-11-17 02:57:49.100916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.739 qpair failed and we were unable to recover it.
00:37:40.739 [2024-11-17 02:57:49.101058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.739 [2024-11-17 02:57:49.101093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.739 qpair failed and we were unable to recover it.
00:37:40.739 [2024-11-17 02:57:49.101259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.739 [2024-11-17 02:57:49.101309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.739 qpair failed and we were unable to recover it.
00:37:40.739 [2024-11-17 02:57:49.101503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.739 [2024-11-17 02:57:49.101545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.739 qpair failed and we were unable to recover it.
00:37:40.739 [2024-11-17 02:57:49.101721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.739 [2024-11-17 02:57:49.101762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.739 qpair failed and we were unable to recover it.
00:37:40.739 [2024-11-17 02:57:49.101925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.739 [2024-11-17 02:57:49.101990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.739 qpair failed and we were unable to recover it.
00:37:40.739 [2024-11-17 02:57:49.102192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.739 [2024-11-17 02:57:49.102229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.739 qpair failed and we were unable to recover it.
00:37:40.739 [2024-11-17 02:57:49.102360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.739 [2024-11-17 02:57:49.102399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.739 qpair failed and we were unable to recover it.
00:37:40.739 [2024-11-17 02:57:49.102584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.739 [2024-11-17 02:57:49.102624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.739 qpair failed and we were unable to recover it.
00:37:40.740 [2024-11-17 02:57:49.102774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.740 [2024-11-17 02:57:49.102813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.740 qpair failed and we were unable to recover it.
00:37:40.740 [2024-11-17 02:57:49.103013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.740 [2024-11-17 02:57:49.103063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.740 qpair failed and we were unable to recover it.
00:37:40.740 [2024-11-17 02:57:49.103182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.740 [2024-11-17 02:57:49.103218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.740 qpair failed and we were unable to recover it.
00:37:40.740 [2024-11-17 02:57:49.103371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.740 [2024-11-17 02:57:49.103421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.740 qpair failed and we were unable to recover it.
00:37:40.740 [2024-11-17 02:57:49.103546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.740 [2024-11-17 02:57:49.103591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.740 qpair failed and we were unable to recover it.
00:37:40.740 [2024-11-17 02:57:49.103798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.740 [2024-11-17 02:57:49.103864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.740 qpair failed and we were unable to recover it.
00:37:40.740 [2024-11-17 02:57:49.104027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.740 [2024-11-17 02:57:49.104063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.740 qpair failed and we were unable to recover it.
00:37:40.740 [2024-11-17 02:57:49.104221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.740 [2024-11-17 02:57:49.104258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.740 qpair failed and we were unable to recover it.
00:37:40.740 [2024-11-17 02:57:49.104428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.740 [2024-11-17 02:57:49.104463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.740 qpair failed and we were unable to recover it.
00:37:40.740 [2024-11-17 02:57:49.104651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.740 [2024-11-17 02:57:49.104690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.740 qpair failed and we were unable to recover it.
00:37:40.740 [2024-11-17 02:57:49.104903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.740 [2024-11-17 02:57:49.104955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.740 qpair failed and we were unable to recover it.
00:37:40.740 [2024-11-17 02:57:49.105105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.740 [2024-11-17 02:57:49.105140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.740 qpair failed and we were unable to recover it.
00:37:40.740 [2024-11-17 02:57:49.105277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.740 [2024-11-17 02:57:49.105323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.740 qpair failed and we were unable to recover it.
00:37:40.740 [2024-11-17 02:57:49.105518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.740 [2024-11-17 02:57:49.105582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.740 qpair failed and we were unable to recover it.
00:37:40.740 [2024-11-17 02:57:49.105791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.740 [2024-11-17 02:57:49.105853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.740 qpair failed and we were unable to recover it.
00:37:40.740 [2024-11-17 02:57:49.106012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.740 [2024-11-17 02:57:49.106060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.740 qpair failed and we were unable to recover it.
00:37:40.740 [2024-11-17 02:57:49.106238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.740 [2024-11-17 02:57:49.106280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.740 qpair failed and we were unable to recover it.
00:37:40.740 [2024-11-17 02:57:49.106423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.740 [2024-11-17 02:57:49.106458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.740 qpair failed and we were unable to recover it.
00:37:40.740 [2024-11-17 02:57:49.106615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.740 [2024-11-17 02:57:49.106650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.740 qpair failed and we were unable to recover it.
00:37:40.740 [2024-11-17 02:57:49.106854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.740 [2024-11-17 02:57:49.106908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.740 qpair failed and we were unable to recover it.
00:37:40.740 [2024-11-17 02:57:49.107065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.740 [2024-11-17 02:57:49.107107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.740 qpair failed and we were unable to recover it.
00:37:40.740 [2024-11-17 02:57:49.107295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.740 [2024-11-17 02:57:49.107345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.740 qpair failed and we were unable to recover it.
00:37:40.740 [2024-11-17 02:57:49.107511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.740 [2024-11-17 02:57:49.107553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.740 qpair failed and we were unable to recover it.
00:37:40.740 [2024-11-17 02:57:49.107683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.740 [2024-11-17 02:57:49.107736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.740 qpair failed and we were unable to recover it.
00:37:40.740 [2024-11-17 02:57:49.107845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.740 [2024-11-17 02:57:49.107884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.740 qpair failed and we were unable to recover it.
00:37:40.740 [2024-11-17 02:57:49.108058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.740 [2024-11-17 02:57:49.108110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.740 qpair failed and we were unable to recover it.
00:37:40.740 [2024-11-17 02:57:49.108266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.740 [2024-11-17 02:57:49.108301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.740 qpair failed and we were unable to recover it.
00:37:40.740 [2024-11-17 02:57:49.108410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.740 [2024-11-17 02:57:49.108465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.740 qpair failed and we were unable to recover it.
00:37:40.740 [2024-11-17 02:57:49.108605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.740 [2024-11-17 02:57:49.108643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.740 qpair failed and we were unable to recover it.
00:37:40.740 [2024-11-17 02:57:49.108845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.740 [2024-11-17 02:57:49.108912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.740 qpair failed and we were unable to recover it.
00:37:40.740 [2024-11-17 02:57:49.109030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.740 [2024-11-17 02:57:49.109067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.740 qpair failed and we were unable to recover it.
00:37:40.740 [2024-11-17 02:57:49.109199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.740 [2024-11-17 02:57:49.109269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.740 qpair failed and we were unable to recover it.
00:37:40.740 [2024-11-17 02:57:49.109416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.740 [2024-11-17 02:57:49.109460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.740 qpair failed and we were unable to recover it.
00:37:40.740 [2024-11-17 02:57:49.109664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.740 [2024-11-17 02:57:49.109731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.740 qpair failed and we were unable to recover it.
00:37:40.740 [2024-11-17 02:57:49.109846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.741 [2024-11-17 02:57:49.109886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.741 qpair failed and we were unable to recover it.
00:37:40.741 [2024-11-17 02:57:49.110065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.741 [2024-11-17 02:57:49.110113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.741 qpair failed and we were unable to recover it.
00:37:40.741 [2024-11-17 02:57:49.110278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.741 [2024-11-17 02:57:49.110323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.741 qpair failed and we were unable to recover it.
00:37:40.741 [2024-11-17 02:57:49.110510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.741 [2024-11-17 02:57:49.110568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.741 qpair failed and we were unable to recover it.
00:37:40.741 [2024-11-17 02:57:49.110701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.741 [2024-11-17 02:57:49.110739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.741 qpair failed and we were unable to recover it.
00:37:40.741 [2024-11-17 02:57:49.110877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.741 [2024-11-17 02:57:49.110916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.741 qpair failed and we were unable to recover it.
00:37:40.741 [2024-11-17 02:57:49.111070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.741 [2024-11-17 02:57:49.111115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.741 qpair failed and we were unable to recover it.
00:37:40.741 [2024-11-17 02:57:49.111246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.741 [2024-11-17 02:57:49.111281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.741 qpair failed and we were unable to recover it.
00:37:40.741 [2024-11-17 02:57:49.111442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.741 [2024-11-17 02:57:49.111482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.741 qpair failed and we were unable to recover it.
00:37:40.741 [2024-11-17 02:57:49.111683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.741 [2024-11-17 02:57:49.111721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.741 qpair failed and we were unable to recover it.
00:37:40.741 [2024-11-17 02:57:49.111873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.741 [2024-11-17 02:57:49.111917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.741 qpair failed and we were unable to recover it.
00:37:40.741 [2024-11-17 02:57:49.112092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.741 [2024-11-17 02:57:49.112141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.741 qpair failed and we were unable to recover it.
00:37:40.741 [2024-11-17 02:57:49.112313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.741 [2024-11-17 02:57:49.112363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.741 qpair failed and we were unable to recover it.
00:37:40.741 [2024-11-17 02:57:49.112535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.741 [2024-11-17 02:57:49.112596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.741 qpair failed and we were unable to recover it.
00:37:40.741 [2024-11-17 02:57:49.112731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.741 [2024-11-17 02:57:49.112784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.741 qpair failed and we were unable to recover it.
00:37:40.741 [2024-11-17 02:57:49.112914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.741 [2024-11-17 02:57:49.112953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.741 qpair failed and we were unable to recover it.
00:37:40.741 [2024-11-17 02:57:49.113093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.741 [2024-11-17 02:57:49.113155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.741 qpair failed and we were unable to recover it.
00:37:40.741 [2024-11-17 02:57:49.113291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.741 [2024-11-17 02:57:49.113327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.741 qpair failed and we were unable to recover it.
00:37:40.741 [2024-11-17 02:57:49.113498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.741 [2024-11-17 02:57:49.113548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.741 qpair failed and we were unable to recover it.
00:37:40.741 [2024-11-17 02:57:49.113692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.741 [2024-11-17 02:57:49.113730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.741 qpair failed and we were unable to recover it.
00:37:40.741 [2024-11-17 02:57:49.113878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.741 [2024-11-17 02:57:49.113917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.741 qpair failed and we were unable to recover it.
00:37:40.741 [2024-11-17 02:57:49.114061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.741 [2024-11-17 02:57:49.114110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.741 qpair failed and we were unable to recover it.
00:37:40.741 [2024-11-17 02:57:49.114257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.741 [2024-11-17 02:57:49.114308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.741 qpair failed and we were unable to recover it.
00:37:40.741 [2024-11-17 02:57:49.114540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.741 [2024-11-17 02:57:49.114597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.741 qpair failed and we were unable to recover it.
00:37:40.741 [2024-11-17 02:57:49.114771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.741 [2024-11-17 02:57:49.114827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.741 qpair failed and we were unable to recover it.
00:37:40.741 [2024-11-17 02:57:49.114968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.741 [2024-11-17 02:57:49.115005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.741 qpair failed and we were unable to recover it.
00:37:40.741 [2024-11-17 02:57:49.115186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.741 [2024-11-17 02:57:49.115242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.741 qpair failed and we were unable to recover it.
00:37:40.741 [2024-11-17 02:57:49.115396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.741 [2024-11-17 02:57:49.115430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.741 qpair failed and we were unable to recover it.
00:37:40.741 [2024-11-17 02:57:49.115592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.741 [2024-11-17 02:57:49.115626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.741 qpair failed and we were unable to recover it.
00:37:40.741 [2024-11-17 02:57:49.115760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.741 [2024-11-17 02:57:49.115797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.741 qpair failed and we were unable to recover it.
00:37:40.741 [2024-11-17 02:57:49.115962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.741 [2024-11-17 02:57:49.115998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.741 qpair failed and we were unable to recover it.
00:37:40.741 [2024-11-17 02:57:49.116157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.741 [2024-11-17 02:57:49.116207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.741 qpair failed and we were unable to recover it.
00:37:40.741 [2024-11-17 02:57:49.116371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.741 [2024-11-17 02:57:49.116421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.741 qpair failed and we were unable to recover it.
00:37:40.741 [2024-11-17 02:57:49.116561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.741 [2024-11-17 02:57:49.116600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.741 qpair failed and we were unable to recover it.
00:37:40.741 [2024-11-17 02:57:49.116715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.741 [2024-11-17 02:57:49.116751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.741 qpair failed and we were unable to recover it.
00:37:40.741 [2024-11-17 02:57:49.116893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.741 [2024-11-17 02:57:49.116929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.741 qpair failed and we were unable to recover it.
00:37:40.741 [2024-11-17 02:57:49.117036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.741 [2024-11-17 02:57:49.117072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.741 qpair failed and we were unable to recover it.
00:37:40.741 [2024-11-17 02:57:49.117248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.741 [2024-11-17 02:57:49.117303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.741 qpair failed and we were unable to recover it.
00:37:40.741 [2024-11-17 02:57:49.117466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.741 [2024-11-17 02:57:49.117501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.741 qpair failed and we were unable to recover it.
00:37:40.741 [2024-11-17 02:57:49.117658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.741 [2024-11-17 02:57:49.117710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.742 qpair failed and we were unable to recover it.
00:37:40.742 [2024-11-17 02:57:49.117850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.742 [2024-11-17 02:57:49.117885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.742 qpair failed and we were unable to recover it.
00:37:40.742 [2024-11-17 02:57:49.118030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.742 [2024-11-17 02:57:49.118066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.742 qpair failed and we were unable to recover it.
00:37:40.742 [2024-11-17 02:57:49.118242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.742 [2024-11-17 02:57:49.118297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.742 qpair failed and we were unable to recover it.
00:37:40.742 [2024-11-17 02:57:49.118453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.742 [2024-11-17 02:57:49.118494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.742 qpair failed and we were unable to recover it.
00:37:40.742 [2024-11-17 02:57:49.118708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.742 [2024-11-17 02:57:49.118776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.742 qpair failed and we were unable to recover it.
00:37:40.742 [2024-11-17 02:57:49.118924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.742 [2024-11-17 02:57:49.118964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.742 qpair failed and we were unable to recover it.
00:37:40.742 [2024-11-17 02:57:49.119113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.742 [2024-11-17 02:57:49.119169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.742 qpair failed and we were unable to recover it.
00:37:40.742 [2024-11-17 02:57:49.119275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.742 [2024-11-17 02:57:49.119329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.742 qpair failed and we were unable to recover it.
00:37:40.742 [2024-11-17 02:57:49.119542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.742 [2024-11-17 02:57:49.119599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.742 qpair failed and we were unable to recover it.
00:37:40.742 [2024-11-17 02:57:49.119815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.742 [2024-11-17 02:57:49.119884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.742 qpair failed and we were unable to recover it.
00:37:40.742 [2024-11-17 02:57:49.120028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.742 [2024-11-17 02:57:49.120074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.742 qpair failed and we were unable to recover it.
00:37:40.742 [2024-11-17 02:57:49.120247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.742 [2024-11-17 02:57:49.120285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.742 qpair failed and we were unable to recover it.
00:37:40.742 [2024-11-17 02:57:49.120394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.742 [2024-11-17 02:57:49.120427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.742 qpair failed and we were unable to recover it.
00:37:40.742 [2024-11-17 02:57:49.120586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.742 [2024-11-17 02:57:49.120640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.742 qpair failed and we were unable to recover it.
00:37:40.742 [2024-11-17 02:57:49.120816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.742 [2024-11-17 02:57:49.120878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.742 qpair failed and we were unable to recover it.
00:37:40.742 [2024-11-17 02:57:49.121029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.742 [2024-11-17 02:57:49.121070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.742 qpair failed and we were unable to recover it.
00:37:40.742 [2024-11-17 02:57:49.121233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.742 [2024-11-17 02:57:49.121288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.742 qpair failed and we were unable to recover it.
00:37:40.742 [2024-11-17 02:57:49.121434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.742 [2024-11-17 02:57:49.121481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.742 qpair failed and we were unable to recover it.
00:37:40.742 [2024-11-17 02:57:49.121624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.742 [2024-11-17 02:57:49.121664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.742 qpair failed and we were unable to recover it.
00:37:40.742 [2024-11-17 02:57:49.121815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.742 [2024-11-17 02:57:49.121855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.742 qpair failed and we were unable to recover it.
00:37:40.742 [2024-11-17 02:57:49.122047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.742 [2024-11-17 02:57:49.122084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.742 qpair failed and we were unable to recover it.
00:37:40.742 [2024-11-17 02:57:49.122255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.742 [2024-11-17 02:57:49.122291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.742 qpair failed and we were unable to recover it.
00:37:40.742 [2024-11-17 02:57:49.122498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.742 [2024-11-17 02:57:49.122553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.742 qpair failed and we were unable to recover it.
00:37:40.742 [2024-11-17 02:57:49.122776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.742 [2024-11-17 02:57:49.122835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.742 qpair failed and we were unable to recover it.
00:37:40.742 [2024-11-17 02:57:49.123003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.742 [2024-11-17 02:57:49.123039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.742 qpair failed and we were unable to recover it.
00:37:40.742 [2024-11-17 02:57:49.123180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.742 [2024-11-17 02:57:49.123216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.742 qpair failed and we were unable to recover it.
00:37:40.742 [2024-11-17 02:57:49.123329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.742 [2024-11-17 02:57:49.123364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.742 qpair failed and we were unable to recover it.
00:37:40.742 [2024-11-17 02:57:49.123468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.742 [2024-11-17 02:57:49.123504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.742 qpair failed and we were unable to recover it.
00:37:40.742 [2024-11-17 02:57:49.123661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.742 [2024-11-17 02:57:49.123701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.742 qpair failed and we were unable to recover it.
00:37:40.742 [2024-11-17 02:57:49.123860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.742 [2024-11-17 02:57:49.123900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.742 qpair failed and we were unable to recover it.
00:37:40.742 [2024-11-17 02:57:49.124066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.742 [2024-11-17 02:57:49.124109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.742 qpair failed and we were unable to recover it.
00:37:40.742 [2024-11-17 02:57:49.124251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.742 [2024-11-17 02:57:49.124289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.742 qpair failed and we were unable to recover it.
00:37:40.742 [2024-11-17 02:57:49.124441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.742 [2024-11-17 02:57:49.124496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.742 qpair failed and we were unable to recover it.
00:37:40.742 [2024-11-17 02:57:49.124679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.742 [2024-11-17 02:57:49.124721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.742 qpair failed and we were unable to recover it.
00:37:40.742 [2024-11-17 02:57:49.124893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.742 [2024-11-17 02:57:49.124933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.742 qpair failed and we were unable to recover it.
00:37:40.742 [2024-11-17 02:57:49.125088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.743 [2024-11-17 02:57:49.125132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.743 qpair failed and we were unable to recover it.
00:37:40.743 [2024-11-17 02:57:49.125286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.743 [2024-11-17 02:57:49.125336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.743 qpair failed and we were unable to recover it.
00:37:40.743 [2024-11-17 02:57:49.125552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.743 [2024-11-17 02:57:49.125594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.743 qpair failed and we were unable to recover it.
00:37:40.743 [2024-11-17 02:57:49.125821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.743 [2024-11-17 02:57:49.125885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.743 qpair failed and we were unable to recover it.
00:37:40.743 [2024-11-17 02:57:49.126061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.743 [2024-11-17 02:57:49.126108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:40.743 qpair failed and we were unable to recover it.
00:37:40.743 [2024-11-17 02:57:49.126246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.743 [2024-11-17 02:57:49.126297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:40.743 qpair failed and we were unable to recover it.
00:37:40.743 [2024-11-17 02:57:49.126502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.743 [2024-11-17 02:57:49.126571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.743 qpair failed and we were unable to recover it.
00:37:40.743 [2024-11-17 02:57:49.126759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.743 [2024-11-17 02:57:49.126827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.743 qpair failed and we were unable to recover it.
00:37:40.743 [2024-11-17 02:57:49.126971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.743 [2024-11-17 02:57:49.127008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.743 qpair failed and we were unable to recover it.
00:37:40.743 [2024-11-17 02:57:49.127125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.743 [2024-11-17 02:57:49.127160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:40.743 qpair failed and we were unable to recover it.
00:37:40.743 [2024-11-17 02:57:49.127305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.743 [2024-11-17 02:57:49.127371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.743 qpair failed and we were unable to recover it.
00:37:40.743 [2024-11-17 02:57:49.127551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.743 [2024-11-17 02:57:49.127589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.743 qpair failed and we were unable to recover it.
00:37:40.743 [2024-11-17 02:57:49.127722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.743 [2024-11-17 02:57:49.127757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.743 qpair failed and we were unable to recover it.
00:37:40.743 [2024-11-17 02:57:49.127887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.743 [2024-11-17 02:57:49.127972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:40.743 qpair failed and we were unable to recover it.
00:37:40.743 [2024-11-17 02:57:49.128122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.743 [2024-11-17 02:57:49.128174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.743 qpair failed and we were unable to recover it. 00:37:40.743 [2024-11-17 02:57:49.128321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.743 [2024-11-17 02:57:49.128378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.743 qpair failed and we were unable to recover it. 00:37:40.743 [2024-11-17 02:57:49.128574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.743 [2024-11-17 02:57:49.128632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.743 qpair failed and we were unable to recover it. 00:37:40.743 [2024-11-17 02:57:49.128800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.743 [2024-11-17 02:57:49.128856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.743 qpair failed and we were unable to recover it. 00:37:40.743 [2024-11-17 02:57:49.128954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.743 [2024-11-17 02:57:49.128989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.743 qpair failed and we were unable to recover it. 
00:37:40.743 [2024-11-17 02:57:49.129150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.743 [2024-11-17 02:57:49.129204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.743 qpair failed and we were unable to recover it. 00:37:40.743 [2024-11-17 02:57:49.129327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.743 [2024-11-17 02:57:49.129383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.743 qpair failed and we were unable to recover it. 00:37:40.743 [2024-11-17 02:57:49.129519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.743 [2024-11-17 02:57:49.129554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.743 qpair failed and we were unable to recover it. 00:37:40.743 [2024-11-17 02:57:49.129691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.743 [2024-11-17 02:57:49.129725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.743 qpair failed and we were unable to recover it. 00:37:40.743 [2024-11-17 02:57:49.129863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.743 [2024-11-17 02:57:49.129898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.743 qpair failed and we were unable to recover it. 
00:37:40.743 [2024-11-17 02:57:49.130028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.743 [2024-11-17 02:57:49.130078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.743 qpair failed and we were unable to recover it. 00:37:40.743 [2024-11-17 02:57:49.130252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.743 [2024-11-17 02:57:49.130303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:40.743 qpair failed and we were unable to recover it. 00:37:40.743 [2024-11-17 02:57:49.130468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.743 [2024-11-17 02:57:49.130518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:40.743 qpair failed and we were unable to recover it. 00:37:40.743 [2024-11-17 02:57:49.130662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.743 [2024-11-17 02:57:49.130745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.743 qpair failed and we were unable to recover it. 00:37:40.743 [2024-11-17 02:57:49.130904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.743 [2024-11-17 02:57:49.130940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.743 qpair failed and we were unable to recover it. 
00:37:40.743 [2024-11-17 02:57:49.131109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.743 [2024-11-17 02:57:49.131156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.743 qpair failed and we were unable to recover it. 00:37:40.743 [2024-11-17 02:57:49.131314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.743 [2024-11-17 02:57:49.131364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.743 qpair failed and we were unable to recover it. 00:37:40.743 [2024-11-17 02:57:49.131499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.743 [2024-11-17 02:57:49.131534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.743 qpair failed and we were unable to recover it. 00:37:40.743 [2024-11-17 02:57:49.131675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.743 [2024-11-17 02:57:49.131710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.743 qpair failed and we were unable to recover it. 00:37:40.743 [2024-11-17 02:57:49.131821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.743 [2024-11-17 02:57:49.131856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:40.743 qpair failed and we were unable to recover it. 
00:37:40.743 [2024-11-17 02:57:49.131984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.743 [2024-11-17 02:57:49.132035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:40.743 qpair failed and we were unable to recover it. 00:37:41.026 [2024-11-17 02:57:49.132177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.026 [2024-11-17 02:57:49.132228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.026 qpair failed and we were unable to recover it. 00:37:41.026 [2024-11-17 02:57:49.132374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.026 [2024-11-17 02:57:49.132412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.026 qpair failed and we were unable to recover it. 00:37:41.026 [2024-11-17 02:57:49.132574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.026 [2024-11-17 02:57:49.132613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.026 qpair failed and we were unable to recover it. 00:37:41.026 [2024-11-17 02:57:49.132751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.026 [2024-11-17 02:57:49.132827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.026 qpair failed and we were unable to recover it. 
00:37:41.026 [2024-11-17 02:57:49.133023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.026 [2024-11-17 02:57:49.133086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.026 qpair failed and we were unable to recover it. 00:37:41.026 [2024-11-17 02:57:49.133267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.026 [2024-11-17 02:57:49.133323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.026 qpair failed and we were unable to recover it. 00:37:41.026 [2024-11-17 02:57:49.133473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.026 [2024-11-17 02:57:49.133542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.026 qpair failed and we were unable to recover it. 00:37:41.026 [2024-11-17 02:57:49.133819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.026 [2024-11-17 02:57:49.133878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.026 qpair failed and we were unable to recover it. 00:37:41.026 [2024-11-17 02:57:49.133992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.026 [2024-11-17 02:57:49.134028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.026 qpair failed and we were unable to recover it. 
00:37:41.026 [2024-11-17 02:57:49.134239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.026 [2024-11-17 02:57:49.134294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.026 qpair failed and we were unable to recover it. 00:37:41.026 [2024-11-17 02:57:49.134430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.026 [2024-11-17 02:57:49.134471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.026 qpair failed and we were unable to recover it. 00:37:41.026 [2024-11-17 02:57:49.134653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.026 [2024-11-17 02:57:49.134716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.026 qpair failed and we were unable to recover it. 00:37:41.026 [2024-11-17 02:57:49.134895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.026 [2024-11-17 02:57:49.134959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.026 qpair failed and we were unable to recover it. 00:37:41.026 [2024-11-17 02:57:49.135122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.026 [2024-11-17 02:57:49.135176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.026 qpair failed and we were unable to recover it. 
00:37:41.026 [2024-11-17 02:57:49.135312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.026 [2024-11-17 02:57:49.135366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.026 qpair failed and we were unable to recover it. 00:37:41.026 [2024-11-17 02:57:49.135550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.026 [2024-11-17 02:57:49.135589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.026 qpair failed and we were unable to recover it. 00:37:41.026 [2024-11-17 02:57:49.135707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.026 [2024-11-17 02:57:49.135746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.026 qpair failed and we were unable to recover it. 00:37:41.026 [2024-11-17 02:57:49.135892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.026 [2024-11-17 02:57:49.135931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.026 qpair failed and we were unable to recover it. 00:37:41.026 [2024-11-17 02:57:49.136116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.027 [2024-11-17 02:57:49.136155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.027 qpair failed and we were unable to recover it. 
00:37:41.027 [2024-11-17 02:57:49.136298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.027 [2024-11-17 02:57:49.136353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.027 qpair failed and we were unable to recover it. 00:37:41.027 [2024-11-17 02:57:49.136512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.027 [2024-11-17 02:57:49.136554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.027 qpair failed and we were unable to recover it. 00:37:41.027 [2024-11-17 02:57:49.136780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.027 [2024-11-17 02:57:49.136842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.027 qpair failed and we were unable to recover it. 00:37:41.027 [2024-11-17 02:57:49.136997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.027 [2024-11-17 02:57:49.137033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.027 qpair failed and we were unable to recover it. 00:37:41.027 [2024-11-17 02:57:49.137144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.027 [2024-11-17 02:57:49.137179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.027 qpair failed and we were unable to recover it. 
00:37:41.027 [2024-11-17 02:57:49.137290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.027 [2024-11-17 02:57:49.137326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.027 qpair failed and we were unable to recover it. 00:37:41.027 [2024-11-17 02:57:49.137505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.027 [2024-11-17 02:57:49.137544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.027 qpair failed and we were unable to recover it. 00:37:41.027 [2024-11-17 02:57:49.137719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.027 [2024-11-17 02:57:49.137780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.027 qpair failed and we were unable to recover it. 00:37:41.027 [2024-11-17 02:57:49.137925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.027 [2024-11-17 02:57:49.137965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.027 qpair failed and we were unable to recover it. 00:37:41.027 [2024-11-17 02:57:49.138137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.027 [2024-11-17 02:57:49.138185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.027 qpair failed and we were unable to recover it. 
00:37:41.027 [2024-11-17 02:57:49.138357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.027 [2024-11-17 02:57:49.138399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.027 qpair failed and we were unable to recover it. 00:37:41.027 [2024-11-17 02:57:49.138587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.027 [2024-11-17 02:57:49.138629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.027 qpair failed and we were unable to recover it. 00:37:41.027 [2024-11-17 02:57:49.138835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.027 [2024-11-17 02:57:49.138898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.027 qpair failed and we were unable to recover it. 00:37:41.027 [2024-11-17 02:57:49.139058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.027 [2024-11-17 02:57:49.139104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.027 qpair failed and we were unable to recover it. 00:37:41.027 [2024-11-17 02:57:49.139250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.027 [2024-11-17 02:57:49.139287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.027 qpair failed and we were unable to recover it. 
00:37:41.027 [2024-11-17 02:57:49.139435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.027 [2024-11-17 02:57:49.139471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.027 qpair failed and we were unable to recover it. 00:37:41.027 [2024-11-17 02:57:49.139637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.027 [2024-11-17 02:57:49.139689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.027 qpair failed and we were unable to recover it. 00:37:41.027 [2024-11-17 02:57:49.139829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.027 [2024-11-17 02:57:49.139865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.027 qpair failed and we were unable to recover it. 00:37:41.027 [2024-11-17 02:57:49.140057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.027 [2024-11-17 02:57:49.140106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.027 qpair failed and we were unable to recover it. 00:37:41.027 [2024-11-17 02:57:49.140247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.027 [2024-11-17 02:57:49.140287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.027 qpair failed and we were unable to recover it. 
00:37:41.027 [2024-11-17 02:57:49.140470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.027 [2024-11-17 02:57:49.140526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.027 qpair failed and we were unable to recover it. 00:37:41.027 [2024-11-17 02:57:49.140664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.027 [2024-11-17 02:57:49.140702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.027 qpair failed and we were unable to recover it. 00:37:41.027 [2024-11-17 02:57:49.140913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.027 [2024-11-17 02:57:49.140948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.027 qpair failed and we were unable to recover it. 00:37:41.027 [2024-11-17 02:57:49.141080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.027 [2024-11-17 02:57:49.141123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.027 qpair failed and we were unable to recover it. 00:37:41.027 [2024-11-17 02:57:49.141241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.027 [2024-11-17 02:57:49.141292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.027 qpair failed and we were unable to recover it. 
00:37:41.027 [2024-11-17 02:57:49.141480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.027 [2024-11-17 02:57:49.141551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.027 qpair failed and we were unable to recover it. 00:37:41.027 [2024-11-17 02:57:49.141846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.027 [2024-11-17 02:57:49.141908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.027 qpair failed and we were unable to recover it. 00:37:41.027 [2024-11-17 02:57:49.142061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.027 [2024-11-17 02:57:49.142110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.027 qpair failed and we were unable to recover it. 00:37:41.027 [2024-11-17 02:57:49.142238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.027 [2024-11-17 02:57:49.142279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.027 qpair failed and we were unable to recover it. 00:37:41.027 [2024-11-17 02:57:49.142468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.027 [2024-11-17 02:57:49.142539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.027 qpair failed and we were unable to recover it. 
00:37:41.027 [2024-11-17 02:57:49.142737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.027 [2024-11-17 02:57:49.142796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.027 qpair failed and we were unable to recover it. 00:37:41.027 [2024-11-17 02:57:49.142919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.027 [2024-11-17 02:57:49.142959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.027 qpair failed and we were unable to recover it. 00:37:41.027 [2024-11-17 02:57:49.143128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.027 [2024-11-17 02:57:49.143168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.027 qpair failed and we were unable to recover it. 00:37:41.027 [2024-11-17 02:57:49.143356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.027 [2024-11-17 02:57:49.143406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.027 qpair failed and we were unable to recover it. 00:37:41.027 [2024-11-17 02:57:49.143621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.027 [2024-11-17 02:57:49.143684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.027 qpair failed and we were unable to recover it. 
00:37:41.027 [2024-11-17 02:57:49.143851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.027 [2024-11-17 02:57:49.143902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.027 qpair failed and we were unable to recover it. 00:37:41.027 [2024-11-17 02:57:49.144063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.028 [2024-11-17 02:57:49.144110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.028 qpair failed and we were unable to recover it. 00:37:41.028 [2024-11-17 02:57:49.144248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.028 [2024-11-17 02:57:49.144283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.028 qpair failed and we were unable to recover it. 00:37:41.028 [2024-11-17 02:57:49.144441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.028 [2024-11-17 02:57:49.144481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.028 qpair failed and we were unable to recover it. 00:37:41.028 [2024-11-17 02:57:49.144778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.028 [2024-11-17 02:57:49.144838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.028 qpair failed and we were unable to recover it. 
00:37:41.028 [2024-11-17 02:57:49.145033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.028 [2024-11-17 02:57:49.145069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.028 qpair failed and we were unable to recover it. 00:37:41.028 [2024-11-17 02:57:49.145215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.028 [2024-11-17 02:57:49.145251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.028 qpair failed and we were unable to recover it. 00:37:41.028 [2024-11-17 02:57:49.145396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.028 [2024-11-17 02:57:49.145432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.028 qpair failed and we were unable to recover it. 00:37:41.028 [2024-11-17 02:57:49.145571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.028 [2024-11-17 02:57:49.145607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.028 qpair failed and we were unable to recover it. 00:37:41.028 [2024-11-17 02:57:49.145742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.028 [2024-11-17 02:57:49.145777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.028 qpair failed and we were unable to recover it. 
00:37:41.028 [2024-11-17 02:57:49.145947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.028 [2024-11-17 02:57:49.146015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.028 qpair failed and we were unable to recover it. 00:37:41.028 [2024-11-17 02:57:49.146163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.028 [2024-11-17 02:57:49.146201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.028 qpair failed and we were unable to recover it. 00:37:41.028 [2024-11-17 02:57:49.146310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.028 [2024-11-17 02:57:49.146343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.028 qpair failed and we were unable to recover it. 00:37:41.028 [2024-11-17 02:57:49.146500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.028 [2024-11-17 02:57:49.146554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.028 qpair failed and we were unable to recover it. 00:37:41.028 [2024-11-17 02:57:49.146808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.028 [2024-11-17 02:57:49.146867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.028 qpair failed and we were unable to recover it. 
00:37:41.028 [2024-11-17 02:57:49.147038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.028 [2024-11-17 02:57:49.147074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.028 qpair failed and we were unable to recover it. 00:37:41.028 [2024-11-17 02:57:49.147204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.028 [2024-11-17 02:57:49.147239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.028 qpair failed and we were unable to recover it. 00:37:41.028 [2024-11-17 02:57:49.147366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.028 [2024-11-17 02:57:49.147442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.028 qpair failed and we were unable to recover it. 00:37:41.028 [2024-11-17 02:57:49.147680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.028 [2024-11-17 02:57:49.147755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.028 qpair failed and we were unable to recover it. 00:37:41.028 [2024-11-17 02:57:49.147872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.028 [2024-11-17 02:57:49.147924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.028 qpair failed and we were unable to recover it. 
00:37:41.028 [2024-11-17 02:57:49.148103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.028 [2024-11-17 02:57:49.148160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.028 qpair failed and we were unable to recover it.
00:37:41.028 [2024-11-17 02:57:49.148297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.028 [2024-11-17 02:57:49.148332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.028 qpair failed and we were unable to recover it.
00:37:41.028 [2024-11-17 02:57:49.148529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.028 [2024-11-17 02:57:49.148593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.028 qpair failed and we were unable to recover it.
00:37:41.028 [2024-11-17 02:57:49.148825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.028 [2024-11-17 02:57:49.148887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.028 qpair failed and we were unable to recover it.
00:37:41.028 [2024-11-17 02:57:49.149000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.028 [2024-11-17 02:57:49.149039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.028 qpair failed and we were unable to recover it.
00:37:41.028 [2024-11-17 02:57:49.149228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.028 [2024-11-17 02:57:49.149264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.028 qpair failed and we were unable to recover it.
00:37:41.028 [2024-11-17 02:57:49.149369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.028 [2024-11-17 02:57:49.149405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.028 qpair failed and we were unable to recover it.
00:37:41.028 [2024-11-17 02:57:49.149523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.028 [2024-11-17 02:57:49.149558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.028 qpair failed and we were unable to recover it.
00:37:41.028 [2024-11-17 02:57:49.149742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.028 [2024-11-17 02:57:49.149806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.028 qpair failed and we were unable to recover it.
00:37:41.028 [2024-11-17 02:57:49.150005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.028 [2024-11-17 02:57:49.150042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.028 qpair failed and we were unable to recover it.
00:37:41.028 [2024-11-17 02:57:49.150184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.028 [2024-11-17 02:57:49.150220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.028 qpair failed and we were unable to recover it.
00:37:41.028 [2024-11-17 02:57:49.150407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.028 [2024-11-17 02:57:49.150482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.028 qpair failed and we were unable to recover it.
00:37:41.028 [2024-11-17 02:57:49.150668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.028 [2024-11-17 02:57:49.150734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.028 qpair failed and we were unable to recover it.
00:37:41.028 [2024-11-17 02:57:49.150957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.028 [2024-11-17 02:57:49.151024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.028 qpair failed and we were unable to recover it.
00:37:41.028 [2024-11-17 02:57:49.151138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.028 [2024-11-17 02:57:49.151192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.028 qpair failed and we were unable to recover it.
00:37:41.028 [2024-11-17 02:57:49.151319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.028 [2024-11-17 02:57:49.151362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.028 qpair failed and we were unable to recover it.
00:37:41.029 [2024-11-17 02:57:49.151510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.029 [2024-11-17 02:57:49.151549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.029 qpair failed and we were unable to recover it.
00:37:41.029 [2024-11-17 02:57:49.151672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.029 [2024-11-17 02:57:49.151711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.029 qpair failed and we were unable to recover it.
00:37:41.029 [2024-11-17 02:57:49.151886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.029 [2024-11-17 02:57:49.151924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.029 qpair failed and we were unable to recover it.
00:37:41.029 [2024-11-17 02:57:49.152049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.029 [2024-11-17 02:57:49.152088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.029 qpair failed and we were unable to recover it.
00:37:41.029 [2024-11-17 02:57:49.152257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.029 [2024-11-17 02:57:49.152292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.029 qpair failed and we were unable to recover it.
00:37:41.029 [2024-11-17 02:57:49.152425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.029 [2024-11-17 02:57:49.152460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.029 qpair failed and we were unable to recover it.
00:37:41.029 [2024-11-17 02:57:49.152608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.029 [2024-11-17 02:57:49.152646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.029 qpair failed and we were unable to recover it.
00:37:41.029 [2024-11-17 02:57:49.152760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.029 [2024-11-17 02:57:49.152798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.029 qpair failed and we were unable to recover it.
00:37:41.029 [2024-11-17 02:57:49.152920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.029 [2024-11-17 02:57:49.152972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.029 qpair failed and we were unable to recover it.
00:37:41.029 [2024-11-17 02:57:49.153082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.029 [2024-11-17 02:57:49.153129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.029 qpair failed and we were unable to recover it.
00:37:41.029 [2024-11-17 02:57:49.153252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.029 [2024-11-17 02:57:49.153287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.029 qpair failed and we were unable to recover it.
00:37:41.029 [2024-11-17 02:57:49.153431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.029 [2024-11-17 02:57:49.153465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.029 qpair failed and we were unable to recover it.
00:37:41.029 [2024-11-17 02:57:49.153590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.029 [2024-11-17 02:57:49.153644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.029 qpair failed and we were unable to recover it.
00:37:41.029 [2024-11-17 02:57:49.153815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.029 [2024-11-17 02:57:49.153852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.029 qpair failed and we were unable to recover it.
00:37:41.029 [2024-11-17 02:57:49.154044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.029 [2024-11-17 02:57:49.154105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.029 qpair failed and we were unable to recover it.
00:37:41.029 [2024-11-17 02:57:49.154239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.029 [2024-11-17 02:57:49.154274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.029 qpair failed and we were unable to recover it.
00:37:41.029 [2024-11-17 02:57:49.154427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.029 [2024-11-17 02:57:49.154465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.029 qpair failed and we were unable to recover it.
00:37:41.029 [2024-11-17 02:57:49.154664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.029 [2024-11-17 02:57:49.154761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.029 qpair failed and we were unable to recover it.
00:37:41.029 [2024-11-17 02:57:49.154937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.029 [2024-11-17 02:57:49.154976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.029 qpair failed and we were unable to recover it.
00:37:41.029 [2024-11-17 02:57:49.155138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.029 [2024-11-17 02:57:49.155173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.029 qpair failed and we were unable to recover it.
00:37:41.029 [2024-11-17 02:57:49.155329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.029 [2024-11-17 02:57:49.155379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.029 qpair failed and we were unable to recover it.
00:37:41.029 [2024-11-17 02:57:49.155526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.029 [2024-11-17 02:57:49.155582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.029 qpair failed and we were unable to recover it.
00:37:41.029 [2024-11-17 02:57:49.155835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.029 [2024-11-17 02:57:49.155891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.029 qpair failed and we were unable to recover it.
00:37:41.029 [2024-11-17 02:57:49.156048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.029 [2024-11-17 02:57:49.156106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.029 qpair failed and we were unable to recover it.
00:37:41.029 [2024-11-17 02:57:49.156225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.029 [2024-11-17 02:57:49.156260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.029 qpair failed and we were unable to recover it.
00:37:41.029 [2024-11-17 02:57:49.156418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.029 [2024-11-17 02:57:49.156452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.029 qpair failed and we were unable to recover it.
00:37:41.029 [2024-11-17 02:57:49.156560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.029 [2024-11-17 02:57:49.156614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.029 qpair failed and we were unable to recover it.
00:37:41.029 [2024-11-17 02:57:49.156841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.029 [2024-11-17 02:57:49.156904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.029 qpair failed and we were unable to recover it.
00:37:41.029 [2024-11-17 02:57:49.157039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.030 [2024-11-17 02:57:49.157076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.030 qpair failed and we were unable to recover it.
00:37:41.030 [2024-11-17 02:57:49.157192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.030 [2024-11-17 02:57:49.157231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.030 qpair failed and we were unable to recover it.
00:37:41.030 [2024-11-17 02:57:49.157397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.030 [2024-11-17 02:57:49.157433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.030 qpair failed and we were unable to recover it.
00:37:41.030 [2024-11-17 02:57:49.157551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.030 [2024-11-17 02:57:49.157587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.030 qpair failed and we were unable to recover it.
00:37:41.030 [2024-11-17 02:57:49.157722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.030 [2024-11-17 02:57:49.157761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.030 qpair failed and we were unable to recover it.
00:37:41.030 [2024-11-17 02:57:49.157936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.030 [2024-11-17 02:57:49.157976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.030 qpair failed and we were unable to recover it.
00:37:41.030 [2024-11-17 02:57:49.158152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.030 [2024-11-17 02:57:49.158203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.030 qpair failed and we were unable to recover it.
00:37:41.030 [2024-11-17 02:57:49.158345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.030 [2024-11-17 02:57:49.158401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.030 qpair failed and we were unable to recover it.
00:37:41.030 [2024-11-17 02:57:49.158646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.030 [2024-11-17 02:57:49.158682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.030 qpair failed and we were unable to recover it.
00:37:41.030 [2024-11-17 02:57:49.158815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.030 [2024-11-17 02:57:49.158874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.030 qpair failed and we were unable to recover it.
00:37:41.030 [2024-11-17 02:57:49.159003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.030 [2024-11-17 02:57:49.159037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.030 qpair failed and we were unable to recover it.
00:37:41.030 [2024-11-17 02:57:49.159249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.030 [2024-11-17 02:57:49.159302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.030 qpair failed and we were unable to recover it.
00:37:41.030 [2024-11-17 02:57:49.159514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.030 [2024-11-17 02:57:49.159570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.030 qpair failed and we were unable to recover it.
00:37:41.030 [2024-11-17 02:57:49.159728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.030 [2024-11-17 02:57:49.159809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.030 qpair failed and we were unable to recover it.
00:37:41.030 [2024-11-17 02:57:49.159959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.030 [2024-11-17 02:57:49.160008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.030 qpair failed and we were unable to recover it.
00:37:41.030 [2024-11-17 02:57:49.160220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.030 [2024-11-17 02:57:49.160257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.030 qpair failed and we were unable to recover it.
00:37:41.030 [2024-11-17 02:57:49.160446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.030 [2024-11-17 02:57:49.160502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.030 qpair failed and we were unable to recover it.
00:37:41.030 [2024-11-17 02:57:49.160697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.030 [2024-11-17 02:57:49.160758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.030 qpair failed and we were unable to recover it.
00:37:41.030 [2024-11-17 02:57:49.160947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.030 [2024-11-17 02:57:49.161000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.030 qpair failed and we were unable to recover it.
00:37:41.030 [2024-11-17 02:57:49.161172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.030 [2024-11-17 02:57:49.161209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.030 qpair failed and we were unable to recover it.
00:37:41.030 [2024-11-17 02:57:49.161316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.030 [2024-11-17 02:57:49.161370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.030 qpair failed and we were unable to recover it.
00:37:41.030 [2024-11-17 02:57:49.161514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.030 [2024-11-17 02:57:49.161553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.030 qpair failed and we were unable to recover it.
00:37:41.030 [2024-11-17 02:57:49.161730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.030 [2024-11-17 02:57:49.161771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.030 qpair failed and we were unable to recover it.
00:37:41.030 [2024-11-17 02:57:49.161979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.030 [2024-11-17 02:57:49.162035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.030 qpair failed and we were unable to recover it.
00:37:41.030 [2024-11-17 02:57:49.162186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.030 [2024-11-17 02:57:49.162224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.030 qpair failed and we were unable to recover it.
00:37:41.030 [2024-11-17 02:57:49.162353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.030 [2024-11-17 02:57:49.162399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.030 qpair failed and we were unable to recover it.
00:37:41.030 [2024-11-17 02:57:49.162545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.030 [2024-11-17 02:57:49.162585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.030 qpair failed and we were unable to recover it.
00:37:41.030 [2024-11-17 02:57:49.162759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.030 [2024-11-17 02:57:49.162798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.030 qpair failed and we were unable to recover it.
00:37:41.030 [2024-11-17 02:57:49.162916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.030 [2024-11-17 02:57:49.162952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.030 qpair failed and we were unable to recover it.
00:37:41.030 [2024-11-17 02:57:49.163119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.030 [2024-11-17 02:57:49.163156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.030 qpair failed and we were unable to recover it.
00:37:41.030 [2024-11-17 02:57:49.163292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.030 [2024-11-17 02:57:49.163327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.030 qpair failed and we were unable to recover it.
00:37:41.030 [2024-11-17 02:57:49.163598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.030 [2024-11-17 02:57:49.163663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.030 qpair failed and we were unable to recover it.
00:37:41.030 [2024-11-17 02:57:49.163852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.030 [2024-11-17 02:57:49.163918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.030 qpair failed and we were unable to recover it.
00:37:41.030 [2024-11-17 02:57:49.164080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.030 [2024-11-17 02:57:49.164142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.030 qpair failed and we were unable to recover it.
00:37:41.030 [2024-11-17 02:57:49.164301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.030 [2024-11-17 02:57:49.164336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.031 qpair failed and we were unable to recover it.
00:37:41.031 [2024-11-17 02:57:49.164472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.031 [2024-11-17 02:57:49.164506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.031 qpair failed and we were unable to recover it.
00:37:41.031 [2024-11-17 02:57:49.164657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.031 [2024-11-17 02:57:49.164707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.031 qpair failed and we were unable to recover it.
00:37:41.031 [2024-11-17 02:57:49.164854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.031 [2024-11-17 02:57:49.164893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.031 qpair failed and we were unable to recover it.
00:37:41.031 [2024-11-17 02:57:49.165047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.031 [2024-11-17 02:57:49.165105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.031 qpair failed and we were unable to recover it.
00:37:41.031 [2024-11-17 02:57:49.165241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.031 [2024-11-17 02:57:49.165281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.031 qpair failed and we were unable to recover it.
00:37:41.031 [2024-11-17 02:57:49.165482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.031 [2024-11-17 02:57:49.165545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.031 qpair failed and we were unable to recover it.
00:37:41.031 [2024-11-17 02:57:49.165699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.031 [2024-11-17 02:57:49.165741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.031 qpair failed and we were unable to recover it.
00:37:41.031 [2024-11-17 02:57:49.165893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.031 [2024-11-17 02:57:49.165932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.031 qpair failed and we were unable to recover it.
00:37:41.031 [2024-11-17 02:57:49.166127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.031 [2024-11-17 02:57:49.166165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.031 qpair failed and we were unable to recover it.
00:37:41.031 [2024-11-17 02:57:49.166306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.031 [2024-11-17 02:57:49.166345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.031 qpair failed and we were unable to recover it.
00:37:41.031 [2024-11-17 02:57:49.166482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.031 [2024-11-17 02:57:49.166518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.031 qpair failed and we were unable to recover it.
00:37:41.031 [2024-11-17 02:57:49.166653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.031 [2024-11-17 02:57:49.166690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.031 qpair failed and we were unable to recover it.
00:37:41.031 [2024-11-17 02:57:49.166951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.031 [2024-11-17 02:57:49.167011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.031 qpair failed and we were unable to recover it.
00:37:41.031 [2024-11-17 02:57:49.167138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.031 [2024-11-17 02:57:49.167194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.031 qpair failed and we were unable to recover it.
00:37:41.031 [2024-11-17 02:57:49.167398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.031 [2024-11-17 02:57:49.167462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.031 qpair failed and we were unable to recover it.
00:37:41.031 [2024-11-17 02:57:49.167601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.031 [2024-11-17 02:57:49.167657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.031 qpair failed and we were unable to recover it.
00:37:41.031 [2024-11-17 02:57:49.167833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.031 [2024-11-17 02:57:49.167872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.031 qpair failed and we were unable to recover it. 00:37:41.031 [2024-11-17 02:57:49.167996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.031 [2024-11-17 02:57:49.168046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.031 qpair failed and we were unable to recover it. 00:37:41.031 [2024-11-17 02:57:49.168175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.031 [2024-11-17 02:57:49.168211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.031 qpair failed and we were unable to recover it. 00:37:41.031 [2024-11-17 02:57:49.168396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.031 [2024-11-17 02:57:49.168466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.031 qpair failed and we were unable to recover it. 00:37:41.031 [2024-11-17 02:57:49.168634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.031 [2024-11-17 02:57:49.168690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.031 qpair failed and we were unable to recover it. 
00:37:41.031 [2024-11-17 02:57:49.168895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.031 [2024-11-17 02:57:49.168934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.031 qpair failed and we were unable to recover it. 00:37:41.031 [2024-11-17 02:57:49.169055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.031 [2024-11-17 02:57:49.169101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.031 qpair failed and we were unable to recover it. 00:37:41.031 [2024-11-17 02:57:49.169268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.031 [2024-11-17 02:57:49.169304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.031 qpair failed and we were unable to recover it. 00:37:41.031 [2024-11-17 02:57:49.169414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.031 [2024-11-17 02:57:49.169448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.031 qpair failed and we were unable to recover it. 00:37:41.031 [2024-11-17 02:57:49.169588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.031 [2024-11-17 02:57:49.169643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.031 qpair failed and we were unable to recover it. 
00:37:41.031 [2024-11-17 02:57:49.169780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.032 [2024-11-17 02:57:49.169836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.032 qpair failed and we were unable to recover it. 00:37:41.032 [2024-11-17 02:57:49.170038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.032 [2024-11-17 02:57:49.170080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.032 qpair failed and we were unable to recover it. 00:37:41.032 [2024-11-17 02:57:49.170223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.032 [2024-11-17 02:57:49.170260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.032 qpair failed and we were unable to recover it. 00:37:41.032 [2024-11-17 02:57:49.170395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.032 [2024-11-17 02:57:49.170430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.032 qpair failed and we were unable to recover it. 00:37:41.032 [2024-11-17 02:57:49.170580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.032 [2024-11-17 02:57:49.170620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.032 qpair failed and we were unable to recover it. 
00:37:41.032 [2024-11-17 02:57:49.170840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.032 [2024-11-17 02:57:49.170881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.032 qpair failed and we were unable to recover it. 00:37:41.032 [2024-11-17 02:57:49.171027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.032 [2024-11-17 02:57:49.171067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.032 qpair failed and we were unable to recover it. 00:37:41.032 [2024-11-17 02:57:49.171195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.032 [2024-11-17 02:57:49.171231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.032 qpair failed and we were unable to recover it. 00:37:41.032 [2024-11-17 02:57:49.171340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.032 [2024-11-17 02:57:49.171375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.032 qpair failed and we were unable to recover it. 00:37:41.032 [2024-11-17 02:57:49.171533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.032 [2024-11-17 02:57:49.171568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.032 qpair failed and we were unable to recover it. 
00:37:41.032 [2024-11-17 02:57:49.171785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.032 [2024-11-17 02:57:49.171824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.032 qpair failed and we were unable to recover it. 00:37:41.032 [2024-11-17 02:57:49.171976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.032 [2024-11-17 02:57:49.172020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.032 qpair failed and we were unable to recover it. 00:37:41.032 [2024-11-17 02:57:49.172184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.032 [2024-11-17 02:57:49.172229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.032 qpair failed and we were unable to recover it. 00:37:41.032 [2024-11-17 02:57:49.172523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.032 [2024-11-17 02:57:49.172564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.032 qpair failed and we were unable to recover it. 00:37:41.032 [2024-11-17 02:57:49.172830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.032 [2024-11-17 02:57:49.172923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.032 qpair failed and we were unable to recover it. 
00:37:41.032 [2024-11-17 02:57:49.173072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.032 [2024-11-17 02:57:49.173116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.032 qpair failed and we were unable to recover it. 00:37:41.032 [2024-11-17 02:57:49.173264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.032 [2024-11-17 02:57:49.173300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.032 qpair failed and we were unable to recover it. 00:37:41.032 [2024-11-17 02:57:49.173429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.032 [2024-11-17 02:57:49.173467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.032 qpair failed and we were unable to recover it. 00:37:41.032 [2024-11-17 02:57:49.173606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.032 [2024-11-17 02:57:49.173646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.032 qpair failed and we were unable to recover it. 00:37:41.032 [2024-11-17 02:57:49.173785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.032 [2024-11-17 02:57:49.173836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.032 qpair failed and we were unable to recover it. 
00:37:41.032 [2024-11-17 02:57:49.173986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.032 [2024-11-17 02:57:49.174036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.032 qpair failed and we were unable to recover it. 00:37:41.032 [2024-11-17 02:57:49.174210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.032 [2024-11-17 02:57:49.174249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.032 qpair failed and we were unable to recover it. 00:37:41.032 [2024-11-17 02:57:49.174398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.032 [2024-11-17 02:57:49.174464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.032 qpair failed and we were unable to recover it. 00:37:41.032 [2024-11-17 02:57:49.174667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.032 [2024-11-17 02:57:49.174706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.032 qpair failed and we were unable to recover it. 00:37:41.032 [2024-11-17 02:57:49.174837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.032 [2024-11-17 02:57:49.174872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.032 qpair failed and we were unable to recover it. 
00:37:41.032 [2024-11-17 02:57:49.175042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.032 [2024-11-17 02:57:49.175084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.032 qpair failed and we were unable to recover it. 00:37:41.032 [2024-11-17 02:57:49.175273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.032 [2024-11-17 02:57:49.175313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.032 qpair failed and we were unable to recover it. 00:37:41.032 [2024-11-17 02:57:49.175485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.032 [2024-11-17 02:57:49.175540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.032 qpair failed and we were unable to recover it. 00:37:41.032 [2024-11-17 02:57:49.175814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.032 [2024-11-17 02:57:49.175882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.032 qpair failed and we were unable to recover it. 00:37:41.032 [2024-11-17 02:57:49.176040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.032 [2024-11-17 02:57:49.176075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.032 qpair failed and we were unable to recover it. 
00:37:41.032 [2024-11-17 02:57:49.176195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.032 [2024-11-17 02:57:49.176230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.032 qpair failed and we were unable to recover it. 00:37:41.032 [2024-11-17 02:57:49.176351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.032 [2024-11-17 02:57:49.176401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.032 qpair failed and we were unable to recover it. 00:37:41.032 [2024-11-17 02:57:49.176602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.032 [2024-11-17 02:57:49.176644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.032 qpair failed and we were unable to recover it. 00:37:41.032 [2024-11-17 02:57:49.176859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.032 [2024-11-17 02:57:49.176918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.032 qpair failed and we were unable to recover it. 00:37:41.032 [2024-11-17 02:57:49.177064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.033 [2024-11-17 02:57:49.177111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.033 qpair failed and we were unable to recover it. 
00:37:41.033 [2024-11-17 02:57:49.177267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.033 [2024-11-17 02:57:49.177302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.033 qpair failed and we were unable to recover it. 00:37:41.033 [2024-11-17 02:57:49.177489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.033 [2024-11-17 02:57:49.177529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.033 qpair failed and we were unable to recover it. 00:37:41.033 [2024-11-17 02:57:49.177732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.033 [2024-11-17 02:57:49.177794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.033 qpair failed and we were unable to recover it. 00:37:41.033 [2024-11-17 02:57:49.178022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.033 [2024-11-17 02:57:49.178060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.033 qpair failed and we were unable to recover it. 00:37:41.033 [2024-11-17 02:57:49.178234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.033 [2024-11-17 02:57:49.178270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.033 qpair failed and we were unable to recover it. 
00:37:41.033 [2024-11-17 02:57:49.178429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.033 [2024-11-17 02:57:49.178468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.033 qpair failed and we were unable to recover it. 00:37:41.033 [2024-11-17 02:57:49.178664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.033 [2024-11-17 02:57:49.178735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.033 qpair failed and we were unable to recover it. 00:37:41.033 [2024-11-17 02:57:49.178853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.033 [2024-11-17 02:57:49.178890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.033 qpair failed and we were unable to recover it. 00:37:41.033 [2024-11-17 02:57:49.179084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.033 [2024-11-17 02:57:49.179167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.033 qpair failed and we were unable to recover it. 00:37:41.033 [2024-11-17 02:57:49.179325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.033 [2024-11-17 02:57:49.179374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.033 qpair failed and we were unable to recover it. 
00:37:41.033 [2024-11-17 02:57:49.179634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.033 [2024-11-17 02:57:49.179689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.033 qpair failed and we were unable to recover it. 00:37:41.033 [2024-11-17 02:57:49.179936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.033 [2024-11-17 02:57:49.179980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.033 qpair failed and we were unable to recover it. 00:37:41.033 [2024-11-17 02:57:49.180149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.033 [2024-11-17 02:57:49.180186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.033 qpair failed and we were unable to recover it. 00:37:41.033 [2024-11-17 02:57:49.180324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.033 [2024-11-17 02:57:49.180361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.033 qpair failed and we were unable to recover it. 00:37:41.033 [2024-11-17 02:57:49.180581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.033 [2024-11-17 02:57:49.180654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.033 qpair failed and we were unable to recover it. 
00:37:41.033 [2024-11-17 02:57:49.180934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.033 [2024-11-17 02:57:49.180996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.033 qpair failed and we were unable to recover it. 00:37:41.033 [2024-11-17 02:57:49.181173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.033 [2024-11-17 02:57:49.181210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.033 qpair failed and we were unable to recover it. 00:37:41.033 [2024-11-17 02:57:49.181349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.033 [2024-11-17 02:57:49.181402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.033 qpair failed and we were unable to recover it. 00:37:41.033 [2024-11-17 02:57:49.181546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.033 [2024-11-17 02:57:49.181584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.033 qpair failed and we were unable to recover it. 00:37:41.033 [2024-11-17 02:57:49.181712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.033 [2024-11-17 02:57:49.181763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.033 qpair failed and we were unable to recover it. 
00:37:41.033 [2024-11-17 02:57:49.181889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.033 [2024-11-17 02:57:49.181930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.033 qpair failed and we were unable to recover it. 00:37:41.033 [2024-11-17 02:57:49.182075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.033 [2024-11-17 02:57:49.182121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.033 qpair failed and we were unable to recover it. 00:37:41.033 [2024-11-17 02:57:49.182277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.033 [2024-11-17 02:57:49.182312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.033 qpair failed and we were unable to recover it. 00:37:41.033 [2024-11-17 02:57:49.182408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.033 [2024-11-17 02:57:49.182457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.033 qpair failed and we were unable to recover it. 00:37:41.033 [2024-11-17 02:57:49.182720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.033 [2024-11-17 02:57:49.182777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.033 qpair failed and we were unable to recover it. 
00:37:41.033 [2024-11-17 02:57:49.182959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.033 [2024-11-17 02:57:49.182998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.033 qpair failed and we were unable to recover it. 00:37:41.033 [2024-11-17 02:57:49.183168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.033 [2024-11-17 02:57:49.183206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.033 qpair failed and we were unable to recover it. 00:37:41.033 [2024-11-17 02:57:49.183421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.033 [2024-11-17 02:57:49.183474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.033 qpair failed and we were unable to recover it. 00:37:41.033 [2024-11-17 02:57:49.183735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.033 [2024-11-17 02:57:49.183794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.033 qpair failed and we were unable to recover it. 00:37:41.033 [2024-11-17 02:57:49.183931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.034 [2024-11-17 02:57:49.183969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.034 qpair failed and we were unable to recover it. 
00:37:41.034 [2024-11-17 02:57:49.184083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.034 [2024-11-17 02:57:49.184130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.034 qpair failed and we were unable to recover it. 00:37:41.034 [2024-11-17 02:57:49.184285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.034 [2024-11-17 02:57:49.184335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.034 qpair failed and we were unable to recover it. 00:37:41.034 [2024-11-17 02:57:49.184474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.034 [2024-11-17 02:57:49.184511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.034 qpair failed and we were unable to recover it. 00:37:41.034 [2024-11-17 02:57:49.184706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.034 [2024-11-17 02:57:49.184777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.034 qpair failed and we were unable to recover it. 00:37:41.034 [2024-11-17 02:57:49.184936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.034 [2024-11-17 02:57:49.184976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.034 qpair failed and we were unable to recover it. 
00:37:41.034 [2024-11-17 02:57:49.185108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.034 [2024-11-17 02:57:49.185148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.034 qpair failed and we were unable to recover it. 00:37:41.034 [2024-11-17 02:57:49.185284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.034 [2024-11-17 02:57:49.185320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.034 qpair failed and we were unable to recover it. 00:37:41.034 [2024-11-17 02:57:49.185454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.034 [2024-11-17 02:57:49.185508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.034 qpair failed and we were unable to recover it. 00:37:41.034 [2024-11-17 02:57:49.185709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.034 [2024-11-17 02:57:49.185768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.034 qpair failed and we were unable to recover it. 00:37:41.034 [2024-11-17 02:57:49.185982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.034 [2024-11-17 02:57:49.186018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.034 qpair failed and we were unable to recover it. 
00:37:41.034 [2024-11-17 02:57:49.186157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.034 [2024-11-17 02:57:49.186194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.034 qpair failed and we were unable to recover it.
00:37:41.034 [2024-11-17 02:57:49.186341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.034 [2024-11-17 02:57:49.186377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.034 qpair failed and we were unable to recover it.
00:37:41.034 [2024-11-17 02:57:49.186505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.034 [2024-11-17 02:57:49.186557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.034 qpair failed and we were unable to recover it.
00:37:41.034 [2024-11-17 02:57:49.186662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.034 [2024-11-17 02:57:49.186700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.034 qpair failed and we were unable to recover it.
00:37:41.034 [2024-11-17 02:57:49.186855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.034 [2024-11-17 02:57:49.186910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.034 qpair failed and we were unable to recover it.
00:37:41.034 [2024-11-17 02:57:49.187018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.034 [2024-11-17 02:57:49.187053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.034 qpair failed and we were unable to recover it.
00:37:41.034 [2024-11-17 02:57:49.187190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.034 [2024-11-17 02:57:49.187225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.034 qpair failed and we were unable to recover it.
00:37:41.034 [2024-11-17 02:57:49.187391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.034 [2024-11-17 02:57:49.187430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.034 qpair failed and we were unable to recover it.
00:37:41.034 [2024-11-17 02:57:49.187579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.034 [2024-11-17 02:57:49.187617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.034 qpair failed and we were unable to recover it.
00:37:41.034 [2024-11-17 02:57:49.187775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.034 [2024-11-17 02:57:49.187814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.034 qpair failed and we were unable to recover it.
00:37:41.034 [2024-11-17 02:57:49.187950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.034 [2024-11-17 02:57:49.187986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.034 qpair failed and we were unable to recover it.
00:37:41.034 [2024-11-17 02:57:49.188138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.034 [2024-11-17 02:57:49.188189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.034 qpair failed and we were unable to recover it.
00:37:41.034 [2024-11-17 02:57:49.188374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.034 [2024-11-17 02:57:49.188423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.034 qpair failed and we were unable to recover it.
00:37:41.034 [2024-11-17 02:57:49.188588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.034 [2024-11-17 02:57:49.188628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.034 qpair failed and we were unable to recover it.
00:37:41.034 [2024-11-17 02:57:49.188781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.034 [2024-11-17 02:57:49.188820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.034 qpair failed and we were unable to recover it.
00:37:41.034 [2024-11-17 02:57:49.188965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.034 [2024-11-17 02:57:49.189004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.034 qpair failed and we were unable to recover it.
00:37:41.034 [2024-11-17 02:57:49.189133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.034 [2024-11-17 02:57:49.189169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.034 qpair failed and we were unable to recover it.
00:37:41.034 [2024-11-17 02:57:49.189338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.034 [2024-11-17 02:57:49.189374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.034 qpair failed and we were unable to recover it.
00:37:41.034 [2024-11-17 02:57:49.189560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.034 [2024-11-17 02:57:49.189599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.034 qpair failed and we were unable to recover it.
00:37:41.034 [2024-11-17 02:57:49.189765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.034 [2024-11-17 02:57:49.189803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.034 qpair failed and we were unable to recover it.
00:37:41.034 [2024-11-17 02:57:49.189959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.034 [2024-11-17 02:57:49.190009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.034 qpair failed and we were unable to recover it.
00:37:41.034 [2024-11-17 02:57:49.190169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.035 [2024-11-17 02:57:49.190220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.035 qpair failed and we were unable to recover it.
00:37:41.035 [2024-11-17 02:57:49.190383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.035 [2024-11-17 02:57:49.190433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.035 qpair failed and we were unable to recover it.
00:37:41.035 [2024-11-17 02:57:49.190664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.035 [2024-11-17 02:57:49.190703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.035 qpair failed and we were unable to recover it.
00:37:41.035 [2024-11-17 02:57:49.190893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.035 [2024-11-17 02:57:49.190954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.035 qpair failed and we were unable to recover it.
00:37:41.035 [2024-11-17 02:57:49.191121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.035 [2024-11-17 02:57:49.191158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.035 qpair failed and we were unable to recover it.
00:37:41.035 [2024-11-17 02:57:49.191318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.035 [2024-11-17 02:57:49.191359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.035 qpair failed and we were unable to recover it.
00:37:41.035 [2024-11-17 02:57:49.191516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.035 [2024-11-17 02:57:49.191583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.035 qpair failed and we were unable to recover it.
00:37:41.035 [2024-11-17 02:57:49.191855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.035 [2024-11-17 02:57:49.191918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.035 qpair failed and we were unable to recover it.
00:37:41.035 [2024-11-17 02:57:49.192079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.035 [2024-11-17 02:57:49.192125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.035 qpair failed and we were unable to recover it.
00:37:41.035 [2024-11-17 02:57:49.192261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.035 [2024-11-17 02:57:49.192297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.035 qpair failed and we were unable to recover it.
00:37:41.035 [2024-11-17 02:57:49.192413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.035 [2024-11-17 02:57:49.192448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.035 qpair failed and we were unable to recover it.
00:37:41.035 [2024-11-17 02:57:49.192593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.035 [2024-11-17 02:57:49.192631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.035 qpair failed and we were unable to recover it.
00:37:41.035 [2024-11-17 02:57:49.192848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.035 [2024-11-17 02:57:49.192889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.035 qpair failed and we were unable to recover it.
00:37:41.035 [2024-11-17 02:57:49.193054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.035 [2024-11-17 02:57:49.193093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.035 qpair failed and we were unable to recover it.
00:37:41.035 [2024-11-17 02:57:49.193228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.035 [2024-11-17 02:57:49.193262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.035 qpair failed and we were unable to recover it.
00:37:41.035 [2024-11-17 02:57:49.193416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.035 [2024-11-17 02:57:49.193487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.035 qpair failed and we were unable to recover it.
00:37:41.035 [2024-11-17 02:57:49.193675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.035 [2024-11-17 02:57:49.193716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.035 qpair failed and we were unable to recover it.
00:37:41.035 [2024-11-17 02:57:49.193924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.035 [2024-11-17 02:57:49.193963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.035 qpair failed and we were unable to recover it.
00:37:41.035 [2024-11-17 02:57:49.194084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.035 [2024-11-17 02:57:49.194147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.035 qpair failed and we were unable to recover it.
00:37:41.035 [2024-11-17 02:57:49.194294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.035 [2024-11-17 02:57:49.194344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.035 qpair failed and we were unable to recover it.
00:37:41.035 [2024-11-17 02:57:49.194480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.035 [2024-11-17 02:57:49.194529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.035 qpair failed and we were unable to recover it.
00:37:41.035 [2024-11-17 02:57:49.194791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.035 [2024-11-17 02:57:49.194849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.035 qpair failed and we were unable to recover it.
00:37:41.035 [2024-11-17 02:57:49.194969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.035 [2024-11-17 02:57:49.195006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.035 qpair failed and we were unable to recover it.
00:37:41.035 [2024-11-17 02:57:49.195158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.035 [2024-11-17 02:57:49.195195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.035 qpair failed and we were unable to recover it.
00:37:41.035 [2024-11-17 02:57:49.195335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.035 [2024-11-17 02:57:49.195390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.035 qpair failed and we were unable to recover it.
00:37:41.035 [2024-11-17 02:57:49.195545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.035 [2024-11-17 02:57:49.195610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.035 qpair failed and we were unable to recover it.
00:37:41.035 [2024-11-17 02:57:49.195836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.035 [2024-11-17 02:57:49.195894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.035 qpair failed and we were unable to recover it.
00:37:41.035 [2024-11-17 02:57:49.196050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.035 [2024-11-17 02:57:49.196087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.035 qpair failed and we were unable to recover it.
00:37:41.035 [2024-11-17 02:57:49.196207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.035 [2024-11-17 02:57:49.196241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.035 qpair failed and we were unable to recover it.
00:37:41.035 [2024-11-17 02:57:49.196397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.035 [2024-11-17 02:57:49.196432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.035 qpair failed and we were unable to recover it.
00:37:41.035 [2024-11-17 02:57:49.196606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.035 [2024-11-17 02:57:49.196660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.035 qpair failed and we were unable to recover it.
00:37:41.035 [2024-11-17 02:57:49.196855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.035 [2024-11-17 02:57:49.196898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.035 qpair failed and we were unable to recover it.
00:37:41.035 [2024-11-17 02:57:49.197030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.035 [2024-11-17 02:57:49.197075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.036 qpair failed and we were unable to recover it.
00:37:41.036 [2024-11-17 02:57:49.197268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.036 [2024-11-17 02:57:49.197317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.036 qpair failed and we were unable to recover it.
00:37:41.036 [2024-11-17 02:57:49.197474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.036 [2024-11-17 02:57:49.197547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.036 qpair failed and we were unable to recover it.
00:37:41.036 [2024-11-17 02:57:49.197715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.036 [2024-11-17 02:57:49.197775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.036 qpair failed and we were unable to recover it.
00:37:41.036 [2024-11-17 02:57:49.197905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.036 [2024-11-17 02:57:49.197939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.036 qpair failed and we were unable to recover it.
00:37:41.036 [2024-11-17 02:57:49.198047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.036 [2024-11-17 02:57:49.198084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.036 qpair failed and we were unable to recover it.
00:37:41.036 [2024-11-17 02:57:49.198218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.036 [2024-11-17 02:57:49.198268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.036 qpair failed and we were unable to recover it.
00:37:41.036 [2024-11-17 02:57:49.198433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.036 [2024-11-17 02:57:49.198476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.036 qpair failed and we were unable to recover it.
00:37:41.036 [2024-11-17 02:57:49.198650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.036 [2024-11-17 02:57:49.198710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.036 qpair failed and we were unable to recover it.
00:37:41.036 [2024-11-17 02:57:49.198892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.036 [2024-11-17 02:57:49.198959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.036 qpair failed and we were unable to recover it.
00:37:41.036 [2024-11-17 02:57:49.199106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.036 [2024-11-17 02:57:49.199144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.036 qpair failed and we were unable to recover it.
00:37:41.036 [2024-11-17 02:57:49.199279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.036 [2024-11-17 02:57:49.199315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.036 qpair failed and we were unable to recover it.
00:37:41.036 [2024-11-17 02:57:49.199513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.036 [2024-11-17 02:57:49.199583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.036 qpair failed and we were unable to recover it.
00:37:41.036 [2024-11-17 02:57:49.199796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.036 [2024-11-17 02:57:49.199838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.036 qpair failed and we were unable to recover it.
00:37:41.036 [2024-11-17 02:57:49.200016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.036 [2024-11-17 02:57:49.200052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.036 qpair failed and we were unable to recover it.
00:37:41.036 [2024-11-17 02:57:49.200210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.036 [2024-11-17 02:57:49.200246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.036 qpair failed and we were unable to recover it.
00:37:41.036 [2024-11-17 02:57:49.200350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.036 [2024-11-17 02:57:49.200403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.036 qpair failed and we were unable to recover it.
00:37:41.036 [2024-11-17 02:57:49.200549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.036 [2024-11-17 02:57:49.200586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.036 qpair failed and we were unable to recover it.
00:37:41.036 [2024-11-17 02:57:49.200767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.036 [2024-11-17 02:57:49.200806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.036 qpair failed and we were unable to recover it.
00:37:41.036 [2024-11-17 02:57:49.200936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.036 [2024-11-17 02:57:49.201002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.036 qpair failed and we were unable to recover it.
00:37:41.036 [2024-11-17 02:57:49.201141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.036 [2024-11-17 02:57:49.201197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.036 qpair failed and we were unable to recover it.
00:37:41.036 [2024-11-17 02:57:49.201330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.036 [2024-11-17 02:57:49.201371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.036 qpair failed and we were unable to recover it.
00:37:41.036 [2024-11-17 02:57:49.201528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.036 [2024-11-17 02:57:49.201590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.036 qpair failed and we were unable to recover it.
00:37:41.036 [2024-11-17 02:57:49.201719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.036 [2024-11-17 02:57:49.201758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.036 qpair failed and we were unable to recover it.
00:37:41.036 [2024-11-17 02:57:49.201910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.036 [2024-11-17 02:57:49.201949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.036 qpair failed and we were unable to recover it.
00:37:41.036 [2024-11-17 02:57:49.202067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.036 [2024-11-17 02:57:49.202117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.036 qpair failed and we were unable to recover it.
00:37:41.036 [2024-11-17 02:57:49.202257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.036 [2024-11-17 02:57:49.202293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.036 qpair failed and we were unable to recover it.
00:37:41.036 [2024-11-17 02:57:49.202463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.036 [2024-11-17 02:57:49.202520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.036 qpair failed and we were unable to recover it.
00:37:41.036 [2024-11-17 02:57:49.202680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.036 [2024-11-17 02:57:49.202734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.036 qpair failed and we were unable to recover it.
00:37:41.036 [2024-11-17 02:57:49.202884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.036 [2024-11-17 02:57:49.202937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.036 qpair failed and we were unable to recover it.
00:37:41.036 [2024-11-17 02:57:49.203059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.036 [2024-11-17 02:57:49.203125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.037 qpair failed and we were unable to recover it.
00:37:41.037 [2024-11-17 02:57:49.203281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.037 [2024-11-17 02:57:49.203335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.037 qpair failed and we were unable to recover it.
00:37:41.037 [2024-11-17 02:57:49.203465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.037 [2024-11-17 02:57:49.203500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.037 qpair failed and we were unable to recover it.
00:37:41.037 [2024-11-17 02:57:49.203644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.037 [2024-11-17 02:57:49.203682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.037 qpair failed and we were unable to recover it.
00:37:41.037 [2024-11-17 02:57:49.203823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.037 [2024-11-17 02:57:49.203858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.037 qpair failed and we were unable to recover it.
00:37:41.037 [2024-11-17 02:57:49.203977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.037 [2024-11-17 02:57:49.204012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.037 qpair failed and we were unable to recover it.
00:37:41.037 [2024-11-17 02:57:49.204130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.037 [2024-11-17 02:57:49.204167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.037 qpair failed and we were unable to recover it.
00:37:41.037 [2024-11-17 02:57:49.204269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.037 [2024-11-17 02:57:49.204305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.037 qpair failed and we were unable to recover it.
00:37:41.037 [2024-11-17 02:57:49.204440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.037 [2024-11-17 02:57:49.204475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.037 qpair failed and we were unable to recover it.
00:37:41.037 [2024-11-17 02:57:49.204644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.037 [2024-11-17 02:57:49.204682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.037 qpair failed and we were unable to recover it.
00:37:41.037 [2024-11-17 02:57:49.204803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.037 [2024-11-17 02:57:49.204837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.037 qpair failed and we were unable to recover it.
00:37:41.037 [2024-11-17 02:57:49.205011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.037 [2024-11-17 02:57:49.205046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.037 qpair failed and we were unable to recover it.
00:37:41.037 [2024-11-17 02:57:49.205193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.037 [2024-11-17 02:57:49.205246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.037 qpair failed and we were unable to recover it.
00:37:41.037 [2024-11-17 02:57:49.205427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.037 [2024-11-17 02:57:49.205482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.037 qpair failed and we were unable to recover it.
00:37:41.037 [2024-11-17 02:57:49.205651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.037 [2024-11-17 02:57:49.205700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.037 qpair failed and we were unable to recover it.
00:37:41.037 [2024-11-17 02:57:49.205850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.037 [2024-11-17 02:57:49.205887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.037 qpair failed and we were unable to recover it.
00:37:41.037 [2024-11-17 02:57:49.206024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.037 [2024-11-17 02:57:49.206060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.037 qpair failed and we were unable to recover it.
00:37:41.037 [2024-11-17 02:57:49.206178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.037 [2024-11-17 02:57:49.206223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.037 qpair failed and we were unable to recover it.
00:37:41.037 [2024-11-17 02:57:49.206329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.037 [2024-11-17 02:57:49.206377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.037 qpair failed and we were unable to recover it.
00:37:41.037 [2024-11-17 02:57:49.206489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.037 [2024-11-17 02:57:49.206524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.037 qpair failed and we were unable to recover it.
00:37:41.037 [2024-11-17 02:57:49.206669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.037 [2024-11-17 02:57:49.206722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.037 qpair failed and we were unable to recover it.
00:37:41.037 [2024-11-17 02:57:49.206863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.037 [2024-11-17 02:57:49.206898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.037 qpair failed and we were unable to recover it.
00:37:41.037 [2024-11-17 02:57:49.207058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.037 [2024-11-17 02:57:49.207092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.037 qpair failed and we were unable to recover it.
00:37:41.037 [2024-11-17 02:57:49.207266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.037 [2024-11-17 02:57:49.207322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.037 qpair failed and we were unable to recover it.
00:37:41.037 [2024-11-17 02:57:49.207478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.037 [2024-11-17 02:57:49.207519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.037 qpair failed and we were unable to recover it.
00:37:41.037 [2024-11-17 02:57:49.207659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.037 [2024-11-17 02:57:49.207697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.037 qpair failed and we were unable to recover it.
00:37:41.037 [2024-11-17 02:57:49.207837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.037 [2024-11-17 02:57:49.207875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.037 qpair failed and we were unable to recover it.
00:37:41.037 [2024-11-17 02:57:49.208025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.037 [2024-11-17 02:57:49.208066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.037 qpair failed and we were unable to recover it.
00:37:41.037 [2024-11-17 02:57:49.208256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.037 [2024-11-17 02:57:49.208306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.037 qpair failed and we were unable to recover it.
00:37:41.037 [2024-11-17 02:57:49.208475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.037 [2024-11-17 02:57:49.208533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.037 qpair failed and we were unable to recover it.
00:37:41.037 [2024-11-17 02:57:49.208690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.037 [2024-11-17 02:57:49.208749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.037 qpair failed and we were unable to recover it.
00:37:41.038 [2024-11-17 02:57:49.208903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.038 [2024-11-17 02:57:49.208956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.038 qpair failed and we were unable to recover it.
00:37:41.038 [2024-11-17 02:57:49.209068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.038 [2024-11-17 02:57:49.209109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.038 qpair failed and we were unable to recover it. 00:37:41.038 [2024-11-17 02:57:49.209279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.038 [2024-11-17 02:57:49.209329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.038 qpair failed and we were unable to recover it. 00:37:41.038 [2024-11-17 02:57:49.209483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.038 [2024-11-17 02:57:49.209519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.038 qpair failed and we were unable to recover it. 00:37:41.038 [2024-11-17 02:57:49.209631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.038 [2024-11-17 02:57:49.209664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.038 qpair failed and we were unable to recover it. 00:37:41.038 [2024-11-17 02:57:49.209819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.038 [2024-11-17 02:57:49.209879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.038 qpair failed and we were unable to recover it. 
00:37:41.038 [2024-11-17 02:57:49.210030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.038 [2024-11-17 02:57:49.210068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.038 qpair failed and we were unable to recover it. 00:37:41.038 [2024-11-17 02:57:49.210272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.038 [2024-11-17 02:57:49.210322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.038 qpair failed and we were unable to recover it. 00:37:41.038 [2024-11-17 02:57:49.210534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.038 [2024-11-17 02:57:49.210595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.038 qpair failed and we were unable to recover it. 00:37:41.038 [2024-11-17 02:57:49.210752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.038 [2024-11-17 02:57:49.210808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.038 qpair failed and we were unable to recover it. 00:37:41.038 [2024-11-17 02:57:49.210946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.038 [2024-11-17 02:57:49.210981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.038 qpair failed and we were unable to recover it. 
00:37:41.038 [2024-11-17 02:57:49.211171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.038 [2024-11-17 02:57:49.211226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.038 qpair failed and we were unable to recover it. 00:37:41.038 [2024-11-17 02:57:49.211370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.038 [2024-11-17 02:57:49.211430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.038 qpair failed and we were unable to recover it. 00:37:41.038 [2024-11-17 02:57:49.211674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.038 [2024-11-17 02:57:49.211747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.038 qpair failed and we were unable to recover it. 00:37:41.038 [2024-11-17 02:57:49.211875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.038 [2024-11-17 02:57:49.211929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.038 qpair failed and we were unable to recover it. 00:37:41.038 [2024-11-17 02:57:49.212069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.038 [2024-11-17 02:57:49.212114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.038 qpair failed and we were unable to recover it. 
00:37:41.038 [2024-11-17 02:57:49.212233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.038 [2024-11-17 02:57:49.212268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.038 qpair failed and we were unable to recover it. 00:37:41.038 [2024-11-17 02:57:49.212393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.038 [2024-11-17 02:57:49.212465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.038 qpair failed and we were unable to recover it. 00:37:41.038 [2024-11-17 02:57:49.212629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.038 [2024-11-17 02:57:49.212688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.038 qpair failed and we were unable to recover it. 00:37:41.038 [2024-11-17 02:57:49.212852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.038 [2024-11-17 02:57:49.212908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.038 qpair failed and we were unable to recover it. 00:37:41.038 [2024-11-17 02:57:49.213073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.038 [2024-11-17 02:57:49.213116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.038 qpair failed and we were unable to recover it. 
00:37:41.038 [2024-11-17 02:57:49.213257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.038 [2024-11-17 02:57:49.213291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.038 qpair failed and we were unable to recover it. 00:37:41.038 [2024-11-17 02:57:49.213413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.038 [2024-11-17 02:57:49.213464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.038 qpair failed and we were unable to recover it. 00:37:41.038 [2024-11-17 02:57:49.213639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.038 [2024-11-17 02:57:49.213677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.038 qpair failed and we were unable to recover it. 00:37:41.038 [2024-11-17 02:57:49.213817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.038 [2024-11-17 02:57:49.213855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.038 qpair failed and we were unable to recover it. 00:37:41.038 [2024-11-17 02:57:49.213990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.038 [2024-11-17 02:57:49.214026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.038 qpair failed and we were unable to recover it. 
00:37:41.038 [2024-11-17 02:57:49.214163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.038 [2024-11-17 02:57:49.214213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.038 qpair failed and we were unable to recover it. 00:37:41.038 [2024-11-17 02:57:49.214340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.038 [2024-11-17 02:57:49.214413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.038 qpair failed and we were unable to recover it. 00:37:41.038 [2024-11-17 02:57:49.214553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.038 [2024-11-17 02:57:49.214617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.038 qpair failed and we were unable to recover it. 00:37:41.038 [2024-11-17 02:57:49.214806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.038 [2024-11-17 02:57:49.214867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.038 qpair failed and we were unable to recover it. 00:37:41.038 [2024-11-17 02:57:49.214987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.038 [2024-11-17 02:57:49.215039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.038 qpair failed and we were unable to recover it. 
00:37:41.038 [2024-11-17 02:57:49.215176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.038 [2024-11-17 02:57:49.215213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.038 qpair failed and we were unable to recover it. 00:37:41.038 [2024-11-17 02:57:49.215328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.038 [2024-11-17 02:57:49.215371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.038 qpair failed and we were unable to recover it. 00:37:41.038 [2024-11-17 02:57:49.215522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.038 [2024-11-17 02:57:49.215575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.038 qpair failed and we were unable to recover it. 00:37:41.038 [2024-11-17 02:57:49.215755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.038 [2024-11-17 02:57:49.215811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.038 qpair failed and we were unable to recover it. 00:37:41.038 [2024-11-17 02:57:49.215941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.038 [2024-11-17 02:57:49.216004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.038 qpair failed and we were unable to recover it. 
00:37:41.038 [2024-11-17 02:57:49.216182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.039 [2024-11-17 02:57:49.216231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.039 qpair failed and we were unable to recover it. 00:37:41.039 [2024-11-17 02:57:49.216385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.039 [2024-11-17 02:57:49.216426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.039 qpair failed and we were unable to recover it. 00:37:41.039 [2024-11-17 02:57:49.216618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.039 [2024-11-17 02:57:49.216658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.039 qpair failed and we were unable to recover it. 00:37:41.039 [2024-11-17 02:57:49.216827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.039 [2024-11-17 02:57:49.216890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.039 qpair failed and we were unable to recover it. 00:37:41.039 [2024-11-17 02:57:49.217063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.039 [2024-11-17 02:57:49.217109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.039 qpair failed and we were unable to recover it. 
00:37:41.039 [2024-11-17 02:57:49.217237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.039 [2024-11-17 02:57:49.217273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.039 qpair failed and we were unable to recover it. 00:37:41.039 [2024-11-17 02:57:49.217407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.039 [2024-11-17 02:57:49.217461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.039 qpair failed and we were unable to recover it. 00:37:41.039 [2024-11-17 02:57:49.217619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.039 [2024-11-17 02:57:49.217673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.039 qpair failed and we were unable to recover it. 00:37:41.039 [2024-11-17 02:57:49.217835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.039 [2024-11-17 02:57:49.217890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.039 qpair failed and we were unable to recover it. 00:37:41.039 [2024-11-17 02:57:49.218015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.039 [2024-11-17 02:57:49.218066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.039 qpair failed and we were unable to recover it. 
00:37:41.039 [2024-11-17 02:57:49.218207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.039 [2024-11-17 02:57:49.218257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.039 qpair failed and we were unable to recover it. 00:37:41.039 [2024-11-17 02:57:49.218385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.039 [2024-11-17 02:57:49.218441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.039 qpair failed and we were unable to recover it. 00:37:41.039 [2024-11-17 02:57:49.218570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.039 [2024-11-17 02:57:49.218626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.039 qpair failed and we were unable to recover it. 00:37:41.039 [2024-11-17 02:57:49.218829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.039 [2024-11-17 02:57:49.218886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.039 qpair failed and we were unable to recover it. 00:37:41.039 [2024-11-17 02:57:49.219038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.039 [2024-11-17 02:57:49.219076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.039 qpair failed and we were unable to recover it. 
00:37:41.039 [2024-11-17 02:57:49.219223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.039 [2024-11-17 02:57:49.219277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.039 qpair failed and we were unable to recover it. 00:37:41.039 [2024-11-17 02:57:49.219426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.039 [2024-11-17 02:57:49.219464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.039 qpair failed and we were unable to recover it. 00:37:41.039 [2024-11-17 02:57:49.219621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.039 [2024-11-17 02:57:49.219659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.039 qpair failed and we were unable to recover it. 00:37:41.039 [2024-11-17 02:57:49.219853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.039 [2024-11-17 02:57:49.219921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.039 qpair failed and we were unable to recover it. 00:37:41.039 [2024-11-17 02:57:49.220054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.039 [2024-11-17 02:57:49.220120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.039 qpair failed and we were unable to recover it. 
00:37:41.039 [2024-11-17 02:57:49.220271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.039 [2024-11-17 02:57:49.220326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.039 qpair failed and we were unable to recover it. 00:37:41.039 [2024-11-17 02:57:49.220497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.039 [2024-11-17 02:57:49.220550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.039 qpair failed and we were unable to recover it. 00:37:41.039 [2024-11-17 02:57:49.220722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.039 [2024-11-17 02:57:49.220772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.039 qpair failed and we were unable to recover it. 00:37:41.039 [2024-11-17 02:57:49.220949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.039 [2024-11-17 02:57:49.220998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.039 qpair failed and we were unable to recover it. 00:37:41.039 [2024-11-17 02:57:49.221116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.039 [2024-11-17 02:57:49.221158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.039 qpair failed and we were unable to recover it. 
00:37:41.039 [2024-11-17 02:57:49.221312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.039 [2024-11-17 02:57:49.221348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.039 qpair failed and we were unable to recover it. 00:37:41.039 [2024-11-17 02:57:49.221473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.039 [2024-11-17 02:57:49.221517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.039 qpair failed and we were unable to recover it. 00:37:41.039 [2024-11-17 02:57:49.221655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.039 [2024-11-17 02:57:49.221692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.039 qpair failed and we were unable to recover it. 00:37:41.039 [2024-11-17 02:57:49.221851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.039 [2024-11-17 02:57:49.221909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.039 qpair failed and we were unable to recover it. 00:37:41.039 [2024-11-17 02:57:49.222022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.039 [2024-11-17 02:57:49.222057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.039 qpair failed and we were unable to recover it. 
00:37:41.039 [2024-11-17 02:57:49.222208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.039 [2024-11-17 02:57:49.222260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.039 qpair failed and we were unable to recover it. 00:37:41.039 [2024-11-17 02:57:49.222393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.039 [2024-11-17 02:57:49.222431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.039 qpair failed and we were unable to recover it. 00:37:41.039 [2024-11-17 02:57:49.222582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.039 [2024-11-17 02:57:49.222621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.039 qpair failed and we were unable to recover it. 00:37:41.039 [2024-11-17 02:57:49.222740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.039 [2024-11-17 02:57:49.222792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.040 qpair failed and we were unable to recover it. 00:37:41.040 [2024-11-17 02:57:49.222957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.040 [2024-11-17 02:57:49.222993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.040 qpair failed and we were unable to recover it. 
00:37:41.040 [2024-11-17 02:57:49.223110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.040 [2024-11-17 02:57:49.223171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.040 qpair failed and we were unable to recover it. 00:37:41.040 [2024-11-17 02:57:49.223313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.040 [2024-11-17 02:57:49.223369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.040 qpair failed and we were unable to recover it. 00:37:41.040 [2024-11-17 02:57:49.223478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.040 [2024-11-17 02:57:49.223514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.040 qpair failed and we were unable to recover it. 00:37:41.040 [2024-11-17 02:57:49.223692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.040 [2024-11-17 02:57:49.223730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.040 qpair failed and we were unable to recover it. 00:37:41.040 [2024-11-17 02:57:49.223853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.040 [2024-11-17 02:57:49.223888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.040 qpair failed and we were unable to recover it. 
00:37:41.040 [2024-11-17 02:57:49.224006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.040 [2024-11-17 02:57:49.224042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.040 qpair failed and we were unable to recover it. 00:37:41.040 [2024-11-17 02:57:49.224180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.040 [2024-11-17 02:57:49.224229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.040 qpair failed and we were unable to recover it. 00:37:41.040 [2024-11-17 02:57:49.224365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.040 [2024-11-17 02:57:49.224420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.040 qpair failed and we were unable to recover it. 00:37:41.040 [2024-11-17 02:57:49.224620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.040 [2024-11-17 02:57:49.224678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.040 qpair failed and we were unable to recover it. 00:37:41.040 [2024-11-17 02:57:49.224814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.040 [2024-11-17 02:57:49.224879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.040 qpair failed and we were unable to recover it. 
00:37:41.040 [2024-11-17 02:57:49.225051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.040 [2024-11-17 02:57:49.225086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.040 qpair failed and we were unable to recover it. 00:37:41.040 [2024-11-17 02:57:49.225235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.040 [2024-11-17 02:57:49.225290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.040 qpair failed and we were unable to recover it. 00:37:41.040 [2024-11-17 02:57:49.225493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.040 [2024-11-17 02:57:49.225566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.040 qpair failed and we were unable to recover it. 00:37:41.040 [2024-11-17 02:57:49.225787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.040 [2024-11-17 02:57:49.225847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.040 qpair failed and we were unable to recover it. 00:37:41.040 [2024-11-17 02:57:49.226025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.040 [2024-11-17 02:57:49.226065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.040 qpair failed and we were unable to recover it. 
00:37:41.040 [2024-11-17 02:57:49.226196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.040 [2024-11-17 02:57:49.226232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.040 qpair failed and we were unable to recover it. 00:37:41.040 [2024-11-17 02:57:49.226368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.040 [2024-11-17 02:57:49.226417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.040 qpair failed and we were unable to recover it. 00:37:41.040 [2024-11-17 02:57:49.226529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.040 [2024-11-17 02:57:49.226596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.040 qpair failed and we were unable to recover it. 00:37:41.040 [2024-11-17 02:57:49.226769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.040 [2024-11-17 02:57:49.226829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.040 qpair failed and we were unable to recover it. 00:37:41.040 [2024-11-17 02:57:49.226980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.040 [2024-11-17 02:57:49.227015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.040 qpair failed and we were unable to recover it. 
00:37:41.040 [2024-11-17 02:57:49.227140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.040 [2024-11-17 02:57:49.227178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.040 qpair failed and we were unable to recover it. 00:37:41.040 [2024-11-17 02:57:49.227331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.040 [2024-11-17 02:57:49.227385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.040 qpair failed and we were unable to recover it. 00:37:41.040 [2024-11-17 02:57:49.227569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.040 [2024-11-17 02:57:49.227640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.040 qpair failed and we were unable to recover it. 00:37:41.040 [2024-11-17 02:57:49.227828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.040 [2024-11-17 02:57:49.227867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.040 qpair failed and we were unable to recover it. 00:37:41.040 [2024-11-17 02:57:49.228008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.040 [2024-11-17 02:57:49.228044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.040 qpair failed and we were unable to recover it. 
00:37:41.040 [2024-11-17 02:57:49.228202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.040 [2024-11-17 02:57:49.228256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.040 qpair failed and we were unable to recover it. 00:37:41.040 [2024-11-17 02:57:49.228365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.040 [2024-11-17 02:57:49.228409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.040 qpair failed and we were unable to recover it. 00:37:41.040 [2024-11-17 02:57:49.228614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.040 [2024-11-17 02:57:49.228670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.040 qpair failed and we were unable to recover it. 00:37:41.040 [2024-11-17 02:57:49.228830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.040 [2024-11-17 02:57:49.228865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.040 qpair failed and we were unable to recover it. 00:37:41.040 [2024-11-17 02:57:49.229004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.040 [2024-11-17 02:57:49.229039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.041 qpair failed and we were unable to recover it. 
00:37:41.041 [2024-11-17 02:57:49.229188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.041 [2024-11-17 02:57:49.229231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.041 qpair failed and we were unable to recover it. 00:37:41.041 [2024-11-17 02:57:49.229345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.041 [2024-11-17 02:57:49.229384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.041 qpair failed and we were unable to recover it. 00:37:41.041 [2024-11-17 02:57:49.229523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.041 [2024-11-17 02:57:49.229572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.041 qpair failed and we were unable to recover it. 00:37:41.041 [2024-11-17 02:57:49.229728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.041 [2024-11-17 02:57:49.229767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.041 qpair failed and we were unable to recover it. 00:37:41.041 [2024-11-17 02:57:49.229940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.041 [2024-11-17 02:57:49.229980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.041 qpair failed and we were unable to recover it. 
00:37:41.041 [2024-11-17 02:57:49.230178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.041 [2024-11-17 02:57:49.230228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.041 qpair failed and we were unable to recover it. 00:37:41.041 [2024-11-17 02:57:49.230336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.041 [2024-11-17 02:57:49.230373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.041 qpair failed and we were unable to recover it. 00:37:41.041 [2024-11-17 02:57:49.230505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.041 [2024-11-17 02:57:49.230559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.041 qpair failed and we were unable to recover it. 00:37:41.041 [2024-11-17 02:57:49.230716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.041 [2024-11-17 02:57:49.230771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.041 qpair failed and we were unable to recover it. 00:37:41.041 [2024-11-17 02:57:49.230910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.041 [2024-11-17 02:57:49.230945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.041 qpair failed and we were unable to recover it. 
00:37:41.041 [2024-11-17 02:57:49.231081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.041 [2024-11-17 02:57:49.231132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.041 qpair failed and we were unable to recover it. 00:37:41.041 [2024-11-17 02:57:49.231260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.041 [2024-11-17 02:57:49.231300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.041 qpair failed and we were unable to recover it. 00:37:41.041 [2024-11-17 02:57:49.231504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.041 [2024-11-17 02:57:49.231562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.041 qpair failed and we were unable to recover it. 00:37:41.041 [2024-11-17 02:57:49.231752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.041 [2024-11-17 02:57:49.231809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.041 qpair failed and we were unable to recover it. 00:37:41.041 [2024-11-17 02:57:49.231972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.041 [2024-11-17 02:57:49.232011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.041 qpair failed and we were unable to recover it. 
00:37:41.041 [2024-11-17 02:57:49.232155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.041 [2024-11-17 02:57:49.232210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.041 qpair failed and we were unable to recover it. 00:37:41.041 [2024-11-17 02:57:49.232341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.041 [2024-11-17 02:57:49.232388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.041 qpair failed and we were unable to recover it. 00:37:41.041 [2024-11-17 02:57:49.232543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.041 [2024-11-17 02:57:49.232583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.041 qpair failed and we were unable to recover it. 00:37:41.041 [2024-11-17 02:57:49.232804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.041 [2024-11-17 02:57:49.232848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.041 qpair failed and we were unable to recover it. 00:37:41.041 [2024-11-17 02:57:49.233008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.041 [2024-11-17 02:57:49.233043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.041 qpair failed and we were unable to recover it. 
00:37:41.041 [2024-11-17 02:57:49.233173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.041 [2024-11-17 02:57:49.233214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.041 qpair failed and we were unable to recover it. 00:37:41.041 [2024-11-17 02:57:49.233337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.041 [2024-11-17 02:57:49.233381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.041 qpair failed and we were unable to recover it. 00:37:41.041 [2024-11-17 02:57:49.233538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.041 [2024-11-17 02:57:49.233580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.041 qpair failed and we were unable to recover it. 00:37:41.041 [2024-11-17 02:57:49.233706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.041 [2024-11-17 02:57:49.233754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.041 qpair failed and we were unable to recover it. 00:37:41.041 [2024-11-17 02:57:49.233878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.041 [2024-11-17 02:57:49.233918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.041 qpair failed and we were unable to recover it. 
00:37:41.041 [2024-11-17 02:57:49.234123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.041 [2024-11-17 02:57:49.234198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.041 qpair failed and we were unable to recover it. 00:37:41.041 [2024-11-17 02:57:49.234353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.041 [2024-11-17 02:57:49.234409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.041 qpair failed and we were unable to recover it. 00:37:41.041 [2024-11-17 02:57:49.234585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.041 [2024-11-17 02:57:49.234625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.041 qpair failed and we were unable to recover it. 00:37:41.041 [2024-11-17 02:57:49.234750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.041 [2024-11-17 02:57:49.234789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.041 qpair failed and we were unable to recover it. 00:37:41.041 [2024-11-17 02:57:49.234935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.041 [2024-11-17 02:57:49.234974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.041 qpair failed and we were unable to recover it. 
00:37:41.042 [2024-11-17 02:57:49.235145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.042 [2024-11-17 02:57:49.235195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.042 qpair failed and we were unable to recover it. 00:37:41.042 [2024-11-17 02:57:49.235338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.042 [2024-11-17 02:57:49.235383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.042 qpair failed and we were unable to recover it. 00:37:41.042 [2024-11-17 02:57:49.235557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.042 [2024-11-17 02:57:49.235593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.042 qpair failed and we were unable to recover it. 00:37:41.042 [2024-11-17 02:57:49.235708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.042 [2024-11-17 02:57:49.235764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.042 qpair failed and we were unable to recover it. 00:37:41.042 [2024-11-17 02:57:49.235930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.042 [2024-11-17 02:57:49.235966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.042 qpair failed and we were unable to recover it. 
00:37:41.042 [2024-11-17 02:57:49.236083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.042 [2024-11-17 02:57:49.236140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.042 qpair failed and we were unable to recover it. 00:37:41.042 [2024-11-17 02:57:49.236251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.042 [2024-11-17 02:57:49.236286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.042 qpair failed and we were unable to recover it. 00:37:41.042 [2024-11-17 02:57:49.236454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.042 [2024-11-17 02:57:49.236514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.042 qpair failed and we were unable to recover it. 00:37:41.042 [2024-11-17 02:57:49.236644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.042 [2024-11-17 02:57:49.236699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.042 qpair failed and we were unable to recover it. 00:37:41.042 [2024-11-17 02:57:49.236860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.042 [2024-11-17 02:57:49.236902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.042 qpair failed and we were unable to recover it. 
00:37:41.042 [2024-11-17 02:57:49.237071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.042 [2024-11-17 02:57:49.237117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.042 qpair failed and we were unable to recover it. 00:37:41.042 [2024-11-17 02:57:49.237256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.042 [2024-11-17 02:57:49.237295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.042 qpair failed and we were unable to recover it. 00:37:41.042 [2024-11-17 02:57:49.237456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.042 [2024-11-17 02:57:49.237496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.042 qpair failed and we were unable to recover it. 00:37:41.042 [2024-11-17 02:57:49.237646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.042 [2024-11-17 02:57:49.237687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.042 qpair failed and we were unable to recover it. 00:37:41.042 [2024-11-17 02:57:49.237831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.042 [2024-11-17 02:57:49.237870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.042 qpair failed and we were unable to recover it. 
00:37:41.042 [2024-11-17 02:57:49.238031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.042 [2024-11-17 02:57:49.238068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.042 qpair failed and we were unable to recover it. 00:37:41.042 [2024-11-17 02:57:49.238277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.042 [2024-11-17 02:57:49.238327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.042 qpair failed and we were unable to recover it. 00:37:41.042 [2024-11-17 02:57:49.238475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.042 [2024-11-17 02:57:49.238530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.042 qpair failed and we were unable to recover it. 00:37:41.042 [2024-11-17 02:57:49.238740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.042 [2024-11-17 02:57:49.238795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.042 qpair failed and we were unable to recover it. 00:37:41.042 [2024-11-17 02:57:49.238988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.042 [2024-11-17 02:57:49.239034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.042 qpair failed and we were unable to recover it. 
00:37:41.042 [2024-11-17 02:57:49.239153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.042 [2024-11-17 02:57:49.239187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.042 qpair failed and we were unable to recover it. 00:37:41.042 [2024-11-17 02:57:49.239281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.042 [2024-11-17 02:57:49.239315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.042 qpair failed and we were unable to recover it. 00:37:41.042 [2024-11-17 02:57:49.239463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.042 [2024-11-17 02:57:49.239535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.042 qpair failed and we were unable to recover it. 00:37:41.042 [2024-11-17 02:57:49.239664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.042 [2024-11-17 02:57:49.239730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.043 qpair failed and we were unable to recover it. 00:37:41.043 [2024-11-17 02:57:49.239957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.043 [2024-11-17 02:57:49.240008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.043 qpair failed and we were unable to recover it. 
00:37:41.043 [2024-11-17 02:57:49.240165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.043 [2024-11-17 02:57:49.240222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.043 qpair failed and we were unable to recover it. 00:37:41.043 [2024-11-17 02:57:49.240355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.043 [2024-11-17 02:57:49.240416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.043 qpair failed and we were unable to recover it. 00:37:41.043 [2024-11-17 02:57:49.240533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.043 [2024-11-17 02:57:49.240569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.043 qpair failed and we were unable to recover it. 00:37:41.043 [2024-11-17 02:57:49.240749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.043 [2024-11-17 02:57:49.240795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.043 qpair failed and we were unable to recover it. 00:37:41.043 [2024-11-17 02:57:49.240920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.043 [2024-11-17 02:57:49.240955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.043 qpair failed and we were unable to recover it. 
00:37:41.043 [2024-11-17 02:57:49.241085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.043 [2024-11-17 02:57:49.241127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.043 qpair failed and we were unable to recover it. 00:37:41.043 [2024-11-17 02:57:49.241268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.043 [2024-11-17 02:57:49.241302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.043 qpair failed and we were unable to recover it. 00:37:41.043 [2024-11-17 02:57:49.241445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.043 [2024-11-17 02:57:49.241480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.043 qpair failed and we were unable to recover it. 00:37:41.043 [2024-11-17 02:57:49.241608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.043 [2024-11-17 02:57:49.241642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.043 qpair failed and we were unable to recover it. 00:37:41.043 [2024-11-17 02:57:49.241774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.043 [2024-11-17 02:57:49.241808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.043 qpair failed and we were unable to recover it. 
00:37:41.043 [2024-11-17 02:57:49.241972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.043 [2024-11-17 02:57:49.242007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.043 qpair failed and we were unable to recover it. 00:37:41.043 [2024-11-17 02:57:49.242185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.043 [2024-11-17 02:57:49.242221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.043 qpair failed and we were unable to recover it. 00:37:41.043 [2024-11-17 02:57:49.242332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.043 [2024-11-17 02:57:49.242378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.043 qpair failed and we were unable to recover it. 00:37:41.043 [2024-11-17 02:57:49.242556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.043 [2024-11-17 02:57:49.242606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.043 qpair failed and we were unable to recover it. 00:37:41.043 [2024-11-17 02:57:49.242752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.043 [2024-11-17 02:57:49.242789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.043 qpair failed and we were unable to recover it. 
00:37:41.043 [2024-11-17 02:57:49.242942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.043 [2024-11-17 02:57:49.242991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.043 qpair failed and we were unable to recover it.
00:37:41.043 [2024-11-17 02:57:49.243126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.043 [2024-11-17 02:57:49.243165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.043 qpair failed and we were unable to recover it.
00:37:41.043 [2024-11-17 02:57:49.243290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.043 [2024-11-17 02:57:49.243325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.043 qpair failed and we were unable to recover it.
00:37:41.043 [2024-11-17 02:57:49.243441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.043 [2024-11-17 02:57:49.243475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.043 qpair failed and we were unable to recover it.
00:37:41.043 [2024-11-17 02:57:49.243632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.043 [2024-11-17 02:57:49.243688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.043 qpair failed and we were unable to recover it.
00:37:41.043 [2024-11-17 02:57:49.243824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.043 [2024-11-17 02:57:49.243876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.043 qpair failed and we were unable to recover it.
00:37:41.043 [2024-11-17 02:57:49.244013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.043 [2024-11-17 02:57:49.244047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.043 qpair failed and we were unable to recover it.
00:37:41.043 [2024-11-17 02:57:49.244185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.043 [2024-11-17 02:57:49.244225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.043 qpair failed and we were unable to recover it.
00:37:41.043 [2024-11-17 02:57:49.244350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.043 [2024-11-17 02:57:49.244408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.043 qpair failed and we were unable to recover it.
00:37:41.043 [2024-11-17 02:57:49.244536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.043 [2024-11-17 02:57:49.244573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.043 qpair failed and we were unable to recover it.
00:37:41.043 [2024-11-17 02:57:49.244710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.043 [2024-11-17 02:57:49.244746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.043 qpair failed and we were unable to recover it.
00:37:41.043 [2024-11-17 02:57:49.244918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.043 [2024-11-17 02:57:49.244955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.043 qpair failed and we were unable to recover it.
00:37:41.043 [2024-11-17 02:57:49.245089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.043 [2024-11-17 02:57:49.245165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.043 qpair failed and we were unable to recover it.
00:37:41.043 [2024-11-17 02:57:49.245276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.043 [2024-11-17 02:57:49.245312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.043 qpair failed and we were unable to recover it.
00:37:41.043 [2024-11-17 02:57:49.245502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.043 [2024-11-17 02:57:49.245556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.043 qpair failed and we were unable to recover it.
00:37:41.043 [2024-11-17 02:57:49.245777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.043 [2024-11-17 02:57:49.245849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.044 qpair failed and we were unable to recover it.
00:37:41.044 [2024-11-17 02:57:49.245998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.044 [2024-11-17 02:57:49.246037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.044 qpair failed and we were unable to recover it.
00:37:41.044 [2024-11-17 02:57:49.246214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.044 [2024-11-17 02:57:49.246251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.044 qpair failed and we were unable to recover it.
00:37:41.044 [2024-11-17 02:57:49.246357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.044 [2024-11-17 02:57:49.246393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.044 qpair failed and we were unable to recover it.
00:37:41.044 [2024-11-17 02:57:49.246566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.044 [2024-11-17 02:57:49.246610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.044 qpair failed and we were unable to recover it.
00:37:41.044 [2024-11-17 02:57:49.246835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.044 [2024-11-17 02:57:49.246876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.044 qpair failed and we were unable to recover it.
00:37:41.044 [2024-11-17 02:57:49.246997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.044 [2024-11-17 02:57:49.247049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.044 qpair failed and we were unable to recover it.
00:37:41.044 [2024-11-17 02:57:49.247189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.044 [2024-11-17 02:57:49.247225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.044 qpair failed and we were unable to recover it.
00:37:41.044 [2024-11-17 02:57:49.247344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.044 [2024-11-17 02:57:49.247382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.044 qpair failed and we were unable to recover it.
00:37:41.044 [2024-11-17 02:57:49.247562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.044 [2024-11-17 02:57:49.247620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.044 qpair failed and we were unable to recover it.
00:37:41.044 [2024-11-17 02:57:49.247738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.044 [2024-11-17 02:57:49.247774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.044 qpair failed and we were unable to recover it.
00:37:41.044 [2024-11-17 02:57:49.247938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.044 [2024-11-17 02:57:49.247974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.044 qpair failed and we were unable to recover it.
00:37:41.044 [2024-11-17 02:57:49.248110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.044 [2024-11-17 02:57:49.248146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.044 qpair failed and we were unable to recover it.
00:37:41.044 [2024-11-17 02:57:49.248265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.044 [2024-11-17 02:57:49.248306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.044 qpair failed and we were unable to recover it.
00:37:41.044 [2024-11-17 02:57:49.248453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.044 [2024-11-17 02:57:49.248490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.044 qpair failed and we were unable to recover it.
00:37:41.044 [2024-11-17 02:57:49.248596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.044 [2024-11-17 02:57:49.248632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.044 qpair failed and we were unable to recover it.
00:37:41.044 [2024-11-17 02:57:49.248845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.044 [2024-11-17 02:57:49.248903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.044 qpair failed and we were unable to recover it.
00:37:41.044 [2024-11-17 02:57:49.249078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.044 [2024-11-17 02:57:49.249125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.044 qpair failed and we were unable to recover it.
00:37:41.044 [2024-11-17 02:57:49.249244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.044 [2024-11-17 02:57:49.249280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.044 qpair failed and we were unable to recover it.
00:37:41.044 [2024-11-17 02:57:49.249460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.044 [2024-11-17 02:57:49.249517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.044 qpair failed and we were unable to recover it.
00:37:41.044 [2024-11-17 02:57:49.249730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.044 [2024-11-17 02:57:49.249789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.044 qpair failed and we were unable to recover it.
00:37:41.044 [2024-11-17 02:57:49.249962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.044 [2024-11-17 02:57:49.250001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.044 qpair failed and we were unable to recover it.
00:37:41.044 [2024-11-17 02:57:49.250134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.044 [2024-11-17 02:57:49.250179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.044 qpair failed and we were unable to recover it.
00:37:41.044 [2024-11-17 02:57:49.250360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.044 [2024-11-17 02:57:49.250433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.044 qpair failed and we were unable to recover it.
00:37:41.044 [2024-11-17 02:57:49.250661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.044 [2024-11-17 02:57:49.250733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.044 qpair failed and we were unable to recover it.
00:37:41.044 [2024-11-17 02:57:49.250874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.044 [2024-11-17 02:57:49.250931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.044 qpair failed and we were unable to recover it.
00:37:41.044 [2024-11-17 02:57:49.251101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.044 [2024-11-17 02:57:49.251166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.044 qpair failed and we were unable to recover it.
00:37:41.044 [2024-11-17 02:57:49.251282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.044 [2024-11-17 02:57:49.251317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.044 qpair failed and we were unable to recover it.
00:37:41.044 [2024-11-17 02:57:49.251461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.044 [2024-11-17 02:57:49.251497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.044 qpair failed and we were unable to recover it.
00:37:41.044 [2024-11-17 02:57:49.251658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.044 [2024-11-17 02:57:49.251697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.044 qpair failed and we were unable to recover it.
00:37:41.044 [2024-11-17 02:57:49.251841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.044 [2024-11-17 02:57:49.251880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.044 qpair failed and we were unable to recover it.
00:37:41.044 [2024-11-17 02:57:49.252001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.044 [2024-11-17 02:57:49.252036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.044 qpair failed and we were unable to recover it.
00:37:41.044 [2024-11-17 02:57:49.252174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.044 [2024-11-17 02:57:49.252211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.044 qpair failed and we were unable to recover it.
00:37:41.045 [2024-11-17 02:57:49.252317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.045 [2024-11-17 02:57:49.252352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.045 qpair failed and we were unable to recover it.
00:37:41.045 [2024-11-17 02:57:49.252550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.045 [2024-11-17 02:57:49.252589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.045 qpair failed and we were unable to recover it.
00:37:41.045 [2024-11-17 02:57:49.252759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.045 [2024-11-17 02:57:49.252798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.045 qpair failed and we were unable to recover it.
00:37:41.045 [2024-11-17 02:57:49.252960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.045 [2024-11-17 02:57:49.252999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.045 qpair failed and we were unable to recover it.
00:37:41.045 [2024-11-17 02:57:49.253164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.045 [2024-11-17 02:57:49.253200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.045 qpair failed and we were unable to recover it.
00:37:41.045 [2024-11-17 02:57:49.253329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.045 [2024-11-17 02:57:49.253374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.045 qpair failed and we were unable to recover it.
00:37:41.045 [2024-11-17 02:57:49.253537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.045 [2024-11-17 02:57:49.253572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.045 qpair failed and we were unable to recover it.
00:37:41.045 [2024-11-17 02:57:49.253696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.045 [2024-11-17 02:57:49.253752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.045 qpair failed and we were unable to recover it.
00:37:41.045 [2024-11-17 02:57:49.253911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.045 [2024-11-17 02:57:49.253967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.045 qpair failed and we were unable to recover it.
00:37:41.045 [2024-11-17 02:57:49.254112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.045 [2024-11-17 02:57:49.254162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.045 qpair failed and we were unable to recover it.
00:37:41.045 [2024-11-17 02:57:49.254301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.045 [2024-11-17 02:57:49.254338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.045 qpair failed and we were unable to recover it.
00:37:41.045 [2024-11-17 02:57:49.254534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.045 [2024-11-17 02:57:49.254586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.045 qpair failed and we were unable to recover it.
00:37:41.045 [2024-11-17 02:57:49.254744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.045 [2024-11-17 02:57:49.254797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.045 qpair failed and we were unable to recover it.
00:37:41.045 [2024-11-17 02:57:49.254963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.045 [2024-11-17 02:57:49.254998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.045 qpair failed and we were unable to recover it.
00:37:41.045 [2024-11-17 02:57:49.255132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.045 [2024-11-17 02:57:49.255172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.045 qpair failed and we were unable to recover it.
00:37:41.045 [2024-11-17 02:57:49.255298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.045 [2024-11-17 02:57:49.255334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.045 qpair failed and we were unable to recover it.
00:37:41.045 [2024-11-17 02:57:49.255468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.045 [2024-11-17 02:57:49.255520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.045 qpair failed and we were unable to recover it.
00:37:41.045 [2024-11-17 02:57:49.255688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.045 [2024-11-17 02:57:49.255742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.045 qpair failed and we were unable to recover it.
00:37:41.045 [2024-11-17 02:57:49.255898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.045 [2024-11-17 02:57:49.255951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.045 qpair failed and we were unable to recover it.
00:37:41.045 [2024-11-17 02:57:49.256112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.045 [2024-11-17 02:57:49.256162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.045 qpair failed and we were unable to recover it.
00:37:41.045 [2024-11-17 02:57:49.256330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.045 [2024-11-17 02:57:49.256388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.045 qpair failed and we were unable to recover it.
00:37:41.045 [2024-11-17 02:57:49.256509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.045 [2024-11-17 02:57:49.256545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.045 qpair failed and we were unable to recover it.
00:37:41.045 [2024-11-17 02:57:49.256696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.045 [2024-11-17 02:57:49.256754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.045 qpair failed and we were unable to recover it.
00:37:41.045 [2024-11-17 02:57:49.256892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.045 [2024-11-17 02:57:49.256926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.045 qpair failed and we were unable to recover it.
00:37:41.045 [2024-11-17 02:57:49.257071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.045 [2024-11-17 02:57:49.257119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.045 qpair failed and we were unable to recover it.
00:37:41.045 [2024-11-17 02:57:49.257230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.045 [2024-11-17 02:57:49.257264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.045 qpair failed and we were unable to recover it.
00:37:41.045 [2024-11-17 02:57:49.257379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.045 [2024-11-17 02:57:49.257418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.045 qpair failed and we were unable to recover it.
00:37:41.045 [2024-11-17 02:57:49.257528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.045 [2024-11-17 02:57:49.257563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.045 qpair failed and we were unable to recover it.
00:37:41.045 [2024-11-17 02:57:49.257698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.045 [2024-11-17 02:57:49.257734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.045 qpair failed and we were unable to recover it.
00:37:41.045 [2024-11-17 02:57:49.257872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.045 [2024-11-17 02:57:49.257907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.045 qpair failed and we were unable to recover it.
00:37:41.045 [2024-11-17 02:57:49.258015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.045 [2024-11-17 02:57:49.258050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.045 qpair failed and we were unable to recover it.
00:37:41.045 [2024-11-17 02:57:49.258213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.045 [2024-11-17 02:57:49.258251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.045 qpair failed and we were unable to recover it.
00:37:41.045 [2024-11-17 02:57:49.258391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.045 [2024-11-17 02:57:49.258426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.045 qpair failed and we were unable to recover it.
00:37:41.045 [2024-11-17 02:57:49.258551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.046 [2024-11-17 02:57:49.258601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.046 qpair failed and we were unable to recover it.
00:37:41.046 [2024-11-17 02:57:49.258750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.046 [2024-11-17 02:57:49.258787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.046 qpair failed and we were unable to recover it.
00:37:41.046 [2024-11-17 02:57:49.258929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.046 [2024-11-17 02:57:49.258964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.046 qpair failed and we were unable to recover it.
00:37:41.046 [2024-11-17 02:57:49.259121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.046 [2024-11-17 02:57:49.259169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.046 qpair failed and we were unable to recover it.
00:37:41.046 [2024-11-17 02:57:49.259312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.046 [2024-11-17 02:57:49.259347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.046 qpair failed and we were unable to recover it.
00:37:41.046 [2024-11-17 02:57:49.259453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.046 [2024-11-17 02:57:49.259489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.046 qpair failed and we were unable to recover it.
00:37:41.046 [2024-11-17 02:57:49.259583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.046 [2024-11-17 02:57:49.259618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.046 qpair failed and we were unable to recover it.
00:37:41.046 [2024-11-17 02:57:49.259746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.046 [2024-11-17 02:57:49.259781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.046 qpair failed and we were unable to recover it.
00:37:41.046 [2024-11-17 02:57:49.259894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.046 [2024-11-17 02:57:49.259928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.046 qpair failed and we were unable to recover it.
00:37:41.046 [2024-11-17 02:57:49.260064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.046 [2024-11-17 02:57:49.260110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.046 qpair failed and we were unable to recover it.
00:37:41.046 [2024-11-17 02:57:49.260271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.046 [2024-11-17 02:57:49.260321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.046 qpair failed and we were unable to recover it.
00:37:41.046 [2024-11-17 02:57:49.260467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.046 [2024-11-17 02:57:49.260510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.046 qpair failed and we were unable to recover it.
00:37:41.046 [2024-11-17 02:57:49.260700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.046 [2024-11-17 02:57:49.260737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.046 qpair failed and we were unable to recover it.
00:37:41.046 [2024-11-17 02:57:49.260849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.046 [2024-11-17 02:57:49.260885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.046 qpair failed and we were unable to recover it.
00:37:41.046 [2024-11-17 02:57:49.261024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.046 [2024-11-17 02:57:49.261066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.046 qpair failed and we were unable to recover it.
00:37:41.046 [2024-11-17 02:57:49.261222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.046 [2024-11-17 02:57:49.261285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.046 qpair failed and we were unable to recover it.
00:37:41.046 [2024-11-17 02:57:49.261438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.046 [2024-11-17 02:57:49.261494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.046 qpair failed and we were unable to recover it.
00:37:41.046 [2024-11-17 02:57:49.261653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.046 [2024-11-17 02:57:49.261694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.046 qpair failed and we were unable to recover it.
00:37:41.046 [2024-11-17 02:57:49.261843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.046 [2024-11-17 02:57:49.261882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.046 qpair failed and we were unable to recover it.
00:37:41.046 [2024-11-17 02:57:49.262043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.046 [2024-11-17 02:57:49.262078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.046 qpair failed and we were unable to recover it.
00:37:41.046 [2024-11-17 02:57:49.262225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.046 [2024-11-17 02:57:49.262275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.046 qpair failed and we were unable to recover it.
00:37:41.046 [2024-11-17 02:57:49.262453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.046 [2024-11-17 02:57:49.262494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.046 qpair failed and we were unable to recover it.
00:37:41.046 [2024-11-17 02:57:49.262723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.046 [2024-11-17 02:57:49.262762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.046 qpair failed and we were unable to recover it.
00:37:41.046 [2024-11-17 02:57:49.262926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.046 [2024-11-17 02:57:49.262981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.046 qpair failed and we were unable to recover it.
00:37:41.046 [2024-11-17 02:57:49.263143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.046 [2024-11-17 02:57:49.263180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.046 qpair failed and we were unable to recover it.
00:37:41.046 [2024-11-17 02:57:49.263283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.046 [2024-11-17 02:57:49.263337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.046 qpair failed and we were unable to recover it.
00:37:41.046 [2024-11-17 02:57:49.263496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.046 [2024-11-17 02:57:49.263553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.046 qpair failed and we were unable to recover it.
00:37:41.046 [2024-11-17 02:57:49.263730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.046 [2024-11-17 02:57:49.263769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.046 qpair failed and we were unable to recover it.
00:37:41.046 [2024-11-17 02:57:49.263899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.046 [2024-11-17 02:57:49.263938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.046 qpair failed and we were unable to recover it.
00:37:41.046 [2024-11-17 02:57:49.264087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.046 [2024-11-17 02:57:49.264159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.046 qpair failed and we were unable to recover it.
00:37:41.046 [2024-11-17 02:57:49.264279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.046 [2024-11-17 02:57:49.264314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.046 qpair failed and we were unable to recover it.
00:37:41.046 [2024-11-17 02:57:49.264495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.046 [2024-11-17 02:57:49.264562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.046 qpair failed and we were unable to recover it.
00:37:41.046 [2024-11-17 02:57:49.264754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.047 [2024-11-17 02:57:49.264808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.047 qpair failed and we were unable to recover it.
00:37:41.047 [2024-11-17 02:57:49.264978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.047 [2024-11-17 02:57:49.265013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.047 qpair failed and we were unable to recover it.
00:37:41.047 [2024-11-17 02:57:49.265151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.047 [2024-11-17 02:57:49.265187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.047 qpair failed and we were unable to recover it. 00:37:41.047 [2024-11-17 02:57:49.265334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.047 [2024-11-17 02:57:49.265398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.047 qpair failed and we were unable to recover it. 00:37:41.047 [2024-11-17 02:57:49.265540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.047 [2024-11-17 02:57:49.265594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.047 qpair failed and we were unable to recover it. 00:37:41.047 [2024-11-17 02:57:49.265755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.047 [2024-11-17 02:57:49.265808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.047 qpair failed and we were unable to recover it. 00:37:41.047 [2024-11-17 02:57:49.265958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.047 [2024-11-17 02:57:49.265994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.047 qpair failed and we were unable to recover it. 
00:37:41.047 [2024-11-17 02:57:49.266139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.047 [2024-11-17 02:57:49.266176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.047 qpair failed and we were unable to recover it. 00:37:41.047 [2024-11-17 02:57:49.266286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.047 [2024-11-17 02:57:49.266323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.047 qpair failed and we were unable to recover it. 00:37:41.047 [2024-11-17 02:57:49.266475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.047 [2024-11-17 02:57:49.266514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.047 qpair failed and we were unable to recover it. 00:37:41.047 [2024-11-17 02:57:49.266692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.047 [2024-11-17 02:57:49.266731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.047 qpair failed and we were unable to recover it. 00:37:41.047 [2024-11-17 02:57:49.266878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.047 [2024-11-17 02:57:49.266917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.047 qpair failed and we were unable to recover it. 
00:37:41.047 [2024-11-17 02:57:49.267041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.047 [2024-11-17 02:57:49.267077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.047 qpair failed and we were unable to recover it. 00:37:41.047 [2024-11-17 02:57:49.267214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.047 [2024-11-17 02:57:49.267270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.047 qpair failed and we were unable to recover it. 00:37:41.047 [2024-11-17 02:57:49.267438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.047 [2024-11-17 02:57:49.267509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.047 qpair failed and we were unable to recover it. 00:37:41.047 [2024-11-17 02:57:49.267648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.047 [2024-11-17 02:57:49.267704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.047 qpair failed and we were unable to recover it. 00:37:41.047 [2024-11-17 02:57:49.267864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.047 [2024-11-17 02:57:49.267903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.047 qpair failed and we were unable to recover it. 
00:37:41.047 [2024-11-17 02:57:49.268059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.047 [2024-11-17 02:57:49.268106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.047 qpair failed and we were unable to recover it. 00:37:41.047 [2024-11-17 02:57:49.268258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.047 [2024-11-17 02:57:49.268307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.047 qpair failed and we were unable to recover it. 00:37:41.047 [2024-11-17 02:57:49.268434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.047 [2024-11-17 02:57:49.268474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.047 qpair failed and we were unable to recover it. 00:37:41.047 [2024-11-17 02:57:49.268637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.047 [2024-11-17 02:57:49.268694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.047 qpair failed and we were unable to recover it. 00:37:41.047 [2024-11-17 02:57:49.268809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.047 [2024-11-17 02:57:49.268848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.047 qpair failed and we were unable to recover it. 
00:37:41.047 [2024-11-17 02:57:49.268973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.047 [2024-11-17 02:57:49.269018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.047 qpair failed and we were unable to recover it. 00:37:41.047 [2024-11-17 02:57:49.269153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.047 [2024-11-17 02:57:49.269190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.047 qpair failed and we were unable to recover it. 00:37:41.047 [2024-11-17 02:57:49.269309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.047 [2024-11-17 02:57:49.269344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.047 qpair failed and we were unable to recover it. 00:37:41.047 [2024-11-17 02:57:49.269460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.047 [2024-11-17 02:57:49.269495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.047 qpair failed and we were unable to recover it. 00:37:41.047 [2024-11-17 02:57:49.269629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.047 [2024-11-17 02:57:49.269664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.047 qpair failed and we were unable to recover it. 
00:37:41.047 [2024-11-17 02:57:49.269828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.047 [2024-11-17 02:57:49.269864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.047 qpair failed and we were unable to recover it. 00:37:41.047 [2024-11-17 02:57:49.269976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.047 [2024-11-17 02:57:49.270016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.048 qpair failed and we were unable to recover it. 00:37:41.048 [2024-11-17 02:57:49.270130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.048 [2024-11-17 02:57:49.270167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.048 qpair failed and we were unable to recover it. 00:37:41.048 [2024-11-17 02:57:49.270292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.048 [2024-11-17 02:57:49.270340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.048 qpair failed and we were unable to recover it. 00:37:41.048 [2024-11-17 02:57:49.270463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.048 [2024-11-17 02:57:49.270519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.048 qpair failed and we were unable to recover it. 
00:37:41.048 [2024-11-17 02:57:49.270690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.048 [2024-11-17 02:57:49.270746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.048 qpair failed and we were unable to recover it. 00:37:41.048 [2024-11-17 02:57:49.270895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.048 [2024-11-17 02:57:49.270931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.048 qpair failed and we were unable to recover it. 00:37:41.048 [2024-11-17 02:57:49.271160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.048 [2024-11-17 02:57:49.271195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.048 qpair failed and we were unable to recover it. 00:37:41.048 [2024-11-17 02:57:49.271305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.048 [2024-11-17 02:57:49.271345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.048 qpair failed and we were unable to recover it. 00:37:41.048 [2024-11-17 02:57:49.271488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.048 [2024-11-17 02:57:49.271524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.048 qpair failed and we were unable to recover it. 
00:37:41.048 [2024-11-17 02:57:49.271722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.048 [2024-11-17 02:57:49.271791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.048 qpair failed and we were unable to recover it. 00:37:41.048 [2024-11-17 02:57:49.272911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.048 [2024-11-17 02:57:49.272957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.048 qpair failed and we were unable to recover it. 00:37:41.048 [2024-11-17 02:57:49.273136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.048 [2024-11-17 02:57:49.273192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.048 qpair failed and we were unable to recover it. 00:37:41.048 [2024-11-17 02:57:49.273305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.048 [2024-11-17 02:57:49.273340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.048 qpair failed and we were unable to recover it. 00:37:41.048 [2024-11-17 02:57:49.273475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.048 [2024-11-17 02:57:49.273511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.048 qpair failed and we were unable to recover it. 
00:37:41.048 [2024-11-17 02:57:49.273646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.048 [2024-11-17 02:57:49.273706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.048 qpair failed and we were unable to recover it. 00:37:41.048 [2024-11-17 02:57:49.273892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.048 [2024-11-17 02:57:49.273941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.048 qpair failed and we were unable to recover it. 00:37:41.048 [2024-11-17 02:57:49.274087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.048 [2024-11-17 02:57:49.274136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.048 qpair failed and we were unable to recover it. 00:37:41.048 [2024-11-17 02:57:49.274251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.048 [2024-11-17 02:57:49.274286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.048 qpair failed and we were unable to recover it. 00:37:41.048 [2024-11-17 02:57:49.274409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.048 [2024-11-17 02:57:49.274447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.048 qpair failed and we were unable to recover it. 
00:37:41.048 [2024-11-17 02:57:49.274562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.048 [2024-11-17 02:57:49.274626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.048 qpair failed and we were unable to recover it. 00:37:41.048 [2024-11-17 02:57:49.274758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.048 [2024-11-17 02:57:49.274802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.048 qpair failed and we were unable to recover it. 00:37:41.048 [2024-11-17 02:57:49.274927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.048 [2024-11-17 02:57:49.274965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.048 qpair failed and we were unable to recover it. 00:37:41.048 [2024-11-17 02:57:49.275127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.048 [2024-11-17 02:57:49.275163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.048 qpair failed and we were unable to recover it. 00:37:41.048 [2024-11-17 02:57:49.275287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.048 [2024-11-17 02:57:49.275322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.048 qpair failed and we were unable to recover it. 
00:37:41.048 [2024-11-17 02:57:49.275457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.048 [2024-11-17 02:57:49.275491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.048 qpair failed and we were unable to recover it. 00:37:41.048 [2024-11-17 02:57:49.275715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.048 [2024-11-17 02:57:49.275758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.048 qpair failed and we were unable to recover it. 00:37:41.048 [2024-11-17 02:57:49.275913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.048 [2024-11-17 02:57:49.275959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.048 qpair failed and we were unable to recover it. 00:37:41.048 [2024-11-17 02:57:49.276136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.048 [2024-11-17 02:57:49.276187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.048 qpair failed and we were unable to recover it. 00:37:41.048 [2024-11-17 02:57:49.276307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.048 [2024-11-17 02:57:49.276342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.048 qpair failed and we were unable to recover it. 
00:37:41.048 [2024-11-17 02:57:49.276507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.048 [2024-11-17 02:57:49.276548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.049 qpair failed and we were unable to recover it. 00:37:41.049 [2024-11-17 02:57:49.276696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.049 [2024-11-17 02:57:49.276733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.049 qpair failed and we were unable to recover it. 00:37:41.049 [2024-11-17 02:57:49.276919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.049 [2024-11-17 02:57:49.276992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.049 qpair failed and we were unable to recover it. 00:37:41.049 [2024-11-17 02:57:49.277155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.049 [2024-11-17 02:57:49.277194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.049 qpair failed and we were unable to recover it. 00:37:41.049 [2024-11-17 02:57:49.277323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.049 [2024-11-17 02:57:49.277380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.049 qpair failed and we were unable to recover it. 
00:37:41.049 [2024-11-17 02:57:49.277523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.049 [2024-11-17 02:57:49.277583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.049 qpair failed and we were unable to recover it. 00:37:41.049 [2024-11-17 02:57:49.277687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.049 [2024-11-17 02:57:49.277723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.049 qpair failed and we were unable to recover it. 00:37:41.049 [2024-11-17 02:57:49.277859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.049 [2024-11-17 02:57:49.277894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.049 qpair failed and we were unable to recover it. 00:37:41.049 [2024-11-17 02:57:49.278018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.049 [2024-11-17 02:57:49.278061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.049 qpair failed and we were unable to recover it. 00:37:41.049 [2024-11-17 02:57:49.278231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.049 [2024-11-17 02:57:49.278278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.049 qpair failed and we were unable to recover it. 
00:37:41.049 [2024-11-17 02:57:49.278402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.049 [2024-11-17 02:57:49.278437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.049 qpair failed and we were unable to recover it. 00:37:41.049 [2024-11-17 02:57:49.278612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.049 [2024-11-17 02:57:49.278647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.049 qpair failed and we were unable to recover it. 00:37:41.049 [2024-11-17 02:57:49.278781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.049 [2024-11-17 02:57:49.278815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.049 qpair failed and we were unable to recover it. 00:37:41.049 [2024-11-17 02:57:49.278989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.049 [2024-11-17 02:57:49.279024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.049 qpair failed and we were unable to recover it. 00:37:41.049 [2024-11-17 02:57:49.279153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.049 [2024-11-17 02:57:49.279198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.049 qpair failed and we were unable to recover it. 
00:37:41.049 [2024-11-17 02:57:49.279321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.049 [2024-11-17 02:57:49.279354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.049 qpair failed and we were unable to recover it. 00:37:41.049 [2024-11-17 02:57:49.279503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.049 [2024-11-17 02:57:49.279536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.049 qpair failed and we were unable to recover it. 00:37:41.049 [2024-11-17 02:57:49.279668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.049 [2024-11-17 02:57:49.279726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.049 qpair failed and we were unable to recover it. 00:37:41.049 [2024-11-17 02:57:49.279870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.049 [2024-11-17 02:57:49.279904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.049 qpair failed and we were unable to recover it. 00:37:41.049 [2024-11-17 02:57:49.280062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.049 [2024-11-17 02:57:49.280127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.049 qpair failed and we were unable to recover it. 
00:37:41.049 [2024-11-17 02:57:49.280305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.049 [2024-11-17 02:57:49.280346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.049 qpair failed and we were unable to recover it.
[... the same three-line sequence — posix.c:1054:posix_sock_create connect() failed (errno = 111), nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error, "qpair failed and we were unable to recover it." — repeats 115 times in total between 2024-11-17 02:57:49.280 and 02:57:49.302, cycling through tqpair values 0x61500021ff00, 0x615000210000, 0x6150001ffe80, and 0x6150001f2f00, all with addr=10.0.0.2, port=4420 ...]
00:37:41.053 [2024-11-17 02:57:49.302857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.053 [2024-11-17 02:57:49.302915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.053 qpair failed and we were unable to recover it. 00:37:41.053 [2024-11-17 02:57:49.303080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.053 [2024-11-17 02:57:49.303136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.053 qpair failed and we were unable to recover it. 00:37:41.053 [2024-11-17 02:57:49.303273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.053 [2024-11-17 02:57:49.303308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.053 qpair failed and we were unable to recover it. 00:37:41.053 [2024-11-17 02:57:49.303418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.053 [2024-11-17 02:57:49.303461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.053 qpair failed and we were unable to recover it. 00:37:41.053 [2024-11-17 02:57:49.303611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.053 [2024-11-17 02:57:49.303646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.053 qpair failed and we were unable to recover it. 
00:37:41.053 [2024-11-17 02:57:49.303803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.053 [2024-11-17 02:57:49.303838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.053 qpair failed and we were unable to recover it. 00:37:41.053 [2024-11-17 02:57:49.303949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.053 [2024-11-17 02:57:49.303985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.053 qpair failed and we were unable to recover it. 00:37:41.053 [2024-11-17 02:57:49.304134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.053 [2024-11-17 02:57:49.304170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.053 qpair failed and we were unable to recover it. 00:37:41.053 [2024-11-17 02:57:49.304302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.053 [2024-11-17 02:57:49.304337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.053 qpair failed and we were unable to recover it. 00:37:41.053 [2024-11-17 02:57:49.304530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.053 [2024-11-17 02:57:49.304577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.053 qpair failed and we were unable to recover it. 
00:37:41.053 [2024-11-17 02:57:49.304726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.053 [2024-11-17 02:57:49.304765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.053 qpair failed and we were unable to recover it. 00:37:41.053 [2024-11-17 02:57:49.304912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.053 [2024-11-17 02:57:49.304951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.053 qpair failed and we were unable to recover it. 00:37:41.053 [2024-11-17 02:57:49.305122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.053 [2024-11-17 02:57:49.305157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.053 qpair failed and we were unable to recover it. 00:37:41.053 [2024-11-17 02:57:49.305287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.053 [2024-11-17 02:57:49.305322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.053 qpair failed and we were unable to recover it. 00:37:41.053 [2024-11-17 02:57:49.305449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.053 [2024-11-17 02:57:49.305502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.053 qpair failed and we were unable to recover it. 
00:37:41.053 [2024-11-17 02:57:49.305650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.053 [2024-11-17 02:57:49.305688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.053 qpair failed and we were unable to recover it. 00:37:41.053 [2024-11-17 02:57:49.305826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.053 [2024-11-17 02:57:49.305864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.053 qpair failed and we were unable to recover it. 00:37:41.053 [2024-11-17 02:57:49.305990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.053 [2024-11-17 02:57:49.306029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.053 qpair failed and we were unable to recover it. 00:37:41.053 [2024-11-17 02:57:49.306206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.053 [2024-11-17 02:57:49.306242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.053 qpair failed and we were unable to recover it. 00:37:41.053 [2024-11-17 02:57:49.306375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.053 [2024-11-17 02:57:49.306420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.053 qpair failed and we were unable to recover it. 
00:37:41.053 [2024-11-17 02:57:49.306543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.053 [2024-11-17 02:57:49.306578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.053 qpair failed and we were unable to recover it. 00:37:41.053 [2024-11-17 02:57:49.306724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.053 [2024-11-17 02:57:49.306761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.053 qpair failed and we were unable to recover it. 00:37:41.053 [2024-11-17 02:57:49.306922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.053 [2024-11-17 02:57:49.306963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.053 qpair failed and we were unable to recover it. 00:37:41.053 [2024-11-17 02:57:49.307092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.053 [2024-11-17 02:57:49.307153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.053 qpair failed and we were unable to recover it. 00:37:41.053 [2024-11-17 02:57:49.307292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.053 [2024-11-17 02:57:49.307324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.053 qpair failed and we were unable to recover it. 
00:37:41.053 [2024-11-17 02:57:49.307485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.053 [2024-11-17 02:57:49.307519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.053 qpair failed and we were unable to recover it. 00:37:41.053 [2024-11-17 02:57:49.307653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.053 [2024-11-17 02:57:49.307692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.053 qpair failed and we were unable to recover it. 00:37:41.053 [2024-11-17 02:57:49.307850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.053 [2024-11-17 02:57:49.307889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.053 qpair failed and we were unable to recover it. 00:37:41.053 [2024-11-17 02:57:49.308031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.053 [2024-11-17 02:57:49.308087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.053 qpair failed and we were unable to recover it. 00:37:41.053 [2024-11-17 02:57:49.308255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.054 [2024-11-17 02:57:49.308290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.054 qpair failed and we were unable to recover it. 
00:37:41.054 [2024-11-17 02:57:49.308408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.054 [2024-11-17 02:57:49.308450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.054 qpair failed and we were unable to recover it. 00:37:41.054 [2024-11-17 02:57:49.308582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.054 [2024-11-17 02:57:49.308644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.054 qpair failed and we were unable to recover it. 00:37:41.054 [2024-11-17 02:57:49.308800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.054 [2024-11-17 02:57:49.308839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.054 qpair failed and we were unable to recover it. 00:37:41.054 [2024-11-17 02:57:49.309013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.054 [2024-11-17 02:57:49.309052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.054 qpair failed and we were unable to recover it. 00:37:41.054 [2024-11-17 02:57:49.309191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.054 [2024-11-17 02:57:49.309225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.054 qpair failed and we were unable to recover it. 
00:37:41.054 [2024-11-17 02:57:49.309333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.054 [2024-11-17 02:57:49.309367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.054 qpair failed and we were unable to recover it. 00:37:41.054 [2024-11-17 02:57:49.309512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.054 [2024-11-17 02:57:49.309547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.054 qpair failed and we were unable to recover it. 00:37:41.054 [2024-11-17 02:57:49.309658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.054 [2024-11-17 02:57:49.309691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.054 qpair failed and we were unable to recover it. 00:37:41.054 [2024-11-17 02:57:49.309806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.054 [2024-11-17 02:57:49.309846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.054 qpair failed and we were unable to recover it. 00:37:41.054 [2024-11-17 02:57:49.309985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.054 [2024-11-17 02:57:49.310033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.054 qpair failed and we were unable to recover it. 
00:37:41.054 [2024-11-17 02:57:49.310202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.054 [2024-11-17 02:57:49.310237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.054 qpair failed and we were unable to recover it. 00:37:41.054 [2024-11-17 02:57:49.310347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.054 [2024-11-17 02:57:49.310383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.054 qpair failed and we were unable to recover it. 00:37:41.054 [2024-11-17 02:57:49.310569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.054 [2024-11-17 02:57:49.310633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.054 qpair failed and we were unable to recover it. 00:37:41.054 [2024-11-17 02:57:49.310806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.054 [2024-11-17 02:57:49.310845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.054 qpair failed and we were unable to recover it. 00:37:41.054 [2024-11-17 02:57:49.310996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.054 [2024-11-17 02:57:49.311031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.054 qpair failed and we were unable to recover it. 
00:37:41.054 [2024-11-17 02:57:49.311151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.054 [2024-11-17 02:57:49.311187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.054 qpair failed and we were unable to recover it. 00:37:41.054 [2024-11-17 02:57:49.311292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.054 [2024-11-17 02:57:49.311327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.054 qpair failed and we were unable to recover it. 00:37:41.054 [2024-11-17 02:57:49.311499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.054 [2024-11-17 02:57:49.311533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.054 qpair failed and we were unable to recover it. 00:37:41.054 [2024-11-17 02:57:49.311694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.054 [2024-11-17 02:57:49.311732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.054 qpair failed and we were unable to recover it. 00:37:41.054 [2024-11-17 02:57:49.311887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.054 [2024-11-17 02:57:49.311939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.054 qpair failed and we were unable to recover it. 
00:37:41.054 [2024-11-17 02:57:49.312121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.054 [2024-11-17 02:57:49.312187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.054 qpair failed and we were unable to recover it. 00:37:41.054 [2024-11-17 02:57:49.312322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.054 [2024-11-17 02:57:49.312385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.054 qpair failed and we were unable to recover it. 00:37:41.054 [2024-11-17 02:57:49.312533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.054 [2024-11-17 02:57:49.312572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.054 qpair failed and we were unable to recover it. 00:37:41.054 [2024-11-17 02:57:49.312766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.054 [2024-11-17 02:57:49.312804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.054 qpair failed and we were unable to recover it. 00:37:41.054 [2024-11-17 02:57:49.312912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.054 [2024-11-17 02:57:49.312950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.054 qpair failed and we were unable to recover it. 
00:37:41.054 [2024-11-17 02:57:49.313087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.054 [2024-11-17 02:57:49.313127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.054 qpair failed and we were unable to recover it. 00:37:41.054 [2024-11-17 02:57:49.313236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.054 [2024-11-17 02:57:49.313272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.054 qpair failed and we were unable to recover it. 00:37:41.054 [2024-11-17 02:57:49.313386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.054 [2024-11-17 02:57:49.313424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.054 qpair failed and we were unable to recover it. 00:37:41.054 [2024-11-17 02:57:49.313624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.054 [2024-11-17 02:57:49.313686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.054 qpair failed and we were unable to recover it. 00:37:41.054 [2024-11-17 02:57:49.313834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.054 [2024-11-17 02:57:49.313896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.054 qpair failed and we were unable to recover it. 
00:37:41.054 [2024-11-17 02:57:49.314022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.054 [2024-11-17 02:57:49.314055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.054 qpair failed and we were unable to recover it. 00:37:41.054 [2024-11-17 02:57:49.314194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.054 [2024-11-17 02:57:49.314230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.054 qpair failed and we were unable to recover it. 00:37:41.054 [2024-11-17 02:57:49.314349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.054 [2024-11-17 02:57:49.314382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.054 qpair failed and we were unable to recover it. 00:37:41.054 [2024-11-17 02:57:49.314485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.054 [2024-11-17 02:57:49.314524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.054 qpair failed and we were unable to recover it. 00:37:41.055 [2024-11-17 02:57:49.314701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.055 [2024-11-17 02:57:49.314761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.055 qpair failed and we were unable to recover it. 
00:37:41.055 [2024-11-17 02:57:49.314919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.055 [2024-11-17 02:57:49.314956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.055 qpair failed and we were unable to recover it. 00:37:41.055 [2024-11-17 02:57:49.315071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.055 [2024-11-17 02:57:49.315131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.055 qpair failed and we were unable to recover it. 00:37:41.055 [2024-11-17 02:57:49.315233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.055 [2024-11-17 02:57:49.315268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.055 qpair failed and we were unable to recover it. 00:37:41.055 [2024-11-17 02:57:49.315413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.055 [2024-11-17 02:57:49.315445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.055 qpair failed and we were unable to recover it. 00:37:41.055 [2024-11-17 02:57:49.315555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.055 [2024-11-17 02:57:49.315616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.055 qpair failed and we were unable to recover it. 
00:37:41.055 [2024-11-17 02:57:49.315763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.055 [2024-11-17 02:57:49.315813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.055 qpair failed and we were unable to recover it. 00:37:41.055 [2024-11-17 02:57:49.315928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.055 [2024-11-17 02:57:49.315977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.055 qpair failed and we were unable to recover it. 00:37:41.055 [2024-11-17 02:57:49.316128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.055 [2024-11-17 02:57:49.316178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.055 qpair failed and we were unable to recover it. 00:37:41.055 [2024-11-17 02:57:49.316290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.055 [2024-11-17 02:57:49.316346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.055 qpair failed and we were unable to recover it. 00:37:41.055 [2024-11-17 02:57:49.316498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.055 [2024-11-17 02:57:49.316537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.055 qpair failed and we were unable to recover it. 
00:37:41.055 [2024-11-17 02:57:49.316676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.055 [2024-11-17 02:57:49.316715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.055 qpair failed and we were unable to recover it. 00:37:41.055 [2024-11-17 02:57:49.316896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.055 [2024-11-17 02:57:49.316935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.055 qpair failed and we were unable to recover it. 00:37:41.055 [2024-11-17 02:57:49.317091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.055 [2024-11-17 02:57:49.317151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.055 qpair failed and we were unable to recover it. 00:37:41.055 [2024-11-17 02:57:49.317261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.055 [2024-11-17 02:57:49.317296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.055 qpair failed and we were unable to recover it. 00:37:41.055 [2024-11-17 02:57:49.317467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.055 [2024-11-17 02:57:49.317505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.055 qpair failed and we were unable to recover it. 
00:37:41.055 [2024-11-17 02:57:49.317621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.055 [2024-11-17 02:57:49.317659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.055 qpair failed and we were unable to recover it. 00:37:41.055 [2024-11-17 02:57:49.317817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.055 [2024-11-17 02:57:49.317864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.055 qpair failed and we were unable to recover it. 00:37:41.055 [2024-11-17 02:57:49.318028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.055 [2024-11-17 02:57:49.318062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.055 qpair failed and we were unable to recover it. 00:37:41.055 [2024-11-17 02:57:49.318189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.055 [2024-11-17 02:57:49.318222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.055 qpair failed and we were unable to recover it. 00:37:41.055 [2024-11-17 02:57:49.318394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.055 [2024-11-17 02:57:49.318431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.055 qpair failed and we were unable to recover it. 
00:37:41.055 [2024-11-17 02:57:49.318571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.055 [2024-11-17 02:57:49.318606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.055 qpair failed and we were unable to recover it. 00:37:41.055 [2024-11-17 02:57:49.318773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.055 [2024-11-17 02:57:49.318818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.055 qpair failed and we were unable to recover it. 00:37:41.055 [2024-11-17 02:57:49.318966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.055 [2024-11-17 02:57:49.319003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.055 qpair failed and we were unable to recover it. 00:37:41.055 [2024-11-17 02:57:49.319158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.055 [2024-11-17 02:57:49.319193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.055 qpair failed and we were unable to recover it. 00:37:41.055 [2024-11-17 02:57:49.319303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.055 [2024-11-17 02:57:49.319356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.055 qpair failed and we were unable to recover it. 
00:37:41.055 [2024-11-17 02:57:49.319482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.055 [2024-11-17 02:57:49.319526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.055 qpair failed and we were unable to recover it. 00:37:41.055 [2024-11-17 02:57:49.319714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.055 [2024-11-17 02:57:49.319750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.055 qpair failed and we were unable to recover it. 00:37:41.055 [2024-11-17 02:57:49.319874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.055 [2024-11-17 02:57:49.319912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.055 qpair failed and we were unable to recover it. 00:37:41.055 [2024-11-17 02:57:49.320058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.055 [2024-11-17 02:57:49.320109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.055 qpair failed and we were unable to recover it. 00:37:41.055 [2024-11-17 02:57:49.320244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.055 [2024-11-17 02:57:49.320277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.055 qpair failed and we were unable to recover it. 
00:37:41.055 [2024-11-17 02:57:49.320426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.055 [2024-11-17 02:57:49.320466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.055 qpair failed and we were unable to recover it. 00:37:41.055 [2024-11-17 02:57:49.320607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.055 [2024-11-17 02:57:49.320644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.055 qpair failed and we were unable to recover it. 00:37:41.055 [2024-11-17 02:57:49.320873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.055 [2024-11-17 02:57:49.320910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.055 qpair failed and we were unable to recover it. 00:37:41.055 [2024-11-17 02:57:49.321056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.055 [2024-11-17 02:57:49.321117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.055 qpair failed and we were unable to recover it. 00:37:41.056 [2024-11-17 02:57:49.321246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.056 [2024-11-17 02:57:49.321280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.056 qpair failed and we were unable to recover it. 
00:37:41.056 [2024-11-17 02:57:49.321415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.056 [2024-11-17 02:57:49.321461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.056 qpair failed and we were unable to recover it. 00:37:41.056 [2024-11-17 02:57:49.321631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.056 [2024-11-17 02:57:49.321669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.056 qpair failed and we were unable to recover it. 00:37:41.056 [2024-11-17 02:57:49.321812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.056 [2024-11-17 02:57:49.321849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.056 qpair failed and we were unable to recover it. 00:37:41.056 [2024-11-17 02:57:49.322001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.056 [2024-11-17 02:57:49.322044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.056 qpair failed and we were unable to recover it. 00:37:41.056 [2024-11-17 02:57:49.322181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.056 [2024-11-17 02:57:49.322215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.056 qpair failed and we were unable to recover it. 
00:37:41.056 [2024-11-17 02:57:49.322331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.056 [2024-11-17 02:57:49.322365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.056 qpair failed and we were unable to recover it. 00:37:41.056 [2024-11-17 02:57:49.322493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.056 [2024-11-17 02:57:49.322529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.056 qpair failed and we were unable to recover it. 00:37:41.056 [2024-11-17 02:57:49.322673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.056 [2024-11-17 02:57:49.322711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.056 qpair failed and we were unable to recover it. 00:37:41.056 [2024-11-17 02:57:49.322836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.056 [2024-11-17 02:57:49.322886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.056 qpair failed and we were unable to recover it. 00:37:41.056 [2024-11-17 02:57:49.323077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.056 [2024-11-17 02:57:49.323127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.056 qpair failed and we were unable to recover it. 
00:37:41.056 [2024-11-17 02:57:49.323295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.056 [2024-11-17 02:57:49.323334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.056 qpair failed and we were unable to recover it. 00:37:41.056 [2024-11-17 02:57:49.323464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.056 [2024-11-17 02:57:49.323501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.056 qpair failed and we were unable to recover it. 00:37:41.056 [2024-11-17 02:57:49.323729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.056 [2024-11-17 02:57:49.323766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.056 qpair failed and we were unable to recover it. 00:37:41.056 [2024-11-17 02:57:49.323903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.056 [2024-11-17 02:57:49.323935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.056 qpair failed and we were unable to recover it. 00:37:41.056 [2024-11-17 02:57:49.324132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.056 [2024-11-17 02:57:49.324167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.056 qpair failed and we were unable to recover it. 
00:37:41.056 [2024-11-17 02:57:49.324272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.056 [2024-11-17 02:57:49.324308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.056 qpair failed and we were unable to recover it. 00:37:41.056 [2024-11-17 02:57:49.324417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.056 [2024-11-17 02:57:49.324461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.056 qpair failed and we were unable to recover it. 00:37:41.056 [2024-11-17 02:57:49.324602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.056 [2024-11-17 02:57:49.324636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.056 qpair failed and we were unable to recover it. 00:37:41.056 [2024-11-17 02:57:49.324782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.056 [2024-11-17 02:57:49.324820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.056 qpair failed and we were unable to recover it. 00:37:41.056 [2024-11-17 02:57:49.324999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.056 [2024-11-17 02:57:49.325036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.056 qpair failed and we were unable to recover it. 
00:37:41.056 [2024-11-17 02:57:49.325181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.056 [2024-11-17 02:57:49.325220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.056 qpair failed and we were unable to recover it. 00:37:41.056 [2024-11-17 02:57:49.325365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.056 [2024-11-17 02:57:49.325398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.056 qpair failed and we were unable to recover it. 00:37:41.056 [2024-11-17 02:57:49.325535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.056 [2024-11-17 02:57:49.325569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.056 qpair failed and we were unable to recover it. 00:37:41.056 [2024-11-17 02:57:49.325705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.056 [2024-11-17 02:57:49.325758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.056 qpair failed and we were unable to recover it. 00:37:41.056 [2024-11-17 02:57:49.325893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.056 [2024-11-17 02:57:49.325944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.056 qpair failed and we were unable to recover it. 
00:37:41.056 [2024-11-17 02:57:49.326067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.056 [2024-11-17 02:57:49.326148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.056 qpair failed and we were unable to recover it. 00:37:41.056 [2024-11-17 02:57:49.326303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.056 [2024-11-17 02:57:49.326337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.056 qpair failed and we were unable to recover it. 00:37:41.056 [2024-11-17 02:57:49.326471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.056 [2024-11-17 02:57:49.326504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.056 qpair failed and we were unable to recover it. 00:37:41.056 [2024-11-17 02:57:49.326638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.056 [2024-11-17 02:57:49.326678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.056 qpair failed and we were unable to recover it. 00:37:41.056 [2024-11-17 02:57:49.326852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.056 [2024-11-17 02:57:49.326886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.056 qpair failed and we were unable to recover it. 
00:37:41.057 [2024-11-17 02:57:49.327019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.057 [2024-11-17 02:57:49.327053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.057 qpair failed and we were unable to recover it. 00:37:41.057 [2024-11-17 02:57:49.327195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.057 [2024-11-17 02:57:49.327229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.057 qpair failed and we were unable to recover it. 00:37:41.057 [2024-11-17 02:57:49.327333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.057 [2024-11-17 02:57:49.327367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.057 qpair failed and we were unable to recover it. 00:37:41.057 [2024-11-17 02:57:49.327496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.057 [2024-11-17 02:57:49.327532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.057 qpair failed and we were unable to recover it. 00:37:41.057 [2024-11-17 02:57:49.327680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.057 [2024-11-17 02:57:49.327731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.057 qpair failed and we were unable to recover it. 
00:37:41.057 [2024-11-17 02:57:49.327883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.057 [2024-11-17 02:57:49.327921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.057 qpair failed and we were unable to recover it. 00:37:41.057 [2024-11-17 02:57:49.328047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.057 [2024-11-17 02:57:49.328092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.057 qpair failed and we were unable to recover it. 00:37:41.057 [2024-11-17 02:57:49.328239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.057 [2024-11-17 02:57:49.328287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.057 qpair failed and we were unable to recover it. 00:37:41.057 [2024-11-17 02:57:49.328472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.057 [2024-11-17 02:57:49.328521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.057 qpair failed and we were unable to recover it. 00:37:41.057 [2024-11-17 02:57:49.328685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.057 [2024-11-17 02:57:49.328726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.057 qpair failed and we were unable to recover it. 
00:37:41.057 [2024-11-17 02:57:49.328846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.057 [2024-11-17 02:57:49.328885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.057 qpair failed and we were unable to recover it. 00:37:41.057 [2024-11-17 02:57:49.328999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.057 [2024-11-17 02:57:49.329039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.057 qpair failed and we were unable to recover it. 00:37:41.057 [2024-11-17 02:57:49.329224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.057 [2024-11-17 02:57:49.329258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.057 qpair failed and we were unable to recover it. 00:37:41.057 [2024-11-17 02:57:49.329375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.057 [2024-11-17 02:57:49.329415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.057 qpair failed and we were unable to recover it. 00:37:41.057 [2024-11-17 02:57:49.329555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.057 [2024-11-17 02:57:49.329601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.057 qpair failed and we were unable to recover it. 
00:37:41.057 [2024-11-17 02:57:49.329787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.057 [2024-11-17 02:57:49.329823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.057 qpair failed and we were unable to recover it. 00:37:41.057 [2024-11-17 02:57:49.329940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.057 [2024-11-17 02:57:49.329977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.057 qpair failed and we were unable to recover it. 00:37:41.057 [2024-11-17 02:57:49.330092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.057 [2024-11-17 02:57:49.330163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.057 qpair failed and we were unable to recover it. 00:37:41.057 [2024-11-17 02:57:49.330280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.057 [2024-11-17 02:57:49.330312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.057 qpair failed and we were unable to recover it. 00:37:41.057 [2024-11-17 02:57:49.330421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.057 [2024-11-17 02:57:49.330464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.057 qpair failed and we were unable to recover it. 
00:37:41.057 [2024-11-17 02:57:49.330629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.057 [2024-11-17 02:57:49.330674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.057 qpair failed and we were unable to recover it. 00:37:41.057 [2024-11-17 02:57:49.330878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.057 [2024-11-17 02:57:49.330915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.057 qpair failed and we were unable to recover it. 00:37:41.057 [2024-11-17 02:57:49.331056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.057 [2024-11-17 02:57:49.331117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.057 qpair failed and we were unable to recover it. 00:37:41.057 [2024-11-17 02:57:49.331264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.057 [2024-11-17 02:57:49.331312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.057 qpair failed and we were unable to recover it. 00:37:41.057 [2024-11-17 02:57:49.331485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.057 [2024-11-17 02:57:49.331522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.057 qpair failed and we were unable to recover it. 
00:37:41.057 [2024-11-17 02:57:49.331723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.057 [2024-11-17 02:57:49.331780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.057 qpair failed and we were unable to recover it. 00:37:41.057 [2024-11-17 02:57:49.331929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.057 [2024-11-17 02:57:49.331967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.057 qpair failed and we were unable to recover it. 00:37:41.057 [2024-11-17 02:57:49.332128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.057 [2024-11-17 02:57:49.332167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.057 qpair failed and we were unable to recover it. 00:37:41.057 [2024-11-17 02:57:49.332306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.057 [2024-11-17 02:57:49.332341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.057 qpair failed and we were unable to recover it. 00:37:41.057 [2024-11-17 02:57:49.332510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.057 [2024-11-17 02:57:49.332546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.057 qpair failed and we were unable to recover it. 
00:37:41.057 [2024-11-17 02:57:49.332709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.057 [2024-11-17 02:57:49.332744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.057 qpair failed and we were unable to recover it. 00:37:41.057 [2024-11-17 02:57:49.332947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.057 [2024-11-17 02:57:49.332985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.057 qpair failed and we were unable to recover it. 00:37:41.057 [2024-11-17 02:57:49.333126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.057 [2024-11-17 02:57:49.333185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.057 qpair failed and we were unable to recover it. 00:37:41.057 [2024-11-17 02:57:49.333332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.057 [2024-11-17 02:57:49.333364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.057 qpair failed and we were unable to recover it. 00:37:41.057 [2024-11-17 02:57:49.333549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.058 [2024-11-17 02:57:49.333585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.058 qpair failed and we were unable to recover it. 
00:37:41.058 [2024-11-17 02:57:49.333738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.058 [2024-11-17 02:57:49.333786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.058 qpair failed and we were unable to recover it. 00:37:41.058 [2024-11-17 02:57:49.333937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.058 [2024-11-17 02:57:49.333971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.058 qpair failed and we were unable to recover it. 00:37:41.058 [2024-11-17 02:57:49.334073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.058 [2024-11-17 02:57:49.334119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.058 qpair failed and we were unable to recover it. 00:37:41.058 [2024-11-17 02:57:49.334252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.058 [2024-11-17 02:57:49.334290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.058 qpair failed and we were unable to recover it. 00:37:41.058 [2024-11-17 02:57:49.334452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.058 [2024-11-17 02:57:49.334487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.058 qpair failed and we were unable to recover it. 
00:37:41.058 [2024-11-17 02:57:49.334637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.058 [2024-11-17 02:57:49.334683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.058 qpair failed and we were unable to recover it. 00:37:41.058 [2024-11-17 02:57:49.334802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.058 [2024-11-17 02:57:49.334837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.058 qpair failed and we were unable to recover it. 00:37:41.058 [2024-11-17 02:57:49.334972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.058 [2024-11-17 02:57:49.335009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.058 qpair failed and we were unable to recover it. 00:37:41.058 [2024-11-17 02:57:49.335141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.058 [2024-11-17 02:57:49.335177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.058 qpair failed and we were unable to recover it. 00:37:41.058 [2024-11-17 02:57:49.335280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.058 [2024-11-17 02:57:49.335313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.058 qpair failed and we were unable to recover it. 
00:37:41.058 [2024-11-17 02:57:49.335430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.058 [2024-11-17 02:57:49.335470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.058 qpair failed and we were unable to recover it. 00:37:41.058 [2024-11-17 02:57:49.335607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.058 [2024-11-17 02:57:49.335640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.058 qpair failed and we were unable to recover it. 00:37:41.058 [2024-11-17 02:57:49.335884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.058 [2024-11-17 02:57:49.335934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.058 qpair failed and we were unable to recover it. 00:37:41.058 [2024-11-17 02:57:49.336076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.058 [2024-11-17 02:57:49.336132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.058 qpair failed and we were unable to recover it. 00:37:41.058 [2024-11-17 02:57:49.336233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.058 [2024-11-17 02:57:49.336269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.058 qpair failed and we were unable to recover it. 
00:37:41.058 [2024-11-17 02:57:49.336376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.058 [2024-11-17 02:57:49.336418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.058 qpair failed and we were unable to recover it.
00:37:41.058 [2024-11-17 02:57:49.336529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.058 [2024-11-17 02:57:49.336563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.058 qpair failed and we were unable to recover it.
00:37:41.058 [2024-11-17 02:57:49.336699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.058 [2024-11-17 02:57:49.336735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.058 qpair failed and we were unable to recover it.
00:37:41.058 [2024-11-17 02:57:49.336841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.058 [2024-11-17 02:57:49.336878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.058 qpair failed and we were unable to recover it.
00:37:41.058 [2024-11-17 02:57:49.337026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.058 [2024-11-17 02:57:49.337059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.058 qpair failed and we were unable to recover it.
00:37:41.058 [2024-11-17 02:57:49.337208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.058 [2024-11-17 02:57:49.337241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.058 qpair failed and we were unable to recover it.
00:37:41.058 [2024-11-17 02:57:49.337349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.058 [2024-11-17 02:57:49.337388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.058 qpair failed and we were unable to recover it.
00:37:41.058 [2024-11-17 02:57:49.337534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.058 [2024-11-17 02:57:49.337567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.058 qpair failed and we were unable to recover it.
00:37:41.058 [2024-11-17 02:57:49.337699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.058 [2024-11-17 02:57:49.337731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.058 qpair failed and we were unable to recover it.
00:37:41.058 [2024-11-17 02:57:49.337836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.058 [2024-11-17 02:57:49.337871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.058 qpair failed and we were unable to recover it.
00:37:41.058 [2024-11-17 02:57:49.338006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.058 [2024-11-17 02:57:49.338040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.058 qpair failed and we were unable to recover it.
00:37:41.058 [2024-11-17 02:57:49.338174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.058 [2024-11-17 02:57:49.338211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.058 qpair failed and we were unable to recover it.
00:37:41.058 [2024-11-17 02:57:49.338356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.059 [2024-11-17 02:57:49.338392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.059 qpair failed and we were unable to recover it.
00:37:41.059 [2024-11-17 02:57:49.338549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.059 [2024-11-17 02:57:49.338583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.059 qpair failed and we were unable to recover it.
00:37:41.059 [2024-11-17 02:57:49.338714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.059 [2024-11-17 02:57:49.338752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.059 qpair failed and we were unable to recover it.
00:37:41.059 [2024-11-17 02:57:49.338886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.059 [2024-11-17 02:57:49.338920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.059 qpair failed and we were unable to recover it.
00:37:41.059 [2024-11-17 02:57:49.339056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.059 [2024-11-17 02:57:49.339092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.059 qpair failed and we were unable to recover it.
00:37:41.059 [2024-11-17 02:57:49.339225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.059 [2024-11-17 02:57:49.339258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.059 qpair failed and we were unable to recover it.
00:37:41.059 [2024-11-17 02:57:49.339398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.059 [2024-11-17 02:57:49.339434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.059 qpair failed and we were unable to recover it.
00:37:41.059 [2024-11-17 02:57:49.339542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.059 [2024-11-17 02:57:49.339577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.059 qpair failed and we were unable to recover it.
00:37:41.059 [2024-11-17 02:57:49.339680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.059 [2024-11-17 02:57:49.339714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.059 qpair failed and we were unable to recover it.
00:37:41.059 [2024-11-17 02:57:49.339827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.059 [2024-11-17 02:57:49.339862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.059 qpair failed and we were unable to recover it.
00:37:41.059 [2024-11-17 02:57:49.339964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.059 [2024-11-17 02:57:49.340005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.059 qpair failed and we were unable to recover it.
00:37:41.059 [2024-11-17 02:57:49.340152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.059 [2024-11-17 02:57:49.340201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.059 qpair failed and we were unable to recover it.
00:37:41.059 [2024-11-17 02:57:49.340359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.059 [2024-11-17 02:57:49.340397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.059 qpair failed and we were unable to recover it.
00:37:41.059 [2024-11-17 02:57:49.340517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.059 [2024-11-17 02:57:49.340552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.059 qpair failed and we were unable to recover it.
00:37:41.059 [2024-11-17 02:57:49.340668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.059 [2024-11-17 02:57:49.340703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.059 qpair failed and we were unable to recover it.
00:37:41.059 [2024-11-17 02:57:49.340837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.059 [2024-11-17 02:57:49.340872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.059 qpair failed and we were unable to recover it.
00:37:41.059 [2024-11-17 02:57:49.340986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.059 [2024-11-17 02:57:49.341021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.059 qpair failed and we were unable to recover it.
00:37:41.059 [2024-11-17 02:57:49.341183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.059 [2024-11-17 02:57:49.341217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.059 qpair failed and we were unable to recover it.
00:37:41.059 [2024-11-17 02:57:49.341350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.059 [2024-11-17 02:57:49.341395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.059 qpair failed and we were unable to recover it.
00:37:41.059 [2024-11-17 02:57:49.341507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.059 [2024-11-17 02:57:49.341542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.059 qpair failed and we were unable to recover it.
00:37:41.059 [2024-11-17 02:57:49.341669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.059 [2024-11-17 02:57:49.341703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.059 qpair failed and we were unable to recover it.
00:37:41.059 [2024-11-17 02:57:49.341819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.059 [2024-11-17 02:57:49.341855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.059 qpair failed and we were unable to recover it.
00:37:41.059 [2024-11-17 02:57:49.341985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.059 [2024-11-17 02:57:49.342018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.059 qpair failed and we were unable to recover it.
00:37:41.059 [2024-11-17 02:57:49.342134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.059 [2024-11-17 02:57:49.342167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.059 qpair failed and we were unable to recover it.
00:37:41.059 [2024-11-17 02:57:49.342297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.059 [2024-11-17 02:57:49.342334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.059 qpair failed and we were unable to recover it.
00:37:41.059 [2024-11-17 02:57:49.342459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.059 [2024-11-17 02:57:49.342497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.059 qpair failed and we were unable to recover it.
00:37:41.059 [2024-11-17 02:57:49.342632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.059 [2024-11-17 02:57:49.342665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.059 qpair failed and we were unable to recover it.
00:37:41.059 [2024-11-17 02:57:49.342776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.059 [2024-11-17 02:57:49.342811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.059 qpair failed and we were unable to recover it.
00:37:41.059 [2024-11-17 02:57:49.342958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.059 [2024-11-17 02:57:49.342991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.059 qpair failed and we were unable to recover it.
00:37:41.059 [2024-11-17 02:57:49.343162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.059 [2024-11-17 02:57:49.343214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.059 qpair failed and we were unable to recover it.
00:37:41.059 [2024-11-17 02:57:49.343338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.059 [2024-11-17 02:57:49.343373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.059 qpair failed and we were unable to recover it.
00:37:41.059 [2024-11-17 02:57:49.343495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.059 [2024-11-17 02:57:49.343530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.059 qpair failed and we were unable to recover it.
00:37:41.059 [2024-11-17 02:57:49.343658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.059 [2024-11-17 02:57:49.343692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.059 qpair failed and we were unable to recover it.
00:37:41.059 [2024-11-17 02:57:49.343830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.059 [2024-11-17 02:57:49.343863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.060 qpair failed and we were unable to recover it.
00:37:41.060 [2024-11-17 02:57:49.343991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.060 [2024-11-17 02:57:49.344026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.060 qpair failed and we were unable to recover it.
00:37:41.060 [2024-11-17 02:57:49.344170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.060 [2024-11-17 02:57:49.344207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.060 qpair failed and we were unable to recover it.
00:37:41.060 [2024-11-17 02:57:49.344368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.060 [2024-11-17 02:57:49.344413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.060 qpair failed and we were unable to recover it.
00:37:41.060 [2024-11-17 02:57:49.344556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.060 [2024-11-17 02:57:49.344591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.060 qpair failed and we were unable to recover it.
00:37:41.060 [2024-11-17 02:57:49.344706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.060 [2024-11-17 02:57:49.344742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.060 qpair failed and we were unable to recover it.
00:37:41.060 [2024-11-17 02:57:49.344879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.060 [2024-11-17 02:57:49.344911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.060 qpair failed and we were unable to recover it.
00:37:41.060 [2024-11-17 02:57:49.345023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.060 [2024-11-17 02:57:49.345062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.060 qpair failed and we were unable to recover it.
00:37:41.060 [2024-11-17 02:57:49.345192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.060 [2024-11-17 02:57:49.345226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.060 qpair failed and we were unable to recover it.
00:37:41.060 [2024-11-17 02:57:49.345387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.060 [2024-11-17 02:57:49.345429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.060 qpair failed and we were unable to recover it.
00:37:41.060 [2024-11-17 02:57:49.345552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.060 [2024-11-17 02:57:49.345585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.060 qpair failed and we were unable to recover it.
00:37:41.060 [2024-11-17 02:57:49.345727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.060 [2024-11-17 02:57:49.345763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.060 qpair failed and we were unable to recover it.
00:37:41.060 [2024-11-17 02:57:49.345914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.060 [2024-11-17 02:57:49.345962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.060 qpair failed and we were unable to recover it.
00:37:41.060 [2024-11-17 02:57:49.346149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.060 [2024-11-17 02:57:49.346186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.060 qpair failed and we were unable to recover it.
00:37:41.060 [2024-11-17 02:57:49.346295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.060 [2024-11-17 02:57:49.346333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.060 qpair failed and we were unable to recover it.
00:37:41.060 [2024-11-17 02:57:49.346484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.060 [2024-11-17 02:57:49.346518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.060 qpair failed and we were unable to recover it.
00:37:41.060 [2024-11-17 02:57:49.346648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.060 [2024-11-17 02:57:49.346682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.060 qpair failed and we were unable to recover it.
00:37:41.060 [2024-11-17 02:57:49.346795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.060 [2024-11-17 02:57:49.346830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.060 qpair failed and we were unable to recover it.
00:37:41.060 [2024-11-17 02:57:49.346964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.060 [2024-11-17 02:57:49.347001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.060 qpair failed and we were unable to recover it.
00:37:41.060 [2024-11-17 02:57:49.347119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.060 [2024-11-17 02:57:49.347154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.060 qpair failed and we were unable to recover it.
00:37:41.060 [2024-11-17 02:57:49.347255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.060 [2024-11-17 02:57:49.347290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.060 qpair failed and we were unable to recover it.
00:37:41.060 [2024-11-17 02:57:49.347424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.060 [2024-11-17 02:57:49.347459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.060 qpair failed and we were unable to recover it.
00:37:41.060 [2024-11-17 02:57:49.347589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.060 [2024-11-17 02:57:49.347624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.060 qpair failed and we were unable to recover it.
00:37:41.060 [2024-11-17 02:57:49.347753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.060 [2024-11-17 02:57:49.347787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.060 qpair failed and we were unable to recover it.
00:37:41.060 [2024-11-17 02:57:49.347922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.060 [2024-11-17 02:57:49.347958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.060 qpair failed and we were unable to recover it.
00:37:41.060 [2024-11-17 02:57:49.348064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.060 [2024-11-17 02:57:49.348107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.060 qpair failed and we were unable to recover it.
00:37:41.060 [2024-11-17 02:57:49.348234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.060 [2024-11-17 02:57:49.348270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.060 qpair failed and we were unable to recover it.
00:37:41.060 [2024-11-17 02:57:49.348380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.060 [2024-11-17 02:57:49.348423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.060 qpair failed and we were unable to recover it.
00:37:41.060 [2024-11-17 02:57:49.348583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.060 [2024-11-17 02:57:49.348617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.060 qpair failed and we were unable to recover it.
00:37:41.060 [2024-11-17 02:57:49.348727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.060 [2024-11-17 02:57:49.348761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.060 qpair failed and we were unable to recover it.
00:37:41.060 [2024-11-17 02:57:49.348869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.060 [2024-11-17 02:57:49.348905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.060 qpair failed and we were unable to recover it.
00:37:41.060 [2024-11-17 02:57:49.349045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.060 [2024-11-17 02:57:49.349091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.060 qpair failed and we were unable to recover it.
00:37:41.060 [2024-11-17 02:57:49.349210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.060 [2024-11-17 02:57:49.349250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.060 qpair failed and we were unable to recover it.
00:37:41.060 [2024-11-17 02:57:49.349385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.060 [2024-11-17 02:57:49.349423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.060 qpair failed and we were unable to recover it.
00:37:41.060 [2024-11-17 02:57:49.349562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.061 [2024-11-17 02:57:49.349596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.061 qpair failed and we were unable to recover it.
00:37:41.061 [2024-11-17 02:57:49.349774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.061 [2024-11-17 02:57:49.349813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.061 qpair failed and we were unable to recover it.
00:37:41.061 [2024-11-17 02:57:49.349985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.061 [2024-11-17 02:57:49.350021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.061 qpair failed and we were unable to recover it.
00:37:41.061 [2024-11-17 02:57:49.350182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.061 [2024-11-17 02:57:49.350218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.061 qpair failed and we were unable to recover it.
00:37:41.061 [2024-11-17 02:57:49.350328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.061 [2024-11-17 02:57:49.350362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.061 qpair failed and we were unable to recover it.
00:37:41.061 [2024-11-17 02:57:49.350480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.061 [2024-11-17 02:57:49.350514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.061 qpair failed and we were unable to recover it.
00:37:41.061 [2024-11-17 02:57:49.350616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.061 [2024-11-17 02:57:49.350651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.061 qpair failed and we were unable to recover it.
00:37:41.061 [2024-11-17 02:57:49.350749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.061 [2024-11-17 02:57:49.350783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.061 qpair failed and we were unable to recover it.
00:37:41.061 [2024-11-17 02:57:49.350923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.061 [2024-11-17 02:57:49.350960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.061 qpair failed and we were unable to recover it.
00:37:41.061 [2024-11-17 02:57:49.351062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.061 [2024-11-17 02:57:49.351104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.061 qpair failed and we were unable to recover it.
00:37:41.061 [2024-11-17 02:57:49.351245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.061 [2024-11-17 02:57:49.351281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.061 qpair failed and we were unable to recover it.
00:37:41.061 [2024-11-17 02:57:49.351392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.061 [2024-11-17 02:57:49.351428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.061 qpair failed and we were unable to recover it.
00:37:41.061 [2024-11-17 02:57:49.351574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.061 [2024-11-17 02:57:49.351607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.061 qpair failed and we were unable to recover it.
00:37:41.061 [2024-11-17 02:57:49.351740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.061 [2024-11-17 02:57:49.351781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.061 qpair failed and we were unable to recover it.
00:37:41.061 [2024-11-17 02:57:49.351919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.061 [2024-11-17 02:57:49.351952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.061 qpair failed and we were unable to recover it.
00:37:41.061 [2024-11-17 02:57:49.352093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.061 [2024-11-17 02:57:49.352137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.061 qpair failed and we were unable to recover it.
00:37:41.061 [2024-11-17 02:57:49.352278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.061 [2024-11-17 02:57:49.352311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.061 qpair failed and we were unable to recover it. 00:37:41.061 [2024-11-17 02:57:49.352445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.061 [2024-11-17 02:57:49.352480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.061 qpair failed and we were unable to recover it. 00:37:41.061 [2024-11-17 02:57:49.352614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.061 [2024-11-17 02:57:49.352649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.061 qpair failed and we were unable to recover it. 00:37:41.061 [2024-11-17 02:57:49.352766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.061 [2024-11-17 02:57:49.352804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.061 qpair failed and we were unable to recover it. 00:37:41.061 [2024-11-17 02:57:49.352941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.061 [2024-11-17 02:57:49.352976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.061 qpair failed and we were unable to recover it. 
00:37:41.061 [2024-11-17 02:57:49.353090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.061 [2024-11-17 02:57:49.353137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.061 qpair failed and we were unable to recover it. 00:37:41.061 [2024-11-17 02:57:49.353270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.061 [2024-11-17 02:57:49.353302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.061 qpair failed and we were unable to recover it. 00:37:41.061 [2024-11-17 02:57:49.353418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.061 [2024-11-17 02:57:49.353455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.061 qpair failed and we were unable to recover it. 00:37:41.061 [2024-11-17 02:57:49.353611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.061 [2024-11-17 02:57:49.353651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.061 qpair failed and we were unable to recover it. 00:37:41.061 [2024-11-17 02:57:49.353828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.061 [2024-11-17 02:57:49.353861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.061 qpair failed and we were unable to recover it. 
00:37:41.061 [2024-11-17 02:57:49.353997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.061 [2024-11-17 02:57:49.354034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.061 qpair failed and we were unable to recover it. 00:37:41.061 [2024-11-17 02:57:49.354173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.061 [2024-11-17 02:57:49.354207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.061 qpair failed and we were unable to recover it. 00:37:41.061 [2024-11-17 02:57:49.354313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.061 [2024-11-17 02:57:49.354349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.061 qpair failed and we were unable to recover it. 00:37:41.061 [2024-11-17 02:57:49.354493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.061 [2024-11-17 02:57:49.354529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.061 qpair failed and we were unable to recover it. 00:37:41.061 [2024-11-17 02:57:49.354673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.061 [2024-11-17 02:57:49.354710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.061 qpair failed and we were unable to recover it. 
00:37:41.061 [2024-11-17 02:57:49.354876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.061 [2024-11-17 02:57:49.354910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.061 qpair failed and we were unable to recover it. 00:37:41.061 [2024-11-17 02:57:49.355041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.061 [2024-11-17 02:57:49.355090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.061 qpair failed and we were unable to recover it. 00:37:41.061 [2024-11-17 02:57:49.355265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.061 [2024-11-17 02:57:49.355299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.061 qpair failed and we were unable to recover it. 00:37:41.061 [2024-11-17 02:57:49.355466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.061 [2024-11-17 02:57:49.355501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.061 qpair failed and we were unable to recover it. 00:37:41.061 [2024-11-17 02:57:49.355635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.061 [2024-11-17 02:57:49.355672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.061 qpair failed and we were unable to recover it. 
00:37:41.061 [2024-11-17 02:57:49.355777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.061 [2024-11-17 02:57:49.355812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.062 qpair failed and we were unable to recover it. 00:37:41.062 [2024-11-17 02:57:49.355946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.062 [2024-11-17 02:57:49.355980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.062 qpair failed and we were unable to recover it. 00:37:41.062 [2024-11-17 02:57:49.356092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.062 [2024-11-17 02:57:49.356139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.062 qpair failed and we were unable to recover it. 00:37:41.062 [2024-11-17 02:57:49.356272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.062 [2024-11-17 02:57:49.356306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.062 qpair failed and we were unable to recover it. 00:37:41.062 [2024-11-17 02:57:49.356467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.062 [2024-11-17 02:57:49.356502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.062 qpair failed and we were unable to recover it. 
00:37:41.062 [2024-11-17 02:57:49.356602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.062 [2024-11-17 02:57:49.356635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.062 qpair failed and we were unable to recover it. 00:37:41.062 [2024-11-17 02:57:49.356765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.062 [2024-11-17 02:57:49.356802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.062 qpair failed and we were unable to recover it. 00:37:41.062 [2024-11-17 02:57:49.356950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.062 [2024-11-17 02:57:49.356985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.062 qpair failed and we were unable to recover it. 00:37:41.062 [2024-11-17 02:57:49.357156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.062 [2024-11-17 02:57:49.357207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.062 qpair failed and we were unable to recover it. 00:37:41.062 [2024-11-17 02:57:49.357346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.062 [2024-11-17 02:57:49.357381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.062 qpair failed and we were unable to recover it. 
00:37:41.062 [2024-11-17 02:57:49.357529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.062 [2024-11-17 02:57:49.357563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.062 qpair failed and we were unable to recover it. 00:37:41.062 [2024-11-17 02:57:49.357670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.062 [2024-11-17 02:57:49.357704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.062 qpair failed and we were unable to recover it. 00:37:41.062 [2024-11-17 02:57:49.357837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.062 [2024-11-17 02:57:49.357871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.062 qpair failed and we were unable to recover it. 00:37:41.062 [2024-11-17 02:57:49.358007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.062 [2024-11-17 02:57:49.358042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.062 qpair failed and we were unable to recover it. 00:37:41.062 [2024-11-17 02:57:49.358191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.062 [2024-11-17 02:57:49.358226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.062 qpair failed and we were unable to recover it. 
00:37:41.062 [2024-11-17 02:57:49.358332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.062 [2024-11-17 02:57:49.358365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.062 qpair failed and we were unable to recover it. 00:37:41.062 [2024-11-17 02:57:49.358516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.062 [2024-11-17 02:57:49.358558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.062 qpair failed and we were unable to recover it. 00:37:41.062 [2024-11-17 02:57:49.358738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.062 [2024-11-17 02:57:49.358791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.062 qpair failed and we were unable to recover it. 00:37:41.062 [2024-11-17 02:57:49.358950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.062 [2024-11-17 02:57:49.358987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.062 qpair failed and we were unable to recover it. 00:37:41.062 [2024-11-17 02:57:49.359111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.062 [2024-11-17 02:57:49.359146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.062 qpair failed and we were unable to recover it. 
00:37:41.062 [2024-11-17 02:57:49.359255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.062 [2024-11-17 02:57:49.359292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.062 qpair failed and we were unable to recover it. 00:37:41.062 [2024-11-17 02:57:49.359461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.062 [2024-11-17 02:57:49.359495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.062 qpair failed and we were unable to recover it. 00:37:41.062 [2024-11-17 02:57:49.359594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.062 [2024-11-17 02:57:49.359628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.062 qpair failed and we were unable to recover it. 00:37:41.062 [2024-11-17 02:57:49.359799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.062 [2024-11-17 02:57:49.359834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.062 qpair failed and we were unable to recover it. 00:37:41.062 [2024-11-17 02:57:49.359976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.062 [2024-11-17 02:57:49.360011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.062 qpair failed and we were unable to recover it. 
00:37:41.062 [2024-11-17 02:57:49.360144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.062 [2024-11-17 02:57:49.360181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.062 qpair failed and we were unable to recover it. 00:37:41.062 [2024-11-17 02:57:49.360347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.062 [2024-11-17 02:57:49.360387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.062 qpair failed and we were unable to recover it. 00:37:41.062 [2024-11-17 02:57:49.360490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.062 [2024-11-17 02:57:49.360525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.062 qpair failed and we were unable to recover it. 00:37:41.062 [2024-11-17 02:57:49.360661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.062 [2024-11-17 02:57:49.360696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.062 qpair failed and we were unable to recover it. 00:37:41.062 [2024-11-17 02:57:49.360881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.062 [2024-11-17 02:57:49.360920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.062 qpair failed and we were unable to recover it. 
00:37:41.062 [2024-11-17 02:57:49.361111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.062 [2024-11-17 02:57:49.361165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.062 qpair failed and we were unable to recover it. 00:37:41.062 [2024-11-17 02:57:49.361277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.062 [2024-11-17 02:57:49.361311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.062 qpair failed and we were unable to recover it. 00:37:41.062 [2024-11-17 02:57:49.361415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.062 [2024-11-17 02:57:49.361457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.062 qpair failed and we were unable to recover it. 00:37:41.062 [2024-11-17 02:57:49.361565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.062 [2024-11-17 02:57:49.361599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.062 qpair failed and we were unable to recover it. 00:37:41.062 [2024-11-17 02:57:49.361758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.062 [2024-11-17 02:57:49.361794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.062 qpair failed and we were unable to recover it. 
00:37:41.062 [2024-11-17 02:57:49.361932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.062 [2024-11-17 02:57:49.361968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.062 qpair failed and we were unable to recover it. 00:37:41.063 [2024-11-17 02:57:49.362088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.063 [2024-11-17 02:57:49.362129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.063 qpair failed and we were unable to recover it. 00:37:41.063 [2024-11-17 02:57:49.362271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.063 [2024-11-17 02:57:49.362306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.063 qpair failed and we were unable to recover it. 00:37:41.063 [2024-11-17 02:57:49.362439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.063 [2024-11-17 02:57:49.362471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.063 qpair failed and we were unable to recover it. 00:37:41.063 [2024-11-17 02:57:49.362628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.063 [2024-11-17 02:57:49.362669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.063 qpair failed and we were unable to recover it. 
00:37:41.063 [2024-11-17 02:57:49.362781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.063 [2024-11-17 02:57:49.362816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.063 qpair failed and we were unable to recover it. 00:37:41.063 [2024-11-17 02:57:49.362930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.063 [2024-11-17 02:57:49.362966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.063 qpair failed and we were unable to recover it. 00:37:41.063 [2024-11-17 02:57:49.363119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.063 [2024-11-17 02:57:49.363161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.063 qpair failed and we were unable to recover it. 00:37:41.063 [2024-11-17 02:57:49.363267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.063 [2024-11-17 02:57:49.363302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.063 qpair failed and we were unable to recover it. 00:37:41.063 [2024-11-17 02:57:49.363419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.063 [2024-11-17 02:57:49.363454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.063 qpair failed and we were unable to recover it. 
00:37:41.063 [2024-11-17 02:57:49.363590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.063 [2024-11-17 02:57:49.363625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.063 qpair failed and we were unable to recover it. 00:37:41.063 [2024-11-17 02:57:49.363761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.063 [2024-11-17 02:57:49.363796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.063 qpair failed and we were unable to recover it. 00:37:41.063 [2024-11-17 02:57:49.363929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.063 [2024-11-17 02:57:49.363968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.063 qpair failed and we were unable to recover it. 00:37:41.063 [2024-11-17 02:57:49.364109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.063 [2024-11-17 02:57:49.364151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.063 qpair failed and we were unable to recover it. 00:37:41.063 [2024-11-17 02:57:49.364258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.063 [2024-11-17 02:57:49.364293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.063 qpair failed and we were unable to recover it. 
00:37:41.063 [2024-11-17 02:57:49.364429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.063 [2024-11-17 02:57:49.364466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.063 qpair failed and we were unable to recover it. 00:37:41.063 [2024-11-17 02:57:49.364604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.063 [2024-11-17 02:57:49.364641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.063 qpair failed and we were unable to recover it. 00:37:41.063 [2024-11-17 02:57:49.364753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.063 [2024-11-17 02:57:49.364787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.063 qpair failed and we were unable to recover it. 00:37:41.063 [2024-11-17 02:57:49.364929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.063 [2024-11-17 02:57:49.364963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.063 qpair failed and we were unable to recover it. 00:37:41.063 [2024-11-17 02:57:49.365072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.063 [2024-11-17 02:57:49.365120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.063 qpair failed and we were unable to recover it. 
00:37:41.063 [2024-11-17 02:57:49.365281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.063 [2024-11-17 02:57:49.365321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.063 qpair failed and we were unable to recover it. 00:37:41.063 [2024-11-17 02:57:49.365480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.063 [2024-11-17 02:57:49.365513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.063 qpair failed and we were unable to recover it. 00:37:41.063 [2024-11-17 02:57:49.365644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.063 [2024-11-17 02:57:49.365677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.063 qpair failed and we were unable to recover it. 00:37:41.063 [2024-11-17 02:57:49.365816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.063 [2024-11-17 02:57:49.365852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.063 qpair failed and we were unable to recover it. 00:37:41.063 [2024-11-17 02:57:49.365961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.063 [2024-11-17 02:57:49.365996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.063 qpair failed and we were unable to recover it. 
00:37:41.063 [2024-11-17 02:57:49.366156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.063 [2024-11-17 02:57:49.366190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.063 qpair failed and we were unable to recover it. 00:37:41.063 [2024-11-17 02:57:49.366326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.063 [2024-11-17 02:57:49.366361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.063 qpair failed and we were unable to recover it. 00:37:41.063 [2024-11-17 02:57:49.366472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.063 [2024-11-17 02:57:49.366504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.063 qpair failed and we were unable to recover it. 00:37:41.063 [2024-11-17 02:57:49.366640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.063 [2024-11-17 02:57:49.366674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.063 qpair failed and we were unable to recover it. 00:37:41.063 [2024-11-17 02:57:49.366839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.063 [2024-11-17 02:57:49.366873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.063 qpair failed and we were unable to recover it. 
00:37:41.063 [2024-11-17 02:57:49.367026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.063 [2024-11-17 02:57:49.367075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.063 qpair failed and we were unable to recover it. 00:37:41.063 [2024-11-17 02:57:49.367209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.063 [2024-11-17 02:57:49.367246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.063 qpair failed and we were unable to recover it. 00:37:41.063 [2024-11-17 02:57:49.367450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.063 [2024-11-17 02:57:49.367487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.063 qpair failed and we were unable to recover it. 00:37:41.063 [2024-11-17 02:57:49.367618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.064 [2024-11-17 02:57:49.367663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.064 qpair failed and we were unable to recover it. 00:37:41.064 [2024-11-17 02:57:49.367801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.064 [2024-11-17 02:57:49.367835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.064 qpair failed and we were unable to recover it. 
00:37:41.064 [2024-11-17 02:57:49.367968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.064 [2024-11-17 02:57:49.368001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.064 qpair failed and we were unable to recover it. 00:37:41.064 [2024-11-17 02:57:49.368140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.064 [2024-11-17 02:57:49.368176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.064 qpair failed and we were unable to recover it. 00:37:41.064 [2024-11-17 02:57:49.368289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.064 [2024-11-17 02:57:49.368322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.064 qpair failed and we were unable to recover it. 00:37:41.064 [2024-11-17 02:57:49.368452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.064 [2024-11-17 02:57:49.368489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.064 qpair failed and we were unable to recover it. 00:37:41.064 [2024-11-17 02:57:49.368619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.064 [2024-11-17 02:57:49.368654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.064 qpair failed and we were unable to recover it. 
00:37:41.064 [2024-11-17 02:57:49.368767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.064 [2024-11-17 02:57:49.368802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.064 qpair failed and we were unable to recover it. 00:37:41.064 [2024-11-17 02:57:49.368943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.064 [2024-11-17 02:57:49.368979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.064 qpair failed and we were unable to recover it. 00:37:41.064 [2024-11-17 02:57:49.369117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.064 [2024-11-17 02:57:49.369153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.064 qpair failed and we were unable to recover it. 00:37:41.064 [2024-11-17 02:57:49.369288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.064 [2024-11-17 02:57:49.369322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.064 qpair failed and we were unable to recover it. 00:37:41.064 [2024-11-17 02:57:49.369467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.064 [2024-11-17 02:57:49.369516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.064 qpair failed and we were unable to recover it. 
00:37:41.064 [2024-11-17 02:57:49.369744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.064 [2024-11-17 02:57:49.369799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.064 qpair failed and we were unable to recover it. 00:37:41.064 [2024-11-17 02:57:49.369963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.064 [2024-11-17 02:57:49.369997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.064 qpair failed and we were unable to recover it. 00:37:41.064 [2024-11-17 02:57:49.370106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.064 [2024-11-17 02:57:49.370148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.064 qpair failed and we were unable to recover it. 00:37:41.064 [2024-11-17 02:57:49.370288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.064 [2024-11-17 02:57:49.370323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.064 qpair failed and we were unable to recover it. 00:37:41.064 [2024-11-17 02:57:49.370448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.064 [2024-11-17 02:57:49.370481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.064 qpair failed and we were unable to recover it. 
00:37:41.064 [2024-11-17 02:57:49.370616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.064 [2024-11-17 02:57:49.370650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.064 qpair failed and we were unable to recover it. 00:37:41.064 [2024-11-17 02:57:49.370755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.064 [2024-11-17 02:57:49.370792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.064 qpair failed and we were unable to recover it. 00:37:41.064 [2024-11-17 02:57:49.370911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.064 [2024-11-17 02:57:49.370944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.064 qpair failed and we were unable to recover it. 00:37:41.064 [2024-11-17 02:57:49.371090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.064 [2024-11-17 02:57:49.371133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.064 qpair failed and we were unable to recover it. 00:37:41.064 [2024-11-17 02:57:49.371286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.064 [2024-11-17 02:57:49.371334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.064 qpair failed and we were unable to recover it. 
00:37:41.064 [2024-11-17 02:57:49.371514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.064 [2024-11-17 02:57:49.371551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.064 qpair failed and we were unable to recover it. 00:37:41.064 [2024-11-17 02:57:49.371661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.064 [2024-11-17 02:57:49.371696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.064 qpair failed and we were unable to recover it. 00:37:41.064 [2024-11-17 02:57:49.371829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.064 [2024-11-17 02:57:49.371863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.064 qpair failed and we were unable to recover it. 00:37:41.064 [2024-11-17 02:57:49.371965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.064 [2024-11-17 02:57:49.372000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.064 qpair failed and we were unable to recover it. 00:37:41.064 [2024-11-17 02:57:49.372144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.064 [2024-11-17 02:57:49.372179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.064 qpair failed and we were unable to recover it. 
00:37:41.064 [2024-11-17 02:57:49.372356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.064 [2024-11-17 02:57:49.372404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.064 qpair failed and we were unable to recover it. 00:37:41.064 [2024-11-17 02:57:49.372567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.064 [2024-11-17 02:57:49.372605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.064 qpair failed and we were unable to recover it. 00:37:41.064 [2024-11-17 02:57:49.372723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.064 [2024-11-17 02:57:49.372761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.064 qpair failed and we were unable to recover it. 00:37:41.064 [2024-11-17 02:57:49.372900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.064 [2024-11-17 02:57:49.372934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.064 qpair failed and we were unable to recover it. 00:37:41.064 [2024-11-17 02:57:49.373066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.064 [2024-11-17 02:57:49.373123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.064 qpair failed and we were unable to recover it. 
00:37:41.064 [2024-11-17 02:57:49.373228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.064 [2024-11-17 02:57:49.373262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.064 qpair failed and we were unable to recover it. 00:37:41.064 [2024-11-17 02:57:49.373450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.064 [2024-11-17 02:57:49.373492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.064 qpair failed and we were unable to recover it. 00:37:41.064 [2024-11-17 02:57:49.373647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.064 [2024-11-17 02:57:49.373681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.064 qpair failed and we were unable to recover it. 00:37:41.064 [2024-11-17 02:57:49.373811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.064 [2024-11-17 02:57:49.373845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.064 qpair failed and we were unable to recover it. 00:37:41.064 [2024-11-17 02:57:49.373984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.064 [2024-11-17 02:57:49.374018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.064 qpair failed and we were unable to recover it. 
00:37:41.064 [2024-11-17 02:57:49.374148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.064 [2024-11-17 02:57:49.374182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.064 qpair failed and we were unable to recover it. 00:37:41.064 [2024-11-17 02:57:49.374327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.064 [2024-11-17 02:57:49.374366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.064 qpair failed and we were unable to recover it. 00:37:41.065 [2024-11-17 02:57:49.374476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.065 [2024-11-17 02:57:49.374512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.065 qpair failed and we were unable to recover it. 00:37:41.065 [2024-11-17 02:57:49.374672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.065 [2024-11-17 02:57:49.374706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.065 qpair failed and we were unable to recover it. 00:37:41.065 [2024-11-17 02:57:49.374877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.065 [2024-11-17 02:57:49.374914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.065 qpair failed and we were unable to recover it. 
00:37:41.065 [2024-11-17 02:57:49.375089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.065 [2024-11-17 02:57:49.375130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.065 qpair failed and we were unable to recover it. 00:37:41.065 [2024-11-17 02:57:49.375267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.065 [2024-11-17 02:57:49.375301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.065 qpair failed and we were unable to recover it. 00:37:41.065 [2024-11-17 02:57:49.375401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.065 [2024-11-17 02:57:49.375435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.065 qpair failed and we were unable to recover it. 00:37:41.065 [2024-11-17 02:57:49.375584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.065 [2024-11-17 02:57:49.375621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.065 qpair failed and we were unable to recover it. 00:37:41.065 [2024-11-17 02:57:49.375766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.065 [2024-11-17 02:57:49.375799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.065 qpair failed and we were unable to recover it. 
00:37:41.065 [2024-11-17 02:57:49.375939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.065 [2024-11-17 02:57:49.375973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.065 qpair failed and we were unable to recover it. 00:37:41.065 [2024-11-17 02:57:49.376121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.065 [2024-11-17 02:57:49.376156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.065 qpair failed and we were unable to recover it. 00:37:41.065 [2024-11-17 02:57:49.376299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.065 [2024-11-17 02:57:49.376332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.065 qpair failed and we were unable to recover it. 00:37:41.065 [2024-11-17 02:57:49.376506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.065 [2024-11-17 02:57:49.376546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.065 qpair failed and we were unable to recover it. 00:37:41.065 [2024-11-17 02:57:49.376698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.065 [2024-11-17 02:57:49.376736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.065 qpair failed and we were unable to recover it. 
00:37:41.065 [2024-11-17 02:57:49.376861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.065 [2024-11-17 02:57:49.376894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.065 qpair failed and we were unable to recover it. 00:37:41.065 [2024-11-17 02:57:49.377013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.065 [2024-11-17 02:57:49.377048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.065 qpair failed and we were unable to recover it. 00:37:41.065 [2024-11-17 02:57:49.377240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.065 [2024-11-17 02:57:49.377294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.065 qpair failed and we were unable to recover it. 00:37:41.065 [2024-11-17 02:57:49.377413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.065 [2024-11-17 02:57:49.377450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.065 qpair failed and we were unable to recover it. 00:37:41.065 [2024-11-17 02:57:49.377619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.065 [2024-11-17 02:57:49.377655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.065 qpair failed and we were unable to recover it. 
00:37:41.065 [2024-11-17 02:57:49.377784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.065 [2024-11-17 02:57:49.377819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.065 qpair failed and we were unable to recover it. 00:37:41.065 [2024-11-17 02:57:49.377981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.065 [2024-11-17 02:57:49.378015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.065 qpair failed and we were unable to recover it. 00:37:41.065 [2024-11-17 02:57:49.378156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.065 [2024-11-17 02:57:49.378191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.065 qpair failed and we were unable to recover it. 00:37:41.065 [2024-11-17 02:57:49.378324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.065 [2024-11-17 02:57:49.378358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.065 qpair failed and we were unable to recover it. 00:37:41.065 [2024-11-17 02:57:49.378471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.065 [2024-11-17 02:57:49.378505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.065 qpair failed and we were unable to recover it. 
00:37:41.065 [2024-11-17 02:57:49.378680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.065 [2024-11-17 02:57:49.378747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.065 qpair failed and we were unable to recover it. 00:37:41.065 [2024-11-17 02:57:49.378869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.065 [2024-11-17 02:57:49.378909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.065 qpair failed and we were unable to recover it. 00:37:41.065 [2024-11-17 02:57:49.379066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.065 [2024-11-17 02:57:49.379107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.065 qpair failed and we were unable to recover it. 00:37:41.065 [2024-11-17 02:57:49.379248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.065 [2024-11-17 02:57:49.379282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.065 qpair failed and we were unable to recover it. 00:37:41.065 [2024-11-17 02:57:49.379419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.065 [2024-11-17 02:57:49.379453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.065 qpair failed and we were unable to recover it. 
00:37:41.065 [2024-11-17 02:57:49.379601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.065 [2024-11-17 02:57:49.379634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.065 qpair failed and we were unable to recover it. 00:37:41.065 [2024-11-17 02:57:49.379749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.065 [2024-11-17 02:57:49.379785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.065 qpair failed and we were unable to recover it. 00:37:41.065 [2024-11-17 02:57:49.379911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.065 [2024-11-17 02:57:49.379948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.065 qpair failed and we were unable to recover it. 00:37:41.065 [2024-11-17 02:57:49.380083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.065 [2024-11-17 02:57:49.380132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.065 qpair failed and we were unable to recover it. 00:37:41.065 [2024-11-17 02:57:49.380275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.065 [2024-11-17 02:57:49.380310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.065 qpair failed and we were unable to recover it. 
00:37:41.065 [2024-11-17 02:57:49.380448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.065 [2024-11-17 02:57:49.380482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.065 qpair failed and we were unable to recover it. 00:37:41.065 [2024-11-17 02:57:49.380644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.065 [2024-11-17 02:57:49.380677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.065 qpair failed and we were unable to recover it. 00:37:41.065 [2024-11-17 02:57:49.380814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.065 [2024-11-17 02:57:49.380848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.065 qpair failed and we were unable to recover it. 00:37:41.065 [2024-11-17 02:57:49.381007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.065 [2024-11-17 02:57:49.381060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.065 qpair failed and we were unable to recover it. 00:37:41.066 [2024-11-17 02:57:49.381182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.066 [2024-11-17 02:57:49.381218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.066 qpair failed and we were unable to recover it. 
00:37:41.066 [2024-11-17 02:57:49.381377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.066 [2024-11-17 02:57:49.381411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.066 qpair failed and we were unable to recover it. 00:37:41.066 [2024-11-17 02:57:49.381511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.066 [2024-11-17 02:57:49.381546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.066 qpair failed and we were unable to recover it. 00:37:41.066 [2024-11-17 02:57:49.381651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.066 [2024-11-17 02:57:49.381685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.066 qpair failed and we were unable to recover it. 00:37:41.066 [2024-11-17 02:57:49.381829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.066 [2024-11-17 02:57:49.381864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.066 qpair failed and we were unable to recover it. 00:37:41.066 [2024-11-17 02:57:49.382011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.066 [2024-11-17 02:57:49.382046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.066 qpair failed and we were unable to recover it. 
00:37:41.066 [2024-11-17 02:57:49.382192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.066 [2024-11-17 02:57:49.382228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.066 qpair failed and we were unable to recover it. 00:37:41.066 [2024-11-17 02:57:49.382361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.066 [2024-11-17 02:57:49.382395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.066 qpair failed and we were unable to recover it. 00:37:41.066 [2024-11-17 02:57:49.382504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.066 [2024-11-17 02:57:49.382537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.066 qpair failed and we were unable to recover it. 00:37:41.066 [2024-11-17 02:57:49.382678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.066 [2024-11-17 02:57:49.382711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.066 qpair failed and we were unable to recover it. 00:37:41.066 [2024-11-17 02:57:49.382820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.066 [2024-11-17 02:57:49.382856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.066 qpair failed and we were unable to recover it. 
00:37:41.066 [2024-11-17 02:57:49.382996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.066 [2024-11-17 02:57:49.383032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.066 qpair failed and we were unable to recover it. 00:37:41.066 [2024-11-17 02:57:49.383186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.066 [2024-11-17 02:57:49.383224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.066 qpair failed and we were unable to recover it. 00:37:41.066 [2024-11-17 02:57:49.383384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.066 [2024-11-17 02:57:49.383419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.066 qpair failed and we were unable to recover it. 00:37:41.066 [2024-11-17 02:57:49.383588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.066 [2024-11-17 02:57:49.383622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.066 qpair failed and we were unable to recover it. 00:37:41.066 [2024-11-17 02:57:49.383725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.066 [2024-11-17 02:57:49.383758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.066 qpair failed and we were unable to recover it. 
00:37:41.066 [2024-11-17 02:57:49.383918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.066 [2024-11-17 02:57:49.383952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.066 qpair failed and we were unable to recover it. 00:37:41.066 [2024-11-17 02:57:49.384113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.066 [2024-11-17 02:57:49.384163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.066 qpair failed and we were unable to recover it. 00:37:41.066 [2024-11-17 02:57:49.384285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.066 [2024-11-17 02:57:49.384328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.066 qpair failed and we were unable to recover it. 00:37:41.066 [2024-11-17 02:57:49.384444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.066 [2024-11-17 02:57:49.384481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.066 qpair failed and we were unable to recover it. 00:37:41.066 [2024-11-17 02:57:49.384658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.066 [2024-11-17 02:57:49.384693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.066 qpair failed and we were unable to recover it. 
00:37:41.066 [2024-11-17 02:57:49.384831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.066 [2024-11-17 02:57:49.384876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.066 qpair failed and we were unable to recover it. 00:37:41.066 [2024-11-17 02:57:49.385035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.066 [2024-11-17 02:57:49.385074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.066 qpair failed and we were unable to recover it. 00:37:41.066 [2024-11-17 02:57:49.385271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.066 [2024-11-17 02:57:49.385305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.066 qpair failed and we were unable to recover it. 00:37:41.066 [2024-11-17 02:57:49.385419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.066 [2024-11-17 02:57:49.385451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.066 qpair failed and we were unable to recover it. 00:37:41.066 [2024-11-17 02:57:49.385590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.066 [2024-11-17 02:57:49.385642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.066 qpair failed and we were unable to recover it. 
00:37:41.066 [2024-11-17 02:57:49.385807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.066 [2024-11-17 02:57:49.385846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.066 qpair failed and we were unable to recover it.
00:37:41.066 [2024-11-17 02:57:49.385991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.066 [2024-11-17 02:57:49.386039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.066 qpair failed and we were unable to recover it.
00:37:41.066 [2024-11-17 02:57:49.386176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.066 [2024-11-17 02:57:49.386213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.066 qpair failed and we were unable to recover it.
00:37:41.066 [2024-11-17 02:57:49.386318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.066 [2024-11-17 02:57:49.386352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.066 qpair failed and we were unable to recover it.
00:37:41.066 [2024-11-17 02:57:49.386494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.066 [2024-11-17 02:57:49.386528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.066 qpair failed and we were unable to recover it.
00:37:41.066 [2024-11-17 02:57:49.386663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.066 [2024-11-17 02:57:49.386697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.066 qpair failed and we were unable to recover it.
00:37:41.066 [2024-11-17 02:57:49.386818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.066 [2024-11-17 02:57:49.386860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.066 qpair failed and we were unable to recover it.
00:37:41.066 [2024-11-17 02:57:49.387036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.066 [2024-11-17 02:57:49.387116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.066 qpair failed and we were unable to recover it.
00:37:41.067 [2024-11-17 02:57:49.387245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.067 [2024-11-17 02:57:49.387290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.067 qpair failed and we were unable to recover it.
00:37:41.067 [2024-11-17 02:57:49.387433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.067 [2024-11-17 02:57:49.387468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.067 qpair failed and we were unable to recover it.
00:37:41.067 [2024-11-17 02:57:49.387578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.067 [2024-11-17 02:57:49.387614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.067 qpair failed and we were unable to recover it.
00:37:41.067 [2024-11-17 02:57:49.387817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.067 [2024-11-17 02:57:49.387856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.067 qpair failed and we were unable to recover it.
00:37:41.067 [2024-11-17 02:57:49.387973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.067 [2024-11-17 02:57:49.388010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.067 qpair failed and we were unable to recover it.
00:37:41.067 [2024-11-17 02:57:49.388205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.067 [2024-11-17 02:57:49.388241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.067 qpair failed and we were unable to recover it.
00:37:41.067 [2024-11-17 02:57:49.388349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.067 [2024-11-17 02:57:49.388389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.067 qpair failed and we were unable to recover it.
00:37:41.067 [2024-11-17 02:57:49.388506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.067 [2024-11-17 02:57:49.388541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.067 qpair failed and we were unable to recover it.
00:37:41.067 [2024-11-17 02:57:49.388728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.067 [2024-11-17 02:57:49.388779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.067 qpair failed and we were unable to recover it.
00:37:41.067 [2024-11-17 02:57:49.388889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.067 [2024-11-17 02:57:49.388923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.067 qpair failed and we were unable to recover it.
00:37:41.067 [2024-11-17 02:57:49.389053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.067 [2024-11-17 02:57:49.389087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.067 qpair failed and we were unable to recover it.
00:37:41.067 [2024-11-17 02:57:49.389234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.067 [2024-11-17 02:57:49.389273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.067 qpair failed and we were unable to recover it.
00:37:41.067 [2024-11-17 02:57:49.389449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.067 [2024-11-17 02:57:49.389486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.067 qpair failed and we were unable to recover it.
00:37:41.067 [2024-11-17 02:57:49.389595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.067 [2024-11-17 02:57:49.389632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.067 qpair failed and we were unable to recover it.
00:37:41.067 [2024-11-17 02:57:49.389762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.067 [2024-11-17 02:57:49.389800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.067 qpair failed and we were unable to recover it.
00:37:41.067 [2024-11-17 02:57:49.389967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.067 [2024-11-17 02:57:49.390006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.067 qpair failed and we were unable to recover it.
00:37:41.067 [2024-11-17 02:57:49.390160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.067 [2024-11-17 02:57:49.390195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.067 qpair failed and we were unable to recover it.
00:37:41.067 [2024-11-17 02:57:49.390333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.067 [2024-11-17 02:57:49.390369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.067 qpair failed and we were unable to recover it.
00:37:41.067 [2024-11-17 02:57:49.390576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.067 [2024-11-17 02:57:49.390629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.067 qpair failed and we were unable to recover it.
00:37:41.067 [2024-11-17 02:57:49.390751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.067 [2024-11-17 02:57:49.390793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.067 qpair failed and we were unable to recover it.
00:37:41.067 [2024-11-17 02:57:49.390916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.067 [2024-11-17 02:57:49.390954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.067 qpair failed and we were unable to recover it.
00:37:41.067 [2024-11-17 02:57:49.391077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.067 [2024-11-17 02:57:49.391119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.067 qpair failed and we were unable to recover it.
00:37:41.067 [2024-11-17 02:57:49.391286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.067 [2024-11-17 02:57:49.391321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.067 qpair failed and we were unable to recover it.
00:37:41.067 [2024-11-17 02:57:49.391465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.067 [2024-11-17 02:57:49.391521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.067 qpair failed and we were unable to recover it.
00:37:41.067 [2024-11-17 02:57:49.391685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.067 [2024-11-17 02:57:49.391727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.067 qpair failed and we were unable to recover it.
00:37:41.067 [2024-11-17 02:57:49.391920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.067 [2024-11-17 02:57:49.391984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.067 qpair failed and we were unable to recover it.
00:37:41.067 [2024-11-17 02:57:49.392178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.067 [2024-11-17 02:57:49.392215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.067 qpair failed and we were unable to recover it.
00:37:41.067 [2024-11-17 02:57:49.392351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.067 [2024-11-17 02:57:49.392386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.067 qpair failed and we were unable to recover it.
00:37:41.067 [2024-11-17 02:57:49.392496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.067 [2024-11-17 02:57:49.392531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.067 qpair failed and we were unable to recover it.
00:37:41.067 [2024-11-17 02:57:49.392693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.067 [2024-11-17 02:57:49.392727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.067 qpair failed and we were unable to recover it.
00:37:41.067 [2024-11-17 02:57:49.392852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.067 [2024-11-17 02:57:49.392900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.067 qpair failed and we were unable to recover it.
00:37:41.067 [2024-11-17 02:57:49.393023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.067 [2024-11-17 02:57:49.393059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.067 qpair failed and we were unable to recover it.
00:37:41.067 [2024-11-17 02:57:49.393208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.067 [2024-11-17 02:57:49.393244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.067 qpair failed and we were unable to recover it.
00:37:41.067 [2024-11-17 02:57:49.393380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.067 [2024-11-17 02:57:49.393413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.068 qpair failed and we were unable to recover it.
00:37:41.068 [2024-11-17 02:57:49.393572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.068 [2024-11-17 02:57:49.393629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.068 qpair failed and we were unable to recover it.
00:37:41.068 [2024-11-17 02:57:49.393820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.068 [2024-11-17 02:57:49.393877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.068 qpair failed and we were unable to recover it.
00:37:41.068 [2024-11-17 02:57:49.394057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.068 [2024-11-17 02:57:49.394104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.068 qpair failed and we were unable to recover it.
00:37:41.068 [2024-11-17 02:57:49.394275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.068 [2024-11-17 02:57:49.394310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.068 qpair failed and we were unable to recover it.
00:37:41.068 [2024-11-17 02:57:49.394474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.068 [2024-11-17 02:57:49.394512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.068 qpair failed and we were unable to recover it.
00:37:41.068 [2024-11-17 02:57:49.394663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.068 [2024-11-17 02:57:49.394701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.068 qpair failed and we were unable to recover it.
00:37:41.068 [2024-11-17 02:57:49.394825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.068 [2024-11-17 02:57:49.394862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.068 qpair failed and we were unable to recover it.
00:37:41.068 [2024-11-17 02:57:49.395003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.068 [2024-11-17 02:57:49.395041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.068 qpair failed and we were unable to recover it.
00:37:41.068 [2024-11-17 02:57:49.395181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.068 [2024-11-17 02:57:49.395216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.068 qpair failed and we were unable to recover it.
00:37:41.068 [2024-11-17 02:57:49.395347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.068 [2024-11-17 02:57:49.395381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.068 qpair failed and we were unable to recover it.
00:37:41.068 [2024-11-17 02:57:49.395485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.068 [2024-11-17 02:57:49.395519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.068 qpair failed and we were unable to recover it.
00:37:41.068 [2024-11-17 02:57:49.395662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.068 [2024-11-17 02:57:49.395697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.068 qpair failed and we were unable to recover it.
00:37:41.068 [2024-11-17 02:57:49.395867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.068 [2024-11-17 02:57:49.395933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.068 qpair failed and we were unable to recover it.
00:37:41.068 [2024-11-17 02:57:49.396073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.068 [2024-11-17 02:57:49.396122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.068 qpair failed and we were unable to recover it.
00:37:41.068 [2024-11-17 02:57:49.396229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.068 [2024-11-17 02:57:49.396265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.068 qpair failed and we were unable to recover it.
00:37:41.068 [2024-11-17 02:57:49.396370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.068 [2024-11-17 02:57:49.396405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.068 qpair failed and we were unable to recover it.
00:37:41.068 [2024-11-17 02:57:49.396531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.068 [2024-11-17 02:57:49.396579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.068 qpair failed and we were unable to recover it.
00:37:41.068 [2024-11-17 02:57:49.396727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.068 [2024-11-17 02:57:49.396780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.068 qpair failed and we were unable to recover it.
00:37:41.068 [2024-11-17 02:57:49.396895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.068 [2024-11-17 02:57:49.396948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.068 qpair failed and we were unable to recover it.
00:37:41.068 [2024-11-17 02:57:49.397052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.068 [2024-11-17 02:57:49.397085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.068 qpair failed and we were unable to recover it.
00:37:41.068 [2024-11-17 02:57:49.397227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.068 [2024-11-17 02:57:49.397261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.068 qpair failed and we were unable to recover it.
00:37:41.068 [2024-11-17 02:57:49.397397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.068 [2024-11-17 02:57:49.397431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.068 qpair failed and we were unable to recover it.
00:37:41.068 [2024-11-17 02:57:49.397579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.068 [2024-11-17 02:57:49.397617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.068 qpair failed and we were unable to recover it.
00:37:41.068 [2024-11-17 02:57:49.397759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.068 [2024-11-17 02:57:49.397796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.068 qpair failed and we were unable to recover it.
00:37:41.068 [2024-11-17 02:57:49.397954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.068 [2024-11-17 02:57:49.397996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.068 qpair failed and we were unable to recover it.
00:37:41.068 [2024-11-17 02:57:49.398133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.068 [2024-11-17 02:57:49.398170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.068 qpair failed and we were unable to recover it.
00:37:41.068 [2024-11-17 02:57:49.398311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.068 [2024-11-17 02:57:49.398348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.068 qpair failed and we were unable to recover it.
00:37:41.068 [2024-11-17 02:57:49.398508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.068 [2024-11-17 02:57:49.398545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.068 qpair failed and we were unable to recover it.
00:37:41.068 [2024-11-17 02:57:49.398669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.068 [2024-11-17 02:57:49.398720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.068 qpair failed and we were unable to recover it.
00:37:41.068 [2024-11-17 02:57:49.398881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.068 [2024-11-17 02:57:49.398925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.068 qpair failed and we were unable to recover it.
00:37:41.068 [2024-11-17 02:57:49.399084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.068 [2024-11-17 02:57:49.399150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.068 qpair failed and we were unable to recover it.
00:37:41.068 [2024-11-17 02:57:49.399264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.068 [2024-11-17 02:57:49.399298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.068 qpair failed and we were unable to recover it.
00:37:41.068 [2024-11-17 02:57:49.399433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.068 [2024-11-17 02:57:49.399467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.068 qpair failed and we were unable to recover it.
00:37:41.068 [2024-11-17 02:57:49.399571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.068 [2024-11-17 02:57:49.399604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.068 qpair failed and we were unable to recover it.
00:37:41.068 [2024-11-17 02:57:49.399766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.068 [2024-11-17 02:57:49.399804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.068 qpair failed and we were unable to recover it.
00:37:41.068 [2024-11-17 02:57:49.399940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.068 [2024-11-17 02:57:49.399975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.068 qpair failed and we were unable to recover it.
00:37:41.068 [2024-11-17 02:57:49.400127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.068 [2024-11-17 02:57:49.400165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.068 qpair failed and we were unable to recover it.
00:37:41.068 [2024-11-17 02:57:49.400311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.068 [2024-11-17 02:57:49.400364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.068 qpair failed and we were unable to recover it.
00:37:41.068 [2024-11-17 02:57:49.400520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.068 [2024-11-17 02:57:49.400556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.068 qpair failed and we were unable to recover it.
00:37:41.068 [2024-11-17 02:57:49.400705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.068 [2024-11-17 02:57:49.400743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.068 qpair failed and we were unable to recover it.
00:37:41.068 [2024-11-17 02:57:49.400902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.069 [2024-11-17 02:57:49.400938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.069 qpair failed and we were unable to recover it.
00:37:41.069 [2024-11-17 02:57:49.401057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.069 [2024-11-17 02:57:49.401103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.069 qpair failed and we were unable to recover it.
00:37:41.069 [2024-11-17 02:57:49.401250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.069 [2024-11-17 02:57:49.401285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.069 qpair failed and we were unable to recover it.
00:37:41.069 [2024-11-17 02:57:49.401447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.069 [2024-11-17 02:57:49.401486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.069 qpair failed and we were unable to recover it.
00:37:41.069 [2024-11-17 02:57:49.401641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.069 [2024-11-17 02:57:49.401682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.069 qpair failed and we were unable to recover it.
00:37:41.069 [2024-11-17 02:57:49.401826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.069 [2024-11-17 02:57:49.401864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.069 qpair failed and we were unable to recover it.
00:37:41.069 [2024-11-17 02:57:49.401985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.069 [2024-11-17 02:57:49.402019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.069 qpair failed and we were unable to recover it.
00:37:41.069 [2024-11-17 02:57:49.402162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.069 [2024-11-17 02:57:49.402198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.069 qpair failed and we were unable to recover it.
00:37:41.069 [2024-11-17 02:57:49.402333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.069 [2024-11-17 02:57:49.402382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.069 qpair failed and we were unable to recover it.
00:37:41.069 [2024-11-17 02:57:49.402541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.069 [2024-11-17 02:57:49.402578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.069 qpair failed and we were unable to recover it.
00:37:41.069 [2024-11-17 02:57:49.402761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.069 [2024-11-17 02:57:49.402798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.069 qpair failed and we were unable to recover it.
00:37:41.069 [2024-11-17 02:57:49.402913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.069 [2024-11-17 02:57:49.402958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.069 qpair failed and we were unable to recover it.
00:37:41.069 [2024-11-17 02:57:49.403139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.069 [2024-11-17 02:57:49.403188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.069 qpair failed and we were unable to recover it.
00:37:41.069 [2024-11-17 02:57:49.403311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.069 [2024-11-17 02:57:49.403346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.069 qpair failed and we were unable to recover it. 00:37:41.069 [2024-11-17 02:57:49.403511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.069 [2024-11-17 02:57:49.403545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.069 qpair failed and we were unable to recover it. 00:37:41.069 [2024-11-17 02:57:49.403660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.069 [2024-11-17 02:57:49.403694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.069 qpair failed and we were unable to recover it. 00:37:41.069 [2024-11-17 02:57:49.403849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.069 [2024-11-17 02:57:49.403887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.069 qpair failed and we were unable to recover it. 00:37:41.069 [2024-11-17 02:57:49.404031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.069 [2024-11-17 02:57:49.404069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.069 qpair failed and we were unable to recover it. 
00:37:41.069 [2024-11-17 02:57:49.404217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.069 [2024-11-17 02:57:49.404256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.069 qpair failed and we were unable to recover it. 00:37:41.069 [2024-11-17 02:57:49.404376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.069 [2024-11-17 02:57:49.404417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.069 qpair failed and we were unable to recover it. 00:37:41.069 [2024-11-17 02:57:49.404596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.069 [2024-11-17 02:57:49.404632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.069 qpair failed and we were unable to recover it. 00:37:41.069 [2024-11-17 02:57:49.404782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.069 [2024-11-17 02:57:49.404820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.069 qpair failed and we were unable to recover it. 00:37:41.069 [2024-11-17 02:57:49.404999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.069 [2024-11-17 02:57:49.405036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.069 qpair failed and we were unable to recover it. 
00:37:41.069 [2024-11-17 02:57:49.405181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.069 [2024-11-17 02:57:49.405215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.069 qpair failed and we were unable to recover it. 00:37:41.069 [2024-11-17 02:57:49.405358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.069 [2024-11-17 02:57:49.405391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.069 qpair failed and we were unable to recover it. 00:37:41.069 [2024-11-17 02:57:49.405548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.069 [2024-11-17 02:57:49.405585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.069 qpair failed and we were unable to recover it. 00:37:41.069 [2024-11-17 02:57:49.405703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.069 [2024-11-17 02:57:49.405744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.069 qpair failed and we were unable to recover it. 00:37:41.069 [2024-11-17 02:57:49.405906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.069 [2024-11-17 02:57:49.405946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.069 qpair failed and we were unable to recover it. 
00:37:41.069 [2024-11-17 02:57:49.406074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.069 [2024-11-17 02:57:49.406116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.069 qpair failed and we were unable to recover it. 00:37:41.069 [2024-11-17 02:57:49.406279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.069 [2024-11-17 02:57:49.406313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.069 qpair failed and we were unable to recover it. 00:37:41.069 [2024-11-17 02:57:49.406453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.069 [2024-11-17 02:57:49.406494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.069 qpair failed and we were unable to recover it. 00:37:41.069 [2024-11-17 02:57:49.406662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.069 [2024-11-17 02:57:49.406700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.069 qpair failed and we were unable to recover it. 00:37:41.069 [2024-11-17 02:57:49.406836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.069 [2024-11-17 02:57:49.406874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.069 qpair failed and we were unable to recover it. 
00:37:41.069 [2024-11-17 02:57:49.407016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.069 [2024-11-17 02:57:49.407055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.069 qpair failed and we were unable to recover it. 00:37:41.069 [2024-11-17 02:57:49.407231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.069 [2024-11-17 02:57:49.407266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.069 qpair failed and we were unable to recover it. 00:37:41.069 [2024-11-17 02:57:49.407410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.069 [2024-11-17 02:57:49.407476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.069 qpair failed and we were unable to recover it. 00:37:41.069 [2024-11-17 02:57:49.407635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.069 [2024-11-17 02:57:49.407696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.069 qpair failed and we were unable to recover it. 00:37:41.069 [2024-11-17 02:57:49.407927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.069 [2024-11-17 02:57:49.407964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.069 qpair failed and we were unable to recover it. 
00:37:41.069 [2024-11-17 02:57:49.408092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.069 [2024-11-17 02:57:49.408162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.069 qpair failed and we were unable to recover it. 00:37:41.069 [2024-11-17 02:57:49.408279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.069 [2024-11-17 02:57:49.408312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.069 qpair failed and we were unable to recover it. 00:37:41.069 [2024-11-17 02:57:49.408450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.069 [2024-11-17 02:57:49.408503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.069 qpair failed and we were unable to recover it. 00:37:41.069 [2024-11-17 02:57:49.408655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.069 [2024-11-17 02:57:49.408692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.069 qpair failed and we were unable to recover it. 00:37:41.070 [2024-11-17 02:57:49.408847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.070 [2024-11-17 02:57:49.408897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.070 qpair failed and we were unable to recover it. 
00:37:41.070 [2024-11-17 02:57:49.409073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.070 [2024-11-17 02:57:49.409124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.070 qpair failed and we were unable to recover it. 00:37:41.070 [2024-11-17 02:57:49.409268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.070 [2024-11-17 02:57:49.409303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.070 qpair failed and we were unable to recover it. 00:37:41.070 [2024-11-17 02:57:49.409407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.070 [2024-11-17 02:57:49.409441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.070 qpair failed and we were unable to recover it. 00:37:41.070 [2024-11-17 02:57:49.409590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.070 [2024-11-17 02:57:49.409628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.070 qpair failed and we were unable to recover it. 00:37:41.070 [2024-11-17 02:57:49.409797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.070 [2024-11-17 02:57:49.409860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.070 qpair failed and we were unable to recover it. 
00:37:41.070 [2024-11-17 02:57:49.410000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.070 [2024-11-17 02:57:49.410038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.070 qpair failed and we were unable to recover it. 00:37:41.070 [2024-11-17 02:57:49.410195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.070 [2024-11-17 02:57:49.410251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.070 qpair failed and we were unable to recover it. 00:37:41.070 [2024-11-17 02:57:49.410416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.070 [2024-11-17 02:57:49.410455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.070 qpair failed and we were unable to recover it. 00:37:41.070 [2024-11-17 02:57:49.410607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.070 [2024-11-17 02:57:49.410645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.070 qpair failed and we were unable to recover it. 00:37:41.070 [2024-11-17 02:57:49.410759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.070 [2024-11-17 02:57:49.410795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.070 qpair failed and we were unable to recover it. 
00:37:41.070 [2024-11-17 02:57:49.410953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.070 [2024-11-17 02:57:49.410991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.070 qpair failed and we were unable to recover it. 00:37:41.070 [2024-11-17 02:57:49.411200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.070 [2024-11-17 02:57:49.411235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.070 qpair failed and we were unable to recover it. 00:37:41.070 [2024-11-17 02:57:49.411339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.070 [2024-11-17 02:57:49.411371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.070 qpair failed and we were unable to recover it. 00:37:41.070 [2024-11-17 02:57:49.411506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.070 [2024-11-17 02:57:49.411557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.070 qpair failed and we were unable to recover it. 00:37:41.070 [2024-11-17 02:57:49.411721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.070 [2024-11-17 02:57:49.411774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.070 qpair failed and we were unable to recover it. 
00:37:41.070 [2024-11-17 02:57:49.411892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.070 [2024-11-17 02:57:49.411944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.070 qpair failed and we were unable to recover it. 00:37:41.070 [2024-11-17 02:57:49.412112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.070 [2024-11-17 02:57:49.412147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.070 qpair failed and we were unable to recover it. 00:37:41.070 [2024-11-17 02:57:49.412249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.070 [2024-11-17 02:57:49.412283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.070 qpair failed and we were unable to recover it. 00:37:41.070 [2024-11-17 02:57:49.412452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.070 [2024-11-17 02:57:49.412487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.070 qpair failed and we were unable to recover it. 00:37:41.070 [2024-11-17 02:57:49.412669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.070 [2024-11-17 02:57:49.412706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.070 qpair failed and we were unable to recover it. 
00:37:41.070 [2024-11-17 02:57:49.412859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.070 [2024-11-17 02:57:49.412898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.070 qpair failed and we were unable to recover it. 00:37:41.070 [2024-11-17 02:57:49.413070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.070 [2024-11-17 02:57:49.413119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.070 qpair failed and we were unable to recover it. 00:37:41.070 [2024-11-17 02:57:49.413275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.070 [2024-11-17 02:57:49.413316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.070 qpair failed and we were unable to recover it. 00:37:41.070 [2024-11-17 02:57:49.413437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.070 [2024-11-17 02:57:49.413470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.070 qpair failed and we were unable to recover it. 00:37:41.070 [2024-11-17 02:57:49.413577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.070 [2024-11-17 02:57:49.413610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.070 qpair failed and we were unable to recover it. 
00:37:41.070 [2024-11-17 02:57:49.413774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.070 [2024-11-17 02:57:49.413810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.070 qpair failed and we were unable to recover it. 00:37:41.070 [2024-11-17 02:57:49.413990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.070 [2024-11-17 02:57:49.414030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.070 qpair failed and we were unable to recover it. 00:37:41.070 [2024-11-17 02:57:49.414204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.070 [2024-11-17 02:57:49.414244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.070 qpair failed and we were unable to recover it. 00:37:41.070 [2024-11-17 02:57:49.414382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.070 [2024-11-17 02:57:49.414417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.070 qpair failed and we were unable to recover it. 00:37:41.070 [2024-11-17 02:57:49.414533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.070 [2024-11-17 02:57:49.414567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.070 qpair failed and we were unable to recover it. 
00:37:41.070 [2024-11-17 02:57:49.414700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.070 [2024-11-17 02:57:49.414735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.070 qpair failed and we were unable to recover it. 00:37:41.070 [2024-11-17 02:57:49.414888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.070 [2024-11-17 02:57:49.414926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.070 qpair failed and we were unable to recover it. 00:37:41.070 [2024-11-17 02:57:49.415043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.070 [2024-11-17 02:57:49.415087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.070 qpair failed and we were unable to recover it. 00:37:41.070 [2024-11-17 02:57:49.415248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.070 [2024-11-17 02:57:49.415283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.070 qpair failed and we were unable to recover it. 00:37:41.070 [2024-11-17 02:57:49.415417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.070 [2024-11-17 02:57:49.415450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.070 qpair failed and we were unable to recover it. 
00:37:41.070 [2024-11-17 02:57:49.415587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.070 [2024-11-17 02:57:49.415622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.070 qpair failed and we were unable to recover it. 00:37:41.070 [2024-11-17 02:57:49.415763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.070 [2024-11-17 02:57:49.415797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.070 qpair failed and we were unable to recover it. 00:37:41.070 [2024-11-17 02:57:49.415961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.070 [2024-11-17 02:57:49.416020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.070 qpair failed and we were unable to recover it. 00:37:41.070 [2024-11-17 02:57:49.416170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.070 [2024-11-17 02:57:49.416205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.070 qpair failed and we were unable to recover it. 00:37:41.070 [2024-11-17 02:57:49.416367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.070 [2024-11-17 02:57:49.416426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.070 qpair failed and we were unable to recover it. 
00:37:41.070 [2024-11-17 02:57:49.416605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.070 [2024-11-17 02:57:49.416659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.070 qpair failed and we were unable to recover it. 00:37:41.070 [2024-11-17 02:57:49.416815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.070 [2024-11-17 02:57:49.416850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.070 qpair failed and we were unable to recover it. 00:37:41.070 [2024-11-17 02:57:49.416997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.070 [2024-11-17 02:57:49.417041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.070 qpair failed and we were unable to recover it. 00:37:41.070 [2024-11-17 02:57:49.417174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.070 [2024-11-17 02:57:49.417209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.070 qpair failed and we were unable to recover it. 00:37:41.070 [2024-11-17 02:57:49.417340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.071 [2024-11-17 02:57:49.417373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.071 qpair failed and we were unable to recover it. 
00:37:41.071 [2024-11-17 02:57:49.417471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.071 [2024-11-17 02:57:49.417523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.071 qpair failed and we were unable to recover it. 00:37:41.071 [2024-11-17 02:57:49.417684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.071 [2024-11-17 02:57:49.417722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.071 qpair failed and we were unable to recover it. 00:37:41.071 [2024-11-17 02:57:49.417867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.071 [2024-11-17 02:57:49.417920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.071 qpair failed and we were unable to recover it. 00:37:41.071 [2024-11-17 02:57:49.418091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.071 [2024-11-17 02:57:49.418151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.071 qpair failed and we were unable to recover it. 00:37:41.071 [2024-11-17 02:57:49.418287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.071 [2024-11-17 02:57:49.418320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.071 qpair failed and we were unable to recover it. 
00:37:41.071 [2024-11-17 02:57:49.418456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.071 [2024-11-17 02:57:49.418500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.071 qpair failed and we were unable to recover it. 00:37:41.071 [2024-11-17 02:57:49.418652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.071 [2024-11-17 02:57:49.418689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.071 qpair failed and we were unable to recover it. 00:37:41.071 [2024-11-17 02:57:49.418863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.071 [2024-11-17 02:57:49.418900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.071 qpair failed and we were unable to recover it. 00:37:41.071 [2024-11-17 02:57:49.419055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.071 [2024-11-17 02:57:49.419087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.071 qpair failed and we were unable to recover it. 00:37:41.071 [2024-11-17 02:57:49.419262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.071 [2024-11-17 02:57:49.419321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.071 qpair failed and we were unable to recover it. 
00:37:41.071 [2024-11-17 02:57:49.419499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.071 [2024-11-17 02:57:49.419558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.071 qpair failed and we were unable to recover it.
00:37:41.071 [2024-11-17 02:57:49.419729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.071 [2024-11-17 02:57:49.419770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.071 qpair failed and we were unable to recover it.
00:37:41.071 [2024-11-17 02:57:49.419925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.071 [2024-11-17 02:57:49.419963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.071 qpair failed and we were unable to recover it.
00:37:41.071 [2024-11-17 02:57:49.420147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.071 [2024-11-17 02:57:49.420196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.071 qpair failed and we were unable to recover it.
00:37:41.071 [2024-11-17 02:57:49.420342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.071 [2024-11-17 02:57:49.420379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.071 qpair failed and we were unable to recover it.
00:37:41.071 [2024-11-17 02:57:49.420522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.071 [2024-11-17 02:57:49.420557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.071 qpair failed and we were unable to recover it.
00:37:41.071 [2024-11-17 02:57:49.420679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.071 [2024-11-17 02:57:49.420717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.071 qpair failed and we were unable to recover it.
00:37:41.071 [2024-11-17 02:57:49.420834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.071 [2024-11-17 02:57:49.420873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.071 qpair failed and we were unable to recover it.
00:37:41.071 [2024-11-17 02:57:49.420983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.071 [2024-11-17 02:57:49.421021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.071 qpair failed and we were unable to recover it.
00:37:41.071 [2024-11-17 02:57:49.421175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.071 [2024-11-17 02:57:49.421210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.071 qpair failed and we were unable to recover it.
00:37:41.071 [2024-11-17 02:57:49.421381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.071 [2024-11-17 02:57:49.421416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.071 qpair failed and we were unable to recover it.
00:37:41.071 [2024-11-17 02:57:49.421547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.071 [2024-11-17 02:57:49.421581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.071 qpair failed and we were unable to recover it.
00:37:41.071 [2024-11-17 02:57:49.421746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.071 [2024-11-17 02:57:49.421785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.071 qpair failed and we were unable to recover it.
00:37:41.071 [2024-11-17 02:57:49.421939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.071 [2024-11-17 02:57:49.421977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.071 qpair failed and we were unable to recover it.
00:37:41.071 [2024-11-17 02:57:49.422176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.071 [2024-11-17 02:57:49.422211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.071 qpair failed and we were unable to recover it.
00:37:41.071 [2024-11-17 02:57:49.422345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.071 [2024-11-17 02:57:49.422379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.071 qpair failed and we were unable to recover it.
00:37:41.071 [2024-11-17 02:57:49.422577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.071 [2024-11-17 02:57:49.422647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.071 qpair failed and we were unable to recover it.
00:37:41.071 [2024-11-17 02:57:49.422759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.071 [2024-11-17 02:57:49.422796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.071 qpair failed and we were unable to recover it.
00:37:41.071 [2024-11-17 02:57:49.422981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.071 [2024-11-17 02:57:49.423033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.071 qpair failed and we were unable to recover it.
00:37:41.071 [2024-11-17 02:57:49.423176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.071 [2024-11-17 02:57:49.423212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.071 qpair failed and we were unable to recover it.
00:37:41.071 [2024-11-17 02:57:49.423314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.071 [2024-11-17 02:57:49.423369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.071 qpair failed and we were unable to recover it.
00:37:41.071 [2024-11-17 02:57:49.423522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.071 [2024-11-17 02:57:49.423560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.071 qpair failed and we were unable to recover it.
00:37:41.071 [2024-11-17 02:57:49.423680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.071 [2024-11-17 02:57:49.423718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.071 qpair failed and we were unable to recover it.
00:37:41.071 [2024-11-17 02:57:49.423869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.071 [2024-11-17 02:57:49.423908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.071 qpair failed and we were unable to recover it.
00:37:41.071 [2024-11-17 02:57:49.424061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.071 [2024-11-17 02:57:49.424103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.071 qpair failed and we were unable to recover it.
00:37:41.071 [2024-11-17 02:57:49.424263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.071 [2024-11-17 02:57:49.424297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.071 qpair failed and we were unable to recover it.
00:37:41.071 [2024-11-17 02:57:49.424441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.071 [2024-11-17 02:57:49.424476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.071 qpair failed and we were unable to recover it.
00:37:41.071 [2024-11-17 02:57:49.424659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.071 [2024-11-17 02:57:49.424697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.071 qpair failed and we were unable to recover it.
00:37:41.071 [2024-11-17 02:57:49.424836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.071 [2024-11-17 02:57:49.424874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.071 qpair failed and we were unable to recover it.
00:37:41.071 [2024-11-17 02:57:49.425017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.071 [2024-11-17 02:57:49.425055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.071 qpair failed and we were unable to recover it.
00:37:41.071 [2024-11-17 02:57:49.425203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.071 [2024-11-17 02:57:49.425242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.071 qpair failed and we were unable to recover it.
00:37:41.071 [2024-11-17 02:57:49.425376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.071 [2024-11-17 02:57:49.425443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.071 qpair failed and we were unable to recover it.
00:37:41.072 [2024-11-17 02:57:49.425590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.072 [2024-11-17 02:57:49.425631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.072 qpair failed and we were unable to recover it.
00:37:41.072 [2024-11-17 02:57:49.425824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.072 [2024-11-17 02:57:49.425864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.072 qpair failed and we were unable to recover it.
00:37:41.072 [2024-11-17 02:57:49.426018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.072 [2024-11-17 02:57:49.426053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.072 qpair failed and we were unable to recover it.
00:37:41.072 [2024-11-17 02:57:49.426198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.072 [2024-11-17 02:57:49.426233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.072 qpair failed and we were unable to recover it.
00:37:41.072 [2024-11-17 02:57:49.426339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.072 [2024-11-17 02:57:49.426373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.072 qpair failed and we were unable to recover it.
00:37:41.072 [2024-11-17 02:57:49.426573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.072 [2024-11-17 02:57:49.426612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.072 qpair failed and we were unable to recover it.
00:37:41.072 [2024-11-17 02:57:49.426773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.072 [2024-11-17 02:57:49.426810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.072 qpair failed and we were unable to recover it.
00:37:41.072 [2024-11-17 02:57:49.426932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.072 [2024-11-17 02:57:49.426978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.072 qpair failed and we were unable to recover it.
00:37:41.072 [2024-11-17 02:57:49.427145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.072 [2024-11-17 02:57:49.427189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.072 qpair failed and we were unable to recover it.
00:37:41.072 [2024-11-17 02:57:49.427332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.072 [2024-11-17 02:57:49.427366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.072 qpair failed and we were unable to recover it.
00:37:41.072 [2024-11-17 02:57:49.427533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.072 [2024-11-17 02:57:49.427587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.072 qpair failed and we were unable to recover it.
00:37:41.072 [2024-11-17 02:57:49.427749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.072 [2024-11-17 02:57:49.427805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.072 qpair failed and we were unable to recover it.
00:37:41.072 [2024-11-17 02:57:49.427912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.072 [2024-11-17 02:57:49.427947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.072 qpair failed and we were unable to recover it.
00:37:41.072 [2024-11-17 02:57:49.428084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.072 [2024-11-17 02:57:49.428128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.072 qpair failed and we were unable to recover it.
00:37:41.072 [2024-11-17 02:57:49.428243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.072 [2024-11-17 02:57:49.428299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.072 qpair failed and we were unable to recover it.
00:37:41.072 [2024-11-17 02:57:49.428451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.072 [2024-11-17 02:57:49.428489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.072 qpair failed and we were unable to recover it.
00:37:41.072 [2024-11-17 02:57:49.428759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.072 [2024-11-17 02:57:49.428817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.072 qpair failed and we were unable to recover it.
00:37:41.072 [2024-11-17 02:57:49.428934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.072 [2024-11-17 02:57:49.428970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.072 qpair failed and we were unable to recover it.
00:37:41.072 [2024-11-17 02:57:49.429144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.072 [2024-11-17 02:57:49.429197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.072 qpair failed and we were unable to recover it.
00:37:41.072 [2024-11-17 02:57:49.429351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.072 [2024-11-17 02:57:49.429386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.072 qpair failed and we were unable to recover it.
00:37:41.072 [2024-11-17 02:57:49.429563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.072 [2024-11-17 02:57:49.429603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.072 qpair failed and we were unable to recover it.
00:37:41.072 [2024-11-17 02:57:49.429739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.072 [2024-11-17 02:57:49.429775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.072 qpair failed and we were unable to recover it.
00:37:41.072 [2024-11-17 02:57:49.429897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.072 [2024-11-17 02:57:49.429953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.072 qpair failed and we were unable to recover it.
00:37:41.072 [2024-11-17 02:57:49.430113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.072 [2024-11-17 02:57:49.430163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.072 qpair failed and we were unable to recover it.
00:37:41.072 [2024-11-17 02:57:49.430284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.072 [2024-11-17 02:57:49.430321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.072 qpair failed and we were unable to recover it.
00:37:41.072 [2024-11-17 02:57:49.430492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.072 [2024-11-17 02:57:49.430532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.072 qpair failed and we were unable to recover it.
00:37:41.072 [2024-11-17 02:57:49.430652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.072 [2024-11-17 02:57:49.430690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.072 qpair failed and we were unable to recover it.
00:37:41.072 [2024-11-17 02:57:49.430832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.072 [2024-11-17 02:57:49.430868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.072 qpair failed and we were unable to recover it.
00:37:41.072 [2024-11-17 02:57:49.431023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.072 [2024-11-17 02:57:49.431057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.072 qpair failed and we were unable to recover it.
00:37:41.072 [2024-11-17 02:57:49.431199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.072 [2024-11-17 02:57:49.431232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.072 qpair failed and we were unable to recover it.
00:37:41.072 [2024-11-17 02:57:49.431327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.072 [2024-11-17 02:57:49.431359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.072 qpair failed and we were unable to recover it.
00:37:41.072 [2024-11-17 02:57:49.431473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.072 [2024-11-17 02:57:49.431510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.072 qpair failed and we were unable to recover it.
00:37:41.072 [2024-11-17 02:57:49.431631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.072 [2024-11-17 02:57:49.431668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.072 qpair failed and we were unable to recover it.
00:37:41.072 [2024-11-17 02:57:49.431786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.072 [2024-11-17 02:57:49.431822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.072 qpair failed and we were unable to recover it.
00:37:41.072 [2024-11-17 02:57:49.432022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.072 [2024-11-17 02:57:49.432058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.072 qpair failed and we were unable to recover it.
00:37:41.072 [2024-11-17 02:57:49.432175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.072 [2024-11-17 02:57:49.432212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.072 qpair failed and we were unable to recover it.
00:37:41.072 [2024-11-17 02:57:49.432332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.072 [2024-11-17 02:57:49.432370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.072 qpair failed and we were unable to recover it.
00:37:41.072 [2024-11-17 02:57:49.432500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.072 [2024-11-17 02:57:49.432539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.072 qpair failed and we were unable to recover it.
00:37:41.072 [2024-11-17 02:57:49.432685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.072 [2024-11-17 02:57:49.432723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.072 qpair failed and we were unable to recover it.
00:37:41.072 [2024-11-17 02:57:49.432856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.072 [2024-11-17 02:57:49.432900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.072 qpair failed and we were unable to recover it.
00:37:41.072 [2024-11-17 02:57:49.433086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.072 [2024-11-17 02:57:49.433149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.072 qpair failed and we were unable to recover it.
00:37:41.072 [2024-11-17 02:57:49.433257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.072 [2024-11-17 02:57:49.433292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.072 qpair failed and we were unable to recover it.
00:37:41.072 [2024-11-17 02:57:49.433399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.072 [2024-11-17 02:57:49.433434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.072 qpair failed and we were unable to recover it.
00:37:41.072 [2024-11-17 02:57:49.433567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.072 [2024-11-17 02:57:49.433601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.072 qpair failed and we were unable to recover it.
00:37:41.072 [2024-11-17 02:57:49.433708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.072 [2024-11-17 02:57:49.433745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.072 qpair failed and we were unable to recover it.
00:37:41.073 [2024-11-17 02:57:49.433905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.073 [2024-11-17 02:57:49.433959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.073 qpair failed and we were unable to recover it.
00:37:41.073 [2024-11-17 02:57:49.434115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.073 [2024-11-17 02:57:49.434162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.073 qpair failed and we were unable to recover it.
00:37:41.073 [2024-11-17 02:57:49.434340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.073 [2024-11-17 02:57:49.434380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.073 qpair failed and we were unable to recover it.
00:37:41.073 [2024-11-17 02:57:49.434503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.073 [2024-11-17 02:57:49.434538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.073 qpair failed and we were unable to recover it.
00:37:41.073 [2024-11-17 02:57:49.434660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.073 [2024-11-17 02:57:49.434698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.073 qpair failed and we were unable to recover it.
00:37:41.073 [2024-11-17 02:57:49.434848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.073 [2024-11-17 02:57:49.434899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.073 qpair failed and we were unable to recover it.
00:37:41.073 [2024-11-17 02:57:49.435031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.073 [2024-11-17 02:57:49.435069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.073 qpair failed and we were unable to recover it.
00:37:41.073 [2024-11-17 02:57:49.435261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.073 [2024-11-17 02:57:49.435318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.073 qpair failed and we were unable to recover it.
00:37:41.073 [2024-11-17 02:57:49.435474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.073 [2024-11-17 02:57:49.435529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.073 qpair failed and we were unable to recover it.
00:37:41.073 [2024-11-17 02:57:49.435712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.073 [2024-11-17 02:57:49.435764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.073 qpair failed and we were unable to recover it.
00:37:41.073 [2024-11-17 02:57:49.435899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.073 [2024-11-17 02:57:49.435936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.073 qpair failed and we were unable to recover it.
00:37:41.073 [2024-11-17 02:57:49.436111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.073 [2024-11-17 02:57:49.436151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.073 qpair failed and we were unable to recover it.
00:37:41.073 [2024-11-17 02:57:49.436312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.073 [2024-11-17 02:57:49.436363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.073 qpair failed and we were unable to recover it.
00:37:41.073 [2024-11-17 02:57:49.436514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.073 [2024-11-17 02:57:49.436566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.073 qpair failed and we were unable to recover it.
00:37:41.073 [2024-11-17 02:57:49.436696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.073 [2024-11-17 02:57:49.436750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.073 qpair failed and we were unable to recover it.
00:37:41.073 [2024-11-17 02:57:49.436878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.073 [2024-11-17 02:57:49.436911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.073 qpair failed and we were unable to recover it.
00:37:41.073 [2024-11-17 02:57:49.437044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.073 [2024-11-17 02:57:49.437078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.073 qpair failed and we were unable to recover it.
00:37:41.073 [2024-11-17 02:57:49.437239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.073 [2024-11-17 02:57:49.437288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.073 qpair failed and we were unable to recover it.
00:37:41.073 [2024-11-17 02:57:49.437427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.073 [2024-11-17 02:57:49.437474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.073 qpair failed and we were unable to recover it.
00:37:41.073 [2024-11-17 02:57:49.437621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.073 [2024-11-17 02:57:49.437658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.073 qpair failed and we were unable to recover it.
00:37:41.073 [2024-11-17 02:57:49.437762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.073 [2024-11-17 02:57:49.437797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.073 qpair failed and we were unable to recover it.
00:37:41.073 [2024-11-17 02:57:49.437931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.073 [2024-11-17 02:57:49.437965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.073 qpair failed and we were unable to recover it.
00:37:41.073 [2024-11-17 02:57:49.438127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.073 [2024-11-17 02:57:49.438162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.073 qpair failed and we were unable to recover it.
00:37:41.073 [2024-11-17 02:57:49.438291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.073 [2024-11-17 02:57:49.438346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.073 qpair failed and we were unable to recover it.
00:37:41.073 [2024-11-17 02:57:49.438542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.073 [2024-11-17 02:57:49.438593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.073 qpair failed and we were unable to recover it.
00:37:41.073 [2024-11-17 02:57:49.438750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.073 [2024-11-17 02:57:49.438803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.073 qpair failed and we were unable to recover it.
00:37:41.073 [2024-11-17 02:57:49.438963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.073 [2024-11-17 02:57:49.438997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.073 qpair failed and we were unable to recover it.
00:37:41.073 [2024-11-17 02:57:49.439141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.073 [2024-11-17 02:57:49.439180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.073 qpair failed and we were unable to recover it.
00:37:41.073 [2024-11-17 02:57:49.439335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.073 [2024-11-17 02:57:49.439383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.073 qpair failed and we were unable to recover it.
00:37:41.073 [2024-11-17 02:57:49.439557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.073 [2024-11-17 02:57:49.439594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.073 qpair failed and we were unable to recover it.
00:37:41.073 [2024-11-17 02:57:49.439781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.073 [2024-11-17 02:57:49.439840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.073 qpair failed and we were unable to recover it.
00:37:41.073 [2024-11-17 02:57:49.439980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.073 [2024-11-17 02:57:49.440015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.073 qpair failed and we were unable to recover it.
00:37:41.073 [2024-11-17 02:57:49.440186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.073 [2024-11-17 02:57:49.440241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.073 qpair failed and we were unable to recover it.
00:37:41.073 [2024-11-17 02:57:49.440394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.073 [2024-11-17 02:57:49.440460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.073 qpair failed and we were unable to recover it.
00:37:41.073 [2024-11-17 02:57:49.440587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.073 [2024-11-17 02:57:49.440643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.073 qpair failed and we were unable to recover it.
00:37:41.073 [2024-11-17 02:57:49.440783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.073 [2024-11-17 02:57:49.440817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.073 qpair failed and we were unable to recover it.
00:37:41.073 [2024-11-17 02:57:49.440985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.073 [2024-11-17 02:57:49.441021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.073 qpair failed and we were unable to recover it.
00:37:41.073 [2024-11-17 02:57:49.441185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.073 [2024-11-17 02:57:49.441239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.073 qpair failed and we were unable to recover it.
00:37:41.073 [2024-11-17 02:57:49.441392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.073 [2024-11-17 02:57:49.441445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.073 qpair failed and we were unable to recover it. 00:37:41.073 [2024-11-17 02:57:49.441682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.073 [2024-11-17 02:57:49.441741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.073 qpair failed and we were unable to recover it. 00:37:41.073 [2024-11-17 02:57:49.441907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.073 [2024-11-17 02:57:49.441941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.073 qpair failed and we were unable to recover it. 00:37:41.073 [2024-11-17 02:57:49.442093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.073 [2024-11-17 02:57:49.442153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.073 qpair failed and we were unable to recover it. 00:37:41.073 [2024-11-17 02:57:49.442303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.073 [2024-11-17 02:57:49.442361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.073 qpair failed and we were unable to recover it. 
00:37:41.073 [2024-11-17 02:57:49.442581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.073 [2024-11-17 02:57:49.442615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.073 qpair failed and we were unable to recover it.
00:37:41.073 [2024-11-17 02:57:49.442783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.074 [2024-11-17 02:57:49.442841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.074 qpair failed and we were unable to recover it.
00:37:41.074 [2024-11-17 02:57:49.442941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.074 [2024-11-17 02:57:49.442975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.074 qpair failed and we were unable to recover it.
00:37:41.074 [2024-11-17 02:57:49.443213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.074 [2024-11-17 02:57:49.443267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.074 qpair failed and we were unable to recover it.
00:37:41.074 [2024-11-17 02:57:49.443392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.074 [2024-11-17 02:57:49.443445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.074 qpair failed and we were unable to recover it.
00:37:41.074 [2024-11-17 02:57:49.443592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.074 [2024-11-17 02:57:49.443630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.074 qpair failed and we were unable to recover it.
00:37:41.074 [2024-11-17 02:57:49.443737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.074 [2024-11-17 02:57:49.443775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.074 qpair failed and we were unable to recover it.
00:37:41.074 [2024-11-17 02:57:49.443922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.074 [2024-11-17 02:57:49.443976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.074 qpair failed and we were unable to recover it.
00:37:41.074 [2024-11-17 02:57:49.444189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.074 [2024-11-17 02:57:49.444237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.074 qpair failed and we were unable to recover it.
00:37:41.074 [2024-11-17 02:57:49.444389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.074 [2024-11-17 02:57:49.444442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.074 qpair failed and we were unable to recover it.
00:37:41.074 [2024-11-17 02:57:49.444656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.074 [2024-11-17 02:57:49.444715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.074 qpair failed and we were unable to recover it.
00:37:41.074 [2024-11-17 02:57:49.444817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.074 [2024-11-17 02:57:49.444852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.074 qpair failed and we were unable to recover it.
00:37:41.074 [2024-11-17 02:57:49.444963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.074 [2024-11-17 02:57:49.444996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.074 qpair failed and we were unable to recover it.
00:37:41.074 [2024-11-17 02:57:49.445162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.074 [2024-11-17 02:57:49.445203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.074 qpair failed and we were unable to recover it.
00:37:41.074 [2024-11-17 02:57:49.445336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.074 [2024-11-17 02:57:49.445391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.074 qpair failed and we were unable to recover it.
00:37:41.074 [2024-11-17 02:57:49.445645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.074 [2024-11-17 02:57:49.445706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.074 qpair failed and we were unable to recover it.
00:37:41.074 [2024-11-17 02:57:49.445850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.074 [2024-11-17 02:57:49.445901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.074 qpair failed and we were unable to recover it.
00:37:41.074 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3142032 Killed "${NVMF_APP[@]}" "$@"
00:37:41.074 [2024-11-17 02:57:49.446053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.074 [2024-11-17 02:57:49.446088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.074 qpair failed and we were unable to recover it.
00:37:41.074 [2024-11-17 02:57:49.446243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.074 [2024-11-17 02:57:49.446289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.074 qpair failed and we were unable to recover it.
00:37:41.074 02:57:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:37:41.074 [2024-11-17 02:57:49.446521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.074 [2024-11-17 02:57:49.446581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.074 qpair failed and we were unable to recover it.
00:37:41.074 02:57:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:37:41.074 [2024-11-17 02:57:49.446731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.074 [2024-11-17 02:57:49.446793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.074 qpair failed and we were unable to recover it.
00:37:41.074 02:57:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:37:41.074 [2024-11-17 02:57:49.446915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.074 [2024-11-17 02:57:49.446954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.074 qpair failed and we were unable to recover it.
00:37:41.074 02:57:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:37:41.074 [2024-11-17 02:57:49.447124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.074 [2024-11-17 02:57:49.447162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.074 02:57:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:37:41.074 qpair failed and we were unable to recover it.
00:37:41.074 [2024-11-17 02:57:49.447327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.074 [2024-11-17 02:57:49.447401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.074 qpair failed and we were unable to recover it.
00:37:41.074 [2024-11-17 02:57:49.447574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.074 [2024-11-17 02:57:49.447627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.074 qpair failed and we were unable to recover it.
00:37:41.074 [2024-11-17 02:57:49.447784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.074 [2024-11-17 02:57:49.447846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.074 qpair failed and we were unable to recover it.
00:37:41.074 [2024-11-17 02:57:49.447970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.074 [2024-11-17 02:57:49.448002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.074 qpair failed and we were unable to recover it.
00:37:41.074 [2024-11-17 02:57:49.448131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.074 [2024-11-17 02:57:49.448165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.074 qpair failed and we were unable to recover it.
00:37:41.074 [2024-11-17 02:57:49.448309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.074 [2024-11-17 02:57:49.448343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.074 qpair failed and we were unable to recover it.
00:37:41.074 [2024-11-17 02:57:49.448490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.074 [2024-11-17 02:57:49.448526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.074 qpair failed and we were unable to recover it.
00:37:41.074 [2024-11-17 02:57:49.448656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.074 [2024-11-17 02:57:49.448688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.074 qpair failed and we were unable to recover it.
00:37:41.074 [2024-11-17 02:57:49.448849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.074 [2024-11-17 02:57:49.448885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.074 qpair failed and we were unable to recover it.
00:37:41.074 [2024-11-17 02:57:49.449035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.074 [2024-11-17 02:57:49.449068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.074 qpair failed and we were unable to recover it.
00:37:41.074 [2024-11-17 02:57:49.449226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.074 [2024-11-17 02:57:49.449275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.074 qpair failed and we were unable to recover it.
00:37:41.074 [2024-11-17 02:57:49.449442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.074 [2024-11-17 02:57:49.449482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.074 qpair failed and we were unable to recover it.
00:37:41.074 [2024-11-17 02:57:49.449625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.074 [2024-11-17 02:57:49.449663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.074 qpair failed and we were unable to recover it.
00:37:41.074 [2024-11-17 02:57:49.449804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.074 [2024-11-17 02:57:49.449842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.074 qpair failed and we were unable to recover it.
00:37:41.074 [2024-11-17 02:57:49.449973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.074 [2024-11-17 02:57:49.450010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.074 qpair failed and we were unable to recover it.
00:37:41.074 [2024-11-17 02:57:49.450185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.074 [2024-11-17 02:57:49.450234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.074 qpair failed and we were unable to recover it.
00:37:41.074 [2024-11-17 02:57:49.450367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.074 [2024-11-17 02:57:49.450409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.074 qpair failed and we were unable to recover it.
00:37:41.074 [2024-11-17 02:57:49.450517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.074 [2024-11-17 02:57:49.450553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.074 qpair failed and we were unable to recover it.
00:37:41.074 [2024-11-17 02:57:49.450749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.074 02:57:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3142711
00:37:41.074 [2024-11-17 02:57:49.450805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.074 qpair failed and we were unable to recover it.
00:37:41.074 02:57:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:37:41.074 02:57:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3142711
00:37:41.074 [2024-11-17 02:57:49.450959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.074 [2024-11-17 02:57:49.451002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.074 qpair failed and we were unable to recover it.
00:37:41.074 02:57:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3142711 ']'
00:37:41.074 [2024-11-17 02:57:49.451161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.074 [2024-11-17 02:57:49.451210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.074 qpair failed and we were unable to recover it.
00:37:41.074 02:57:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:37:41.074 [2024-11-17 02:57:49.451326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.074 [2024-11-17 02:57:49.451361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.074 qpair failed and we were unable to recover it.
00:37:41.074 02:57:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:37:41.075 02:57:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:37:41.075 [2024-11-17 02:57:49.451536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:37:41.075 [2024-11-17 02:57:49.451593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.075 qpair failed and we were unable to recover it.
00:37:41.075 02:57:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:37:41.075 [2024-11-17 02:57:49.451756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.075 [2024-11-17 02:57:49.451812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.075 02:57:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:37:41.075 qpair failed and we were unable to recover it.
00:37:41.075 [2024-11-17 02:57:49.451923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.075 [2024-11-17 02:57:49.451958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.075 qpair failed and we were unable to recover it.
00:37:41.075 [2024-11-17 02:57:49.452112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.075 [2024-11-17 02:57:49.452147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.075 qpair failed and we were unable to recover it.
00:37:41.075 [2024-11-17 02:57:49.452280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.075 [2024-11-17 02:57:49.452320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.075 qpair failed and we were unable to recover it.
00:37:41.075 [2024-11-17 02:57:49.452491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.075 [2024-11-17 02:57:49.452553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.075 qpair failed and we were unable to recover it.
00:37:41.075 [2024-11-17 02:57:49.452755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.075 [2024-11-17 02:57:49.452810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.075 qpair failed and we were unable to recover it.
00:37:41.075 [2024-11-17 02:57:49.452970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.075 [2024-11-17 02:57:49.453004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.075 qpair failed and we were unable to recover it.
00:37:41.075 [2024-11-17 02:57:49.453122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.075 [2024-11-17 02:57:49.453161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.075 qpair failed and we were unable to recover it.
00:37:41.075 [2024-11-17 02:57:49.453293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.075 [2024-11-17 02:57:49.453342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.075 qpair failed and we were unable to recover it.
00:37:41.075 [2024-11-17 02:57:49.453549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.075 [2024-11-17 02:57:49.453605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.075 qpair failed and we were unable to recover it.
00:37:41.075 [2024-11-17 02:57:49.453763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.075 [2024-11-17 02:57:49.453818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.075 qpair failed and we were unable to recover it.
00:37:41.075 [2024-11-17 02:57:49.453950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.075 [2024-11-17 02:57:49.453984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.075 qpair failed and we were unable to recover it.
00:37:41.075 [2024-11-17 02:57:49.454146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.075 [2024-11-17 02:57:49.454181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.075 qpair failed and we were unable to recover it.
00:37:41.075 [2024-11-17 02:57:49.454312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.075 [2024-11-17 02:57:49.454369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.075 qpair failed and we were unable to recover it.
00:37:41.075 [2024-11-17 02:57:49.454525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.075 [2024-11-17 02:57:49.454578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.075 qpair failed and we were unable to recover it.
00:37:41.075 [2024-11-17 02:57:49.454749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.075 [2024-11-17 02:57:49.454783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.075 qpair failed and we were unable to recover it.
00:37:41.075 [2024-11-17 02:57:49.454918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.075 [2024-11-17 02:57:49.454952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.075 qpair failed and we were unable to recover it.
00:37:41.075 [2024-11-17 02:57:49.455053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.075 [2024-11-17 02:57:49.455091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.075 qpair failed and we were unable to recover it.
00:37:41.075 [2024-11-17 02:57:49.455245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.075 [2024-11-17 02:57:49.455280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.075 qpair failed and we were unable to recover it.
00:37:41.075 [2024-11-17 02:57:49.455427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.075 [2024-11-17 02:57:49.455461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.075 qpair failed and we were unable to recover it.
00:37:41.075 [2024-11-17 02:57:49.455568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.075 [2024-11-17 02:57:49.455603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.075 qpair failed and we were unable to recover it.
00:37:41.075 [2024-11-17 02:57:49.455761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.075 [2024-11-17 02:57:49.455815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.075 qpair failed and we were unable to recover it.
00:37:41.075 [2024-11-17 02:57:49.455964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.075 [2024-11-17 02:57:49.456012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.075 qpair failed and we were unable to recover it.
00:37:41.075 [2024-11-17 02:57:49.456201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.075 [2024-11-17 02:57:49.456238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.075 qpair failed and we were unable to recover it.
00:37:41.075 [2024-11-17 02:57:49.456353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.075 [2024-11-17 02:57:49.456408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.075 qpair failed and we were unable to recover it.
00:37:41.075 [2024-11-17 02:57:49.456594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.075 [2024-11-17 02:57:49.456649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.075 qpair failed and we were unable to recover it.
00:37:41.075 [2024-11-17 02:57:49.456825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.075 [2024-11-17 02:57:49.456878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.075 qpair failed and we were unable to recover it.
00:37:41.075 [2024-11-17 02:57:49.457021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.075 [2024-11-17 02:57:49.457056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.075 qpair failed and we were unable to recover it.
00:37:41.075 [2024-11-17 02:57:49.457194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.075 [2024-11-17 02:57:49.457227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.075 qpair failed and we were unable to recover it.
00:37:41.075 [2024-11-17 02:57:49.457358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.075 [2024-11-17 02:57:49.457390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.075 qpair failed and we were unable to recover it.
00:37:41.075 [2024-11-17 02:57:49.457570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.075 [2024-11-17 02:57:49.457607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.075 qpair failed and we were unable to recover it.
00:37:41.075 [2024-11-17 02:57:49.457809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.075 [2024-11-17 02:57:49.457846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.075 qpair failed and we were unable to recover it.
00:37:41.075 [2024-11-17 02:57:49.457984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.075 [2024-11-17 02:57:49.458017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.075 qpair failed and we were unable to recover it.
00:37:41.075 [2024-11-17 02:57:49.458162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.075 [2024-11-17 02:57:49.458195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.075 qpair failed and we were unable to recover it.
00:37:41.075 [2024-11-17 02:57:49.458312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.075 [2024-11-17 02:57:49.458344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.075 qpair failed and we were unable to recover it.
00:37:41.075 [2024-11-17 02:57:49.458478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.075 [2024-11-17 02:57:49.458510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.075 qpair failed and we were unable to recover it.
00:37:41.076 [2024-11-17 02:57:49.458607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.076 [2024-11-17 02:57:49.458659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.076 qpair failed and we were unable to recover it. 00:37:41.076 [2024-11-17 02:57:49.458825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.076 [2024-11-17 02:57:49.458891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.076 qpair failed and we were unable to recover it. 00:37:41.076 [2024-11-17 02:57:49.459053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.076 [2024-11-17 02:57:49.459108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.076 qpair failed and we were unable to recover it. 00:37:41.076 [2024-11-17 02:57:49.459233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.076 [2024-11-17 02:57:49.459276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.076 qpair failed and we were unable to recover it. 00:37:41.076 [2024-11-17 02:57:49.459422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.076 [2024-11-17 02:57:49.459468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.076 qpair failed and we were unable to recover it. 
00:37:41.076 [2024-11-17 02:57:49.459651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.076 [2024-11-17 02:57:49.459720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.076 qpair failed and we were unable to recover it. 00:37:41.076 [2024-11-17 02:57:49.459881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.076 [2024-11-17 02:57:49.459936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.076 qpair failed and we were unable to recover it. 00:37:41.076 [2024-11-17 02:57:49.460090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.076 [2024-11-17 02:57:49.460154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.076 qpair failed and we were unable to recover it. 00:37:41.076 [2024-11-17 02:57:49.460264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.076 [2024-11-17 02:57:49.460298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.076 qpair failed and we were unable to recover it. 00:37:41.076 [2024-11-17 02:57:49.460432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.076 [2024-11-17 02:57:49.460482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.076 qpair failed and we were unable to recover it. 
00:37:41.076 [2024-11-17 02:57:49.460590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.076 [2024-11-17 02:57:49.460642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.076 qpair failed and we were unable to recover it. 00:37:41.076 [2024-11-17 02:57:49.460878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.076 [2024-11-17 02:57:49.460915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.076 qpair failed and we were unable to recover it. 00:37:41.076 [2024-11-17 02:57:49.461048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.076 [2024-11-17 02:57:49.461084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.076 qpair failed and we were unable to recover it. 00:37:41.076 [2024-11-17 02:57:49.461200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.076 [2024-11-17 02:57:49.461232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.076 qpair failed and we were unable to recover it. 00:37:41.076 [2024-11-17 02:57:49.461358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.076 [2024-11-17 02:57:49.461394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.076 qpair failed and we were unable to recover it. 
00:37:41.076 [2024-11-17 02:57:49.461564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.076 [2024-11-17 02:57:49.461601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.076 qpair failed and we were unable to recover it. 00:37:41.076 [2024-11-17 02:57:49.461709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.076 [2024-11-17 02:57:49.461744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.076 qpair failed and we were unable to recover it. 00:37:41.076 [2024-11-17 02:57:49.461897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.359 [2024-11-17 02:57:49.461946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.359 qpair failed and we were unable to recover it. 00:37:41.359 [2024-11-17 02:57:49.462092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.359 [2024-11-17 02:57:49.462154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.359 qpair failed and we were unable to recover it. 00:37:41.359 [2024-11-17 02:57:49.462272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.359 [2024-11-17 02:57:49.462310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.359 qpair failed and we were unable to recover it. 
00:37:41.359 [2024-11-17 02:57:49.462466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.359 [2024-11-17 02:57:49.462504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.359 qpair failed and we were unable to recover it. 00:37:41.359 [2024-11-17 02:57:49.462621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.359 [2024-11-17 02:57:49.462658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.359 qpair failed and we were unable to recover it. 00:37:41.359 [2024-11-17 02:57:49.462772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.359 [2024-11-17 02:57:49.462808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.359 qpair failed and we were unable to recover it. 00:37:41.359 [2024-11-17 02:57:49.462946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.359 [2024-11-17 02:57:49.462983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.359 qpair failed and we were unable to recover it. 00:37:41.359 [2024-11-17 02:57:49.463165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.359 [2024-11-17 02:57:49.463216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.359 qpair failed and we were unable to recover it. 
00:37:41.359 [2024-11-17 02:57:49.463337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.359 [2024-11-17 02:57:49.463383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.359 qpair failed and we were unable to recover it. 00:37:41.359 [2024-11-17 02:57:49.463543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.359 [2024-11-17 02:57:49.463599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.359 qpair failed and we were unable to recover it. 00:37:41.359 [2024-11-17 02:57:49.463722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.359 [2024-11-17 02:57:49.463775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.359 qpair failed and we were unable to recover it. 00:37:41.359 [2024-11-17 02:57:49.463918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.359 [2024-11-17 02:57:49.463953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.359 qpair failed and we were unable to recover it. 00:37:41.359 [2024-11-17 02:57:49.464113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.359 [2024-11-17 02:57:49.464152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.359 qpair failed and we were unable to recover it. 
00:37:41.359 [2024-11-17 02:57:49.464267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.360 [2024-11-17 02:57:49.464301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.360 qpair failed and we were unable to recover it. 00:37:41.360 [2024-11-17 02:57:49.464420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.360 [2024-11-17 02:57:49.464452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.360 qpair failed and we were unable to recover it. 00:37:41.360 [2024-11-17 02:57:49.464609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.360 [2024-11-17 02:57:49.464642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.360 qpair failed and we were unable to recover it. 00:37:41.360 [2024-11-17 02:57:49.464765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.360 [2024-11-17 02:57:49.464814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.360 qpair failed and we were unable to recover it. 00:37:41.360 [2024-11-17 02:57:49.464944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.360 [2024-11-17 02:57:49.464977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.360 qpair failed and we were unable to recover it. 
00:37:41.360 [2024-11-17 02:57:49.465086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.360 [2024-11-17 02:57:49.465125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.360 qpair failed and we were unable to recover it. 00:37:41.360 [2024-11-17 02:57:49.465255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.360 [2024-11-17 02:57:49.465289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.360 qpair failed and we were unable to recover it. 00:37:41.360 [2024-11-17 02:57:49.465459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.360 [2024-11-17 02:57:49.465511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.360 qpair failed and we were unable to recover it. 00:37:41.360 [2024-11-17 02:57:49.465692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.360 [2024-11-17 02:57:49.465731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.360 qpair failed and we were unable to recover it. 00:37:41.360 [2024-11-17 02:57:49.465840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.360 [2024-11-17 02:57:49.465878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.360 qpair failed and we were unable to recover it. 
00:37:41.360 [2024-11-17 02:57:49.465994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.360 [2024-11-17 02:57:49.466030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.360 qpair failed and we were unable to recover it. 00:37:41.360 [2024-11-17 02:57:49.466226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.360 [2024-11-17 02:57:49.466261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.360 qpair failed and we were unable to recover it. 00:37:41.360 [2024-11-17 02:57:49.466382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.360 [2024-11-17 02:57:49.466420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.360 qpair failed and we were unable to recover it. 00:37:41.360 [2024-11-17 02:57:49.466559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.360 [2024-11-17 02:57:49.466602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.360 qpair failed and we were unable to recover it. 00:37:41.360 [2024-11-17 02:57:49.466739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.360 [2024-11-17 02:57:49.466776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.360 qpair failed and we were unable to recover it. 
00:37:41.360 [2024-11-17 02:57:49.466925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.360 [2024-11-17 02:57:49.466986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.360 qpair failed and we were unable to recover it. 00:37:41.360 [2024-11-17 02:57:49.467140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.360 [2024-11-17 02:57:49.467175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.360 qpair failed and we were unable to recover it. 00:37:41.360 [2024-11-17 02:57:49.467360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.360 [2024-11-17 02:57:49.467404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.360 qpair failed and we were unable to recover it. 00:37:41.360 [2024-11-17 02:57:49.467540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.360 [2024-11-17 02:57:49.467581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.360 qpair failed and we were unable to recover it. 00:37:41.360 [2024-11-17 02:57:49.467710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.360 [2024-11-17 02:57:49.467751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.360 qpair failed and we were unable to recover it. 
00:37:41.360 [2024-11-17 02:57:49.467921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.360 [2024-11-17 02:57:49.467966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.360 qpair failed and we were unable to recover it. 00:37:41.360 [2024-11-17 02:57:49.468113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.360 [2024-11-17 02:57:49.468166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.360 qpair failed and we were unable to recover it. 00:37:41.360 [2024-11-17 02:57:49.468300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.360 [2024-11-17 02:57:49.468334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.360 qpair failed and we were unable to recover it. 00:37:41.360 [2024-11-17 02:57:49.468441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.360 [2024-11-17 02:57:49.468492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.360 qpair failed and we were unable to recover it. 00:37:41.360 [2024-11-17 02:57:49.468624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.360 [2024-11-17 02:57:49.468662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.360 qpair failed and we were unable to recover it. 
00:37:41.360 [2024-11-17 02:57:49.468799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.360 [2024-11-17 02:57:49.468836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.360 qpair failed and we were unable to recover it. 00:37:41.360 [2024-11-17 02:57:49.468945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.360 [2024-11-17 02:57:49.468980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.360 qpair failed and we were unable to recover it. 00:37:41.360 [2024-11-17 02:57:49.469103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.360 [2024-11-17 02:57:49.469136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.361 qpair failed and we were unable to recover it. 00:37:41.361 [2024-11-17 02:57:49.469269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.361 [2024-11-17 02:57:49.469302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.361 qpair failed and we were unable to recover it. 00:37:41.361 [2024-11-17 02:57:49.469465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.361 [2024-11-17 02:57:49.469499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.361 qpair failed and we were unable to recover it. 
00:37:41.361 [2024-11-17 02:57:49.469607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.361 [2024-11-17 02:57:49.469642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.361 qpair failed and we were unable to recover it. 00:37:41.361 [2024-11-17 02:57:49.469753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.361 [2024-11-17 02:57:49.469786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.361 qpair failed and we were unable to recover it. 00:37:41.361 [2024-11-17 02:57:49.469921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.361 [2024-11-17 02:57:49.469957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.361 qpair failed and we were unable to recover it. 00:37:41.361 [2024-11-17 02:57:49.470091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.361 [2024-11-17 02:57:49.470132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.361 qpair failed and we were unable to recover it. 00:37:41.361 [2024-11-17 02:57:49.470276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.361 [2024-11-17 02:57:49.470312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.361 qpair failed and we were unable to recover it. 
00:37:41.361 [2024-11-17 02:57:49.470430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.361 [2024-11-17 02:57:49.470466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.361 qpair failed and we were unable to recover it. 00:37:41.361 [2024-11-17 02:57:49.470602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.361 [2024-11-17 02:57:49.470637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.361 qpair failed and we were unable to recover it. 00:37:41.361 [2024-11-17 02:57:49.470739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.361 [2024-11-17 02:57:49.470773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.361 qpair failed and we were unable to recover it. 00:37:41.361 [2024-11-17 02:57:49.470936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.361 [2024-11-17 02:57:49.470972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.361 qpair failed and we were unable to recover it. 00:37:41.361 [2024-11-17 02:57:49.471117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.361 [2024-11-17 02:57:49.471171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.361 qpair failed and we were unable to recover it. 
00:37:41.361 [2024-11-17 02:57:49.471323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.361 [2024-11-17 02:57:49.471359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.361 qpair failed and we were unable to recover it. 00:37:41.361 [2024-11-17 02:57:49.471479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.361 [2024-11-17 02:57:49.471514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.361 qpair failed and we were unable to recover it. 00:37:41.361 [2024-11-17 02:57:49.471624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.361 [2024-11-17 02:57:49.471658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.361 qpair failed and we were unable to recover it. 00:37:41.361 [2024-11-17 02:57:49.471824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.361 [2024-11-17 02:57:49.471858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.361 qpair failed and we were unable to recover it. 00:37:41.361 [2024-11-17 02:57:49.471996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.361 [2024-11-17 02:57:49.472033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.361 qpair failed and we were unable to recover it. 
00:37:41.361 [2024-11-17 02:57:49.472161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.361 [2024-11-17 02:57:49.472198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.361 qpair failed and we were unable to recover it. 00:37:41.361 [2024-11-17 02:57:49.472329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.361 [2024-11-17 02:57:49.472364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.361 qpair failed and we were unable to recover it. 00:37:41.361 [2024-11-17 02:57:49.472525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.361 [2024-11-17 02:57:49.472558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.361 qpair failed and we were unable to recover it. 00:37:41.361 [2024-11-17 02:57:49.472658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.361 [2024-11-17 02:57:49.472692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.361 qpair failed and we were unable to recover it. 00:37:41.361 [2024-11-17 02:57:49.472821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.361 [2024-11-17 02:57:49.472856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.361 qpair failed and we were unable to recover it. 
00:37:41.361 [2024-11-17 02:57:49.473014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.361 [2024-11-17 02:57:49.473048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.361 qpair failed and we were unable to recover it. 00:37:41.361 [2024-11-17 02:57:49.473193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.361 [2024-11-17 02:57:49.473228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.361 qpair failed and we were unable to recover it. 00:37:41.361 [2024-11-17 02:57:49.473339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.361 [2024-11-17 02:57:49.473373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.361 qpair failed and we were unable to recover it. 00:37:41.361 [2024-11-17 02:57:49.473488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.361 [2024-11-17 02:57:49.473530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.361 qpair failed and we were unable to recover it. 00:37:41.361 [2024-11-17 02:57:49.473625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.361 [2024-11-17 02:57:49.473660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.361 qpair failed and we were unable to recover it. 
00:37:41.361 [2024-11-17 02:57:49.473760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.361 [2024-11-17 02:57:49.473795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.362 qpair failed and we were unable to recover it. 00:37:41.362 [2024-11-17 02:57:49.473911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.362 [2024-11-17 02:57:49.473947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.362 qpair failed and we were unable to recover it. 00:37:41.362 [2024-11-17 02:57:49.474066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.362 [2024-11-17 02:57:49.474123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.362 qpair failed and we were unable to recover it. 00:37:41.362 [2024-11-17 02:57:49.474281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.362 [2024-11-17 02:57:49.474329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.362 qpair failed and we were unable to recover it. 00:37:41.362 [2024-11-17 02:57:49.474444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.362 [2024-11-17 02:57:49.474479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.362 qpair failed and we were unable to recover it. 
00:37:41.362 [2024-11-17 02:57:49.474647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.362 [2024-11-17 02:57:49.474681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.362 qpair failed and we were unable to recover it. 00:37:41.362 [2024-11-17 02:57:49.474812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.362 [2024-11-17 02:57:49.474845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.362 qpair failed and we were unable to recover it. 00:37:41.362 [2024-11-17 02:57:49.474957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.362 [2024-11-17 02:57:49.474991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.362 qpair failed and we were unable to recover it. 00:37:41.362 [2024-11-17 02:57:49.475127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.362 [2024-11-17 02:57:49.475162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.362 qpair failed and we were unable to recover it. 00:37:41.362 [2024-11-17 02:57:49.475303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.362 [2024-11-17 02:57:49.475337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.362 qpair failed and we were unable to recover it. 
00:37:41.362 [2024-11-17 02:57:49.475451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.362 [2024-11-17 02:57:49.475487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.362 qpair failed and we were unable to recover it. 00:37:41.362 [2024-11-17 02:57:49.475615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.362 [2024-11-17 02:57:49.475664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.362 qpair failed and we were unable to recover it. 00:37:41.362 [2024-11-17 02:57:49.475809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.362 [2024-11-17 02:57:49.475847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.362 qpair failed and we were unable to recover it. 00:37:41.362 [2024-11-17 02:57:49.475983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.362 [2024-11-17 02:57:49.476031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.362 qpair failed and we were unable to recover it. 00:37:41.362 [2024-11-17 02:57:49.476158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.362 [2024-11-17 02:57:49.476194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.362 qpair failed and we were unable to recover it. 
00:37:41.362 [2024-11-17 02:57:49.476305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.362 [2024-11-17 02:57:49.476338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.362 qpair failed and we were unable to recover it. 00:37:41.362 [2024-11-17 02:57:49.476497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.362 [2024-11-17 02:57:49.476531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.362 qpair failed and we were unable to recover it. 00:37:41.362 [2024-11-17 02:57:49.476691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.362 [2024-11-17 02:57:49.476727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.362 qpair failed and we were unable to recover it. 00:37:41.362 [2024-11-17 02:57:49.476830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.362 [2024-11-17 02:57:49.476865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.362 qpair failed and we were unable to recover it. 00:37:41.362 [2024-11-17 02:57:49.476974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.362 [2024-11-17 02:57:49.477010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.362 qpair failed and we were unable to recover it. 
00:37:41.362 [2024-11-17 02:57:49.477127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.362 [2024-11-17 02:57:49.477163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.362 qpair failed and we were unable to recover it. 00:37:41.362 [2024-11-17 02:57:49.477270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.362 [2024-11-17 02:57:49.477304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.362 qpair failed and we were unable to recover it. 00:37:41.362 [2024-11-17 02:57:49.477437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.362 [2024-11-17 02:57:49.477469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.362 qpair failed and we were unable to recover it. 00:37:41.362 [2024-11-17 02:57:49.477605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.362 [2024-11-17 02:57:49.477638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.362 qpair failed and we were unable to recover it. 00:37:41.362 [2024-11-17 02:57:49.477767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.362 [2024-11-17 02:57:49.477799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.362 qpair failed and we were unable to recover it. 
00:37:41.362 [2024-11-17 02:57:49.477940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.362 [2024-11-17 02:57:49.477976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.362 qpair failed and we were unable to recover it. 00:37:41.362 [2024-11-17 02:57:49.478094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.362 [2024-11-17 02:57:49.478149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.362 qpair failed and we were unable to recover it. 00:37:41.362 [2024-11-17 02:57:49.478318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.362 [2024-11-17 02:57:49.478374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.362 qpair failed and we were unable to recover it. 00:37:41.362 [2024-11-17 02:57:49.478518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.362 [2024-11-17 02:57:49.478554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.362 qpair failed and we were unable to recover it. 00:37:41.362 [2024-11-17 02:57:49.478658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.362 [2024-11-17 02:57:49.478692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.362 qpair failed and we were unable to recover it. 
00:37:41.362 [2024-11-17 02:57:49.478804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.362 [2024-11-17 02:57:49.478838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.362 qpair failed and we were unable to recover it. 00:37:41.362 [2024-11-17 02:57:49.478970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.362 [2024-11-17 02:57:49.479005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.362 qpair failed and we were unable to recover it. 00:37:41.362 [2024-11-17 02:57:49.479158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.362 [2024-11-17 02:57:49.479208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.362 qpair failed and we were unable to recover it. 00:37:41.362 [2024-11-17 02:57:49.479328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.362 [2024-11-17 02:57:49.479367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.362 qpair failed and we were unable to recover it. 00:37:41.362 [2024-11-17 02:57:49.479527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.362 [2024-11-17 02:57:49.479575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.362 qpair failed and we were unable to recover it. 
00:37:41.362 [2024-11-17 02:57:49.479716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.363 [2024-11-17 02:57:49.479752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.363 qpair failed and we were unable to recover it. 00:37:41.363 [2024-11-17 02:57:49.479862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.363 [2024-11-17 02:57:49.479900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.363 qpair failed and we were unable to recover it. 00:37:41.363 [2024-11-17 02:57:49.480041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.363 [2024-11-17 02:57:49.480077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.363 qpair failed and we were unable to recover it. 00:37:41.363 [2024-11-17 02:57:49.480214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.363 [2024-11-17 02:57:49.480257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.363 qpair failed and we were unable to recover it. 00:37:41.363 [2024-11-17 02:57:49.480414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.363 [2024-11-17 02:57:49.480449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.363 qpair failed and we were unable to recover it. 
00:37:41.363 [2024-11-17 02:57:49.480581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.363 [2024-11-17 02:57:49.480614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.363 qpair failed and we were unable to recover it. 00:37:41.363 [2024-11-17 02:57:49.480725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.363 [2024-11-17 02:57:49.480758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.363 qpair failed and we were unable to recover it. 00:37:41.363 [2024-11-17 02:57:49.480871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.363 [2024-11-17 02:57:49.480904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.363 qpair failed and we were unable to recover it. 00:37:41.363 [2024-11-17 02:57:49.481014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.363 [2024-11-17 02:57:49.481046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.363 qpair failed and we were unable to recover it. 00:37:41.363 [2024-11-17 02:57:49.481190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.363 [2024-11-17 02:57:49.481224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.363 qpair failed and we were unable to recover it. 
00:37:41.363 [2024-11-17 02:57:49.481336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.363 [2024-11-17 02:57:49.481373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.363 qpair failed and we were unable to recover it. 00:37:41.363 [2024-11-17 02:57:49.481478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.363 [2024-11-17 02:57:49.481513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.363 qpair failed and we were unable to recover it. 00:37:41.363 [2024-11-17 02:57:49.481628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.363 [2024-11-17 02:57:49.481663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.363 qpair failed and we were unable to recover it. 00:37:41.363 [2024-11-17 02:57:49.481794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.363 [2024-11-17 02:57:49.481828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.363 qpair failed and we were unable to recover it. 00:37:41.363 [2024-11-17 02:57:49.481979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.363 [2024-11-17 02:57:49.482028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.363 qpair failed and we were unable to recover it. 
00:37:41.363 [2024-11-17 02:57:49.482186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.363 [2024-11-17 02:57:49.482234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.363 qpair failed and we were unable to recover it. 00:37:41.363 [2024-11-17 02:57:49.482342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.363 [2024-11-17 02:57:49.482376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.363 qpair failed and we were unable to recover it. 00:37:41.363 [2024-11-17 02:57:49.482508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.363 [2024-11-17 02:57:49.482541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.363 qpair failed and we were unable to recover it. 00:37:41.363 [2024-11-17 02:57:49.482698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.363 [2024-11-17 02:57:49.482732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.363 qpair failed and we were unable to recover it. 00:37:41.363 [2024-11-17 02:57:49.482864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.363 [2024-11-17 02:57:49.482897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.363 qpair failed and we were unable to recover it. 
00:37:41.363 [2024-11-17 02:57:49.483017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.363 [2024-11-17 02:57:49.483066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.363 qpair failed and we were unable to recover it. 00:37:41.363 [2024-11-17 02:57:49.483215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.363 [2024-11-17 02:57:49.483264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.363 qpair failed and we were unable to recover it. 00:37:41.363 [2024-11-17 02:57:49.483453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.363 [2024-11-17 02:57:49.483502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.363 qpair failed and we were unable to recover it. 00:37:41.363 [2024-11-17 02:57:49.483643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.363 [2024-11-17 02:57:49.483679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.363 qpair failed and we were unable to recover it. 00:37:41.363 [2024-11-17 02:57:49.483813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.363 [2024-11-17 02:57:49.483847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.363 qpair failed and we were unable to recover it. 
00:37:41.363 [2024-11-17 02:57:49.483951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.363 [2024-11-17 02:57:49.483985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.363 qpair failed and we were unable to recover it. 00:37:41.363 [2024-11-17 02:57:49.484153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.363 [2024-11-17 02:57:49.484188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.363 qpair failed and we were unable to recover it. 00:37:41.363 [2024-11-17 02:57:49.484302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.363 [2024-11-17 02:57:49.484341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.363 qpair failed and we were unable to recover it. 00:37:41.363 [2024-11-17 02:57:49.484476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.363 [2024-11-17 02:57:49.484510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.363 qpair failed and we were unable to recover it. 00:37:41.363 [2024-11-17 02:57:49.484618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.363 [2024-11-17 02:57:49.484654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.363 qpair failed and we were unable to recover it. 
00:37:41.363 [2024-11-17 02:57:49.484824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.363 [2024-11-17 02:57:49.484859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.363 qpair failed and we were unable to recover it. 00:37:41.363 [2024-11-17 02:57:49.485011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.363 [2024-11-17 02:57:49.485045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.363 qpair failed and we were unable to recover it. 00:37:41.363 [2024-11-17 02:57:49.485179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.363 [2024-11-17 02:57:49.485227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.363 qpair failed and we were unable to recover it. 00:37:41.363 [2024-11-17 02:57:49.485407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.363 [2024-11-17 02:57:49.485443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.363 qpair failed and we were unable to recover it. 00:37:41.363 [2024-11-17 02:57:49.485557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.363 [2024-11-17 02:57:49.485592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.363 qpair failed and we were unable to recover it. 
00:37:41.363 [2024-11-17 02:57:49.485710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.363 [2024-11-17 02:57:49.485744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.363 qpair failed and we were unable to recover it. 00:37:41.363 [2024-11-17 02:57:49.485860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.364 [2024-11-17 02:57:49.485894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.364 qpair failed and we were unable to recover it. 00:37:41.364 [2024-11-17 02:57:49.486028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.364 [2024-11-17 02:57:49.486062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.364 qpair failed and we were unable to recover it. 00:37:41.364 [2024-11-17 02:57:49.486205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.364 [2024-11-17 02:57:49.486239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.364 qpair failed and we were unable to recover it. 00:37:41.364 [2024-11-17 02:57:49.486385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.364 [2024-11-17 02:57:49.486422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.364 qpair failed and we were unable to recover it. 
00:37:41.364 [2024-11-17 02:57:49.486566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.364 [2024-11-17 02:57:49.486600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.364 qpair failed and we were unable to recover it. 00:37:41.364 [2024-11-17 02:57:49.486708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.364 [2024-11-17 02:57:49.486741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.364 qpair failed and we were unable to recover it. 00:37:41.364 [2024-11-17 02:57:49.486877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.364 [2024-11-17 02:57:49.486910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.364 qpair failed and we were unable to recover it. 00:37:41.364 [2024-11-17 02:57:49.487015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.364 [2024-11-17 02:57:49.487053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.364 qpair failed and we were unable to recover it. 00:37:41.364 [2024-11-17 02:57:49.487245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.364 [2024-11-17 02:57:49.487308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.364 qpair failed and we were unable to recover it. 
00:37:41.364 [2024-11-17 02:57:49.487430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.364 [2024-11-17 02:57:49.487465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.364 qpair failed and we were unable to recover it. 00:37:41.364 [2024-11-17 02:57:49.487631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.364 [2024-11-17 02:57:49.487665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.364 qpair failed and we were unable to recover it. 00:37:41.364 [2024-11-17 02:57:49.487797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.364 [2024-11-17 02:57:49.487831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.364 qpair failed and we were unable to recover it. 00:37:41.364 [2024-11-17 02:57:49.487958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.364 [2024-11-17 02:57:49.487992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.364 qpair failed and we were unable to recover it. 00:37:41.364 [2024-11-17 02:57:49.488090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.364 [2024-11-17 02:57:49.488130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.364 qpair failed and we were unable to recover it. 
00:37:41.364 [2024-11-17 02:57:49.488301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.364 [2024-11-17 02:57:49.488337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.364 qpair failed and we were unable to recover it. 00:37:41.364 [2024-11-17 02:57:49.488476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.364 [2024-11-17 02:57:49.488509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.364 qpair failed and we were unable to recover it. 00:37:41.364 [2024-11-17 02:57:49.488608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.364 [2024-11-17 02:57:49.488641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.364 qpair failed and we were unable to recover it. 00:37:41.364 [2024-11-17 02:57:49.488777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.364 [2024-11-17 02:57:49.488811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.364 qpair failed and we were unable to recover it. 00:37:41.364 [2024-11-17 02:57:49.488945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.364 [2024-11-17 02:57:49.488979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.364 qpair failed and we were unable to recover it. 
[The same three-line pattern repeats for the interval 02:57:49.489–02:57:49.507: posix.c:1054:posix_sock_create reports "connect() failed, errno = 111" (ECONNREFUSED) and nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair values 0x6150001f2f00, 0x6150001ffe80, 0x615000210000, and 0x61500021ff00, all with addr=10.0.0.2, port=4420; every attempt ends with "qpair failed and we were unable to recover it."]
00:37:41.368 [2024-11-17 02:57:49.507736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.368 [2024-11-17 02:57:49.507770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.368 qpair failed and we were unable to recover it. 00:37:41.368 [2024-11-17 02:57:49.507877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.368 [2024-11-17 02:57:49.507910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.368 qpair failed and we were unable to recover it. 00:37:41.368 [2024-11-17 02:57:49.508090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.368 [2024-11-17 02:57:49.508146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.368 qpair failed and we were unable to recover it. 00:37:41.368 [2024-11-17 02:57:49.508290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.368 [2024-11-17 02:57:49.508326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.368 qpair failed and we were unable to recover it. 00:37:41.368 [2024-11-17 02:57:49.508478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.368 [2024-11-17 02:57:49.508512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.368 qpair failed and we were unable to recover it. 
00:37:41.368 [2024-11-17 02:57:49.508622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.368 [2024-11-17 02:57:49.508656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.368 qpair failed and we were unable to recover it. 00:37:41.368 [2024-11-17 02:57:49.508790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.368 [2024-11-17 02:57:49.508822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.368 qpair failed and we were unable to recover it. 00:37:41.368 [2024-11-17 02:57:49.508959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.368 [2024-11-17 02:57:49.508992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.368 qpair failed and we were unable to recover it. 00:37:41.368 [2024-11-17 02:57:49.509132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.368 [2024-11-17 02:57:49.509168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.368 qpair failed and we were unable to recover it. 00:37:41.368 [2024-11-17 02:57:49.509293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.368 [2024-11-17 02:57:49.509341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.368 qpair failed and we were unable to recover it. 
00:37:41.368 [2024-11-17 02:57:49.509521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.368 [2024-11-17 02:57:49.509569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.368 qpair failed and we were unable to recover it. 00:37:41.368 [2024-11-17 02:57:49.509712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.368 [2024-11-17 02:57:49.509746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.368 qpair failed and we were unable to recover it. 00:37:41.368 [2024-11-17 02:57:49.509888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.368 [2024-11-17 02:57:49.509921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.368 qpair failed and we were unable to recover it. 00:37:41.368 [2024-11-17 02:57:49.510046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.368 [2024-11-17 02:57:49.510086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.368 qpair failed and we were unable to recover it. 00:37:41.368 [2024-11-17 02:57:49.510191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.368 [2024-11-17 02:57:49.510224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.368 qpair failed and we were unable to recover it. 
00:37:41.368 [2024-11-17 02:57:49.510331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.368 [2024-11-17 02:57:49.510364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.368 qpair failed and we were unable to recover it. 00:37:41.368 [2024-11-17 02:57:49.510470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.368 [2024-11-17 02:57:49.510503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.368 qpair failed and we were unable to recover it. 00:37:41.368 [2024-11-17 02:57:49.510605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.368 [2024-11-17 02:57:49.510637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.368 qpair failed and we were unable to recover it. 00:37:41.368 [2024-11-17 02:57:49.510770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.368 [2024-11-17 02:57:49.510806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.368 qpair failed and we were unable to recover it. 00:37:41.368 [2024-11-17 02:57:49.510939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.368 [2024-11-17 02:57:49.510977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.368 qpair failed and we were unable to recover it. 
00:37:41.368 [2024-11-17 02:57:49.511111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.368 [2024-11-17 02:57:49.511165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.368 qpair failed and we were unable to recover it. 00:37:41.368 [2024-11-17 02:57:49.511314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.368 [2024-11-17 02:57:49.511351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.368 qpair failed and we were unable to recover it. 00:37:41.368 [2024-11-17 02:57:49.511455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.368 [2024-11-17 02:57:49.511489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.368 qpair failed and we were unable to recover it. 00:37:41.368 [2024-11-17 02:57:49.511648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.368 [2024-11-17 02:57:49.511682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.368 qpair failed and we were unable to recover it. 00:37:41.368 [2024-11-17 02:57:49.511796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.368 [2024-11-17 02:57:49.511831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.368 qpair failed and we were unable to recover it. 
00:37:41.368 [2024-11-17 02:57:49.511959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.369 [2024-11-17 02:57:49.512008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.369 qpair failed and we were unable to recover it. 00:37:41.369 [2024-11-17 02:57:49.512134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.369 [2024-11-17 02:57:49.512172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.369 qpair failed and we were unable to recover it. 00:37:41.369 [2024-11-17 02:57:49.512312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.369 [2024-11-17 02:57:49.512346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.369 qpair failed and we were unable to recover it. 00:37:41.369 [2024-11-17 02:57:49.512457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.369 [2024-11-17 02:57:49.512491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.369 qpair failed and we were unable to recover it. 00:37:41.369 [2024-11-17 02:57:49.512622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.369 [2024-11-17 02:57:49.512655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.369 qpair failed and we were unable to recover it. 
00:37:41.369 [2024-11-17 02:57:49.512784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.369 [2024-11-17 02:57:49.512819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.369 qpair failed and we were unable to recover it. 00:37:41.369 [2024-11-17 02:57:49.512961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.369 [2024-11-17 02:57:49.512996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.369 qpair failed and we were unable to recover it. 00:37:41.369 [2024-11-17 02:57:49.513108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.369 [2024-11-17 02:57:49.513148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.369 qpair failed and we were unable to recover it. 00:37:41.369 [2024-11-17 02:57:49.513260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.369 [2024-11-17 02:57:49.513295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.369 qpair failed and we were unable to recover it. 00:37:41.369 [2024-11-17 02:57:49.513437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.369 [2024-11-17 02:57:49.513471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.369 qpair failed and we were unable to recover it. 
00:37:41.369 [2024-11-17 02:57:49.513581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.369 [2024-11-17 02:57:49.513616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.369 qpair failed and we were unable to recover it. 00:37:41.369 [2024-11-17 02:57:49.513780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.369 [2024-11-17 02:57:49.513815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.369 qpair failed and we were unable to recover it. 00:37:41.369 [2024-11-17 02:57:49.513950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.369 [2024-11-17 02:57:49.513986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.369 qpair failed and we were unable to recover it. 00:37:41.369 [2024-11-17 02:57:49.514149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.369 [2024-11-17 02:57:49.514203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.369 qpair failed and we were unable to recover it. 00:37:41.369 [2024-11-17 02:57:49.514315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.369 [2024-11-17 02:57:49.514352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.369 qpair failed and we were unable to recover it. 
00:37:41.369 [2024-11-17 02:57:49.514470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.369 [2024-11-17 02:57:49.514504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.369 qpair failed and we were unable to recover it. 00:37:41.369 [2024-11-17 02:57:49.514620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.369 [2024-11-17 02:57:49.514654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.369 qpair failed and we were unable to recover it. 00:37:41.369 [2024-11-17 02:57:49.514783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.369 [2024-11-17 02:57:49.514817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.369 qpair failed and we were unable to recover it. 00:37:41.369 [2024-11-17 02:57:49.514928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.369 [2024-11-17 02:57:49.514964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.369 qpair failed and we were unable to recover it. 00:37:41.369 [2024-11-17 02:57:49.515128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.369 [2024-11-17 02:57:49.515164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.369 qpair failed and we were unable to recover it. 
00:37:41.369 [2024-11-17 02:57:49.515306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.369 [2024-11-17 02:57:49.515343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.369 qpair failed and we were unable to recover it. 00:37:41.369 [2024-11-17 02:57:49.515512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.369 [2024-11-17 02:57:49.515547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.369 qpair failed and we were unable to recover it. 00:37:41.369 [2024-11-17 02:57:49.515683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.369 [2024-11-17 02:57:49.515715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.369 qpair failed and we were unable to recover it. 00:37:41.369 [2024-11-17 02:57:49.515820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.369 [2024-11-17 02:57:49.515853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.369 qpair failed and we were unable to recover it. 00:37:41.369 [2024-11-17 02:57:49.515983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.369 [2024-11-17 02:57:49.516016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.369 qpair failed and we were unable to recover it. 
00:37:41.369 [2024-11-17 02:57:49.516153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.369 [2024-11-17 02:57:49.516189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.369 qpair failed and we were unable to recover it. 00:37:41.369 [2024-11-17 02:57:49.516353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.369 [2024-11-17 02:57:49.516387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.369 qpair failed and we were unable to recover it. 00:37:41.369 [2024-11-17 02:57:49.516571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.369 [2024-11-17 02:57:49.516606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.369 qpair failed and we were unable to recover it. 00:37:41.369 [2024-11-17 02:57:49.516740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.369 [2024-11-17 02:57:49.516774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.369 qpair failed and we were unable to recover it. 00:37:41.369 [2024-11-17 02:57:49.516898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.369 [2024-11-17 02:57:49.516945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.369 qpair failed and we were unable to recover it. 
00:37:41.369 [2024-11-17 02:57:49.517105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.370 [2024-11-17 02:57:49.517154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.370 qpair failed and we were unable to recover it. 00:37:41.370 [2024-11-17 02:57:49.517278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.370 [2024-11-17 02:57:49.517314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.370 qpair failed and we were unable to recover it. 00:37:41.370 [2024-11-17 02:57:49.517446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.370 [2024-11-17 02:57:49.517478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.370 qpair failed and we were unable to recover it. 00:37:41.370 [2024-11-17 02:57:49.517611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.370 [2024-11-17 02:57:49.517643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.370 qpair failed and we were unable to recover it. 00:37:41.370 [2024-11-17 02:57:49.517775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.370 [2024-11-17 02:57:49.517809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.370 qpair failed and we were unable to recover it. 
00:37:41.370 [2024-11-17 02:57:49.517950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.370 [2024-11-17 02:57:49.517986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.370 qpair failed and we were unable to recover it. 00:37:41.370 [2024-11-17 02:57:49.518131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.370 [2024-11-17 02:57:49.518170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.370 qpair failed and we were unable to recover it. 00:37:41.370 [2024-11-17 02:57:49.518278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.370 [2024-11-17 02:57:49.518314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.370 qpair failed and we were unable to recover it. 00:37:41.370 [2024-11-17 02:57:49.518450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.370 [2024-11-17 02:57:49.518485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.370 qpair failed and we were unable to recover it. 00:37:41.370 [2024-11-17 02:57:49.518591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.370 [2024-11-17 02:57:49.518625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.370 qpair failed and we were unable to recover it. 
00:37:41.370 [2024-11-17 02:57:49.518786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.370 [2024-11-17 02:57:49.518835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.370 qpair failed and we were unable to recover it. 00:37:41.370 [2024-11-17 02:57:49.518984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.370 [2024-11-17 02:57:49.519019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.370 qpair failed and we were unable to recover it. 00:37:41.370 [2024-11-17 02:57:49.519161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.370 [2024-11-17 02:57:49.519197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.370 qpair failed and we were unable to recover it. 00:37:41.370 [2024-11-17 02:57:49.519313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.370 [2024-11-17 02:57:49.519347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.370 qpair failed and we were unable to recover it. 00:37:41.370 [2024-11-17 02:57:49.519481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.370 [2024-11-17 02:57:49.519515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.370 qpair failed and we were unable to recover it. 
00:37:41.370 [2024-11-17 02:57:49.519649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.370 [2024-11-17 02:57:49.519683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.370 qpair failed and we were unable to recover it. 00:37:41.370 [2024-11-17 02:57:49.519823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.370 [2024-11-17 02:57:49.519858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.370 qpair failed and we were unable to recover it. 00:37:41.370 [2024-11-17 02:57:49.520006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.370 [2024-11-17 02:57:49.520044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.370 qpair failed and we were unable to recover it. 00:37:41.370 [2024-11-17 02:57:49.520192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.370 [2024-11-17 02:57:49.520240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.370 qpair failed and we were unable to recover it. 00:37:41.370 [2024-11-17 02:57:49.520390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.370 [2024-11-17 02:57:49.520426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.370 qpair failed and we were unable to recover it. 
00:37:41.370 [2024-11-17 02:57:49.520584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.370 [2024-11-17 02:57:49.520617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.370 qpair failed and we were unable to recover it.
00:37:41.370 [2024-11-17 02:57:49.520752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.370 [2024-11-17 02:57:49.520785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.370 qpair failed and we were unable to recover it.
00:37:41.370 [2024-11-17 02:57:49.520893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.370 [2024-11-17 02:57:49.520925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.370 qpair failed and we were unable to recover it.
00:37:41.370 [2024-11-17 02:57:49.521021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.370 [2024-11-17 02:57:49.521060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.370 qpair failed and we were unable to recover it.
00:37:41.370 [2024-11-17 02:57:49.521208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.370 [2024-11-17 02:57:49.521242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.370 qpair failed and we were unable to recover it.
00:37:41.370 [2024-11-17 02:57:49.521380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.370 [2024-11-17 02:57:49.521416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.370 qpair failed and we were unable to recover it.
00:37:41.370 [2024-11-17 02:57:49.521572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.370 [2024-11-17 02:57:49.521620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.370 qpair failed and we were unable to recover it.
00:37:41.370 [2024-11-17 02:57:49.521793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.370 [2024-11-17 02:57:49.521829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.370 qpair failed and we were unable to recover it.
00:37:41.370 [2024-11-17 02:57:49.521937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.370 [2024-11-17 02:57:49.521971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.370 qpair failed and we were unable to recover it.
00:37:41.371 [2024-11-17 02:57:49.522113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.371 [2024-11-17 02:57:49.522148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.371 qpair failed and we were unable to recover it.
00:37:41.371 [2024-11-17 02:57:49.522258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.371 [2024-11-17 02:57:49.522294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.371 qpair failed and we were unable to recover it.
00:37:41.371 [2024-11-17 02:57:49.522439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.371 [2024-11-17 02:57:49.522474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.371 qpair failed and we were unable to recover it.
00:37:41.371 [2024-11-17 02:57:49.522614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.371 [2024-11-17 02:57:49.522648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.371 qpair failed and we were unable to recover it.
00:37:41.371 [2024-11-17 02:57:49.522784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.371 [2024-11-17 02:57:49.522817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.371 qpair failed and we were unable to recover it.
00:37:41.371 [2024-11-17 02:57:49.522953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.371 [2024-11-17 02:57:49.522986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.371 qpair failed and we were unable to recover it.
00:37:41.371 [2024-11-17 02:57:49.523131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.371 [2024-11-17 02:57:49.523166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.371 qpair failed and we were unable to recover it.
00:37:41.371 [2024-11-17 02:57:49.523276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.371 [2024-11-17 02:57:49.523311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.371 qpair failed and we were unable to recover it.
00:37:41.371 [2024-11-17 02:57:49.523450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.371 [2024-11-17 02:57:49.523486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.371 qpair failed and we were unable to recover it.
00:37:41.371 [2024-11-17 02:57:49.523617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.371 [2024-11-17 02:57:49.523652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.371 qpair failed and we were unable to recover it.
00:37:41.371 [2024-11-17 02:57:49.523787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.371 [2024-11-17 02:57:49.523822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.371 qpair failed and we were unable to recover it.
00:37:41.371 [2024-11-17 02:57:49.523928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.371 [2024-11-17 02:57:49.523962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.371 qpair failed and we were unable to recover it.
00:37:41.371 [2024-11-17 02:57:49.524116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.371 [2024-11-17 02:57:49.524153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.371 qpair failed and we were unable to recover it.
00:37:41.371 [2024-11-17 02:57:49.524281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.371 [2024-11-17 02:57:49.524314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.371 qpair failed and we were unable to recover it.
00:37:41.371 [2024-11-17 02:57:49.524475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.371 [2024-11-17 02:57:49.524508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.371 qpair failed and we were unable to recover it.
00:37:41.371 [2024-11-17 02:57:49.524643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.371 [2024-11-17 02:57:49.524676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.371 qpair failed and we were unable to recover it.
00:37:41.371 [2024-11-17 02:57:49.524808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.371 [2024-11-17 02:57:49.524841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.371 qpair failed and we were unable to recover it.
00:37:41.371 [2024-11-17 02:57:49.524965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.371 [2024-11-17 02:57:49.524998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.371 qpair failed and we were unable to recover it.
00:37:41.371 [2024-11-17 02:57:49.525145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.371 [2024-11-17 02:57:49.525181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.371 qpair failed and we were unable to recover it.
00:37:41.371 [2024-11-17 02:57:49.525296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.371 [2024-11-17 02:57:49.525330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.371 qpair failed and we were unable to recover it.
00:37:41.371 [2024-11-17 02:57:49.525441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.371 [2024-11-17 02:57:49.525477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.371 qpair failed and we were unable to recover it.
00:37:41.371 [2024-11-17 02:57:49.525653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.371 [2024-11-17 02:57:49.525688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.371 qpair failed and we were unable to recover it.
00:37:41.371 [2024-11-17 02:57:49.525825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.371 [2024-11-17 02:57:49.525858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.371 qpair failed and we were unable to recover it.
00:37:41.371 [2024-11-17 02:57:49.525991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.371 [2024-11-17 02:57:49.526024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.371 qpair failed and we were unable to recover it.
00:37:41.371 [2024-11-17 02:57:49.526153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.371 [2024-11-17 02:57:49.526188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.371 qpair failed and we were unable to recover it.
00:37:41.371 [2024-11-17 02:57:49.526316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.371 [2024-11-17 02:57:49.526349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.371 qpair failed and we were unable to recover it.
00:37:41.371 [2024-11-17 02:57:49.526479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.371 [2024-11-17 02:57:49.526512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.371 qpair failed and we were unable to recover it.
00:37:41.371 [2024-11-17 02:57:49.526640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.371 [2024-11-17 02:57:49.526673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.371 qpair failed and we were unable to recover it.
00:37:41.371 [2024-11-17 02:57:49.526777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.371 [2024-11-17 02:57:49.526809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.371 qpair failed and we were unable to recover it.
00:37:41.371 [2024-11-17 02:57:49.526955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.371 [2024-11-17 02:57:49.527003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.371 qpair failed and we were unable to recover it.
00:37:41.371 [2024-11-17 02:57:49.527145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.371 [2024-11-17 02:57:49.527182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.371 qpair failed and we were unable to recover it.
00:37:41.371 [2024-11-17 02:57:49.527319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.371 [2024-11-17 02:57:49.527354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.371 qpair failed and we were unable to recover it.
00:37:41.371 [2024-11-17 02:57:49.527491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.371 [2024-11-17 02:57:49.527525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.371 qpair failed and we were unable to recover it.
00:37:41.371 [2024-11-17 02:57:49.527632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.371 [2024-11-17 02:57:49.527667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.371 qpair failed and we were unable to recover it.
00:37:41.371 [2024-11-17 02:57:49.527829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.371 [2024-11-17 02:57:49.527867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.371 qpair failed and we were unable to recover it.
00:37:41.371 [2024-11-17 02:57:49.527972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.371 [2024-11-17 02:57:49.528006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.371 qpair failed and we were unable to recover it.
00:37:41.371 [2024-11-17 02:57:49.528156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.372 [2024-11-17 02:57:49.528189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.372 qpair failed and we were unable to recover it.
00:37:41.372 [2024-11-17 02:57:49.528318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.372 [2024-11-17 02:57:49.528367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.372 qpair failed and we were unable to recover it.
00:37:41.372 [2024-11-17 02:57:49.528529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.372 [2024-11-17 02:57:49.528566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.372 qpair failed and we were unable to recover it.
00:37:41.372 [2024-11-17 02:57:49.528674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.372 [2024-11-17 02:57:49.528709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.372 qpair failed and we were unable to recover it.
00:37:41.372 [2024-11-17 02:57:49.528868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.372 [2024-11-17 02:57:49.528902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.372 qpair failed and we were unable to recover it.
00:37:41.372 [2024-11-17 02:57:49.529048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.372 [2024-11-17 02:57:49.529105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.372 qpair failed and we were unable to recover it.
00:37:41.372 [2024-11-17 02:57:49.529263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.372 [2024-11-17 02:57:49.529311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.372 qpair failed and we were unable to recover it.
00:37:41.372 [2024-11-17 02:57:49.529449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.372 [2024-11-17 02:57:49.529487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.372 qpair failed and we were unable to recover it.
00:37:41.372 [2024-11-17 02:57:49.529652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.372 [2024-11-17 02:57:49.529686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.372 qpair failed and we were unable to recover it.
00:37:41.372 [2024-11-17 02:57:49.529785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.372 [2024-11-17 02:57:49.529819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.372 qpair failed and we were unable to recover it.
00:37:41.372 [2024-11-17 02:57:49.529961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.372 [2024-11-17 02:57:49.529997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.372 qpair failed and we were unable to recover it.
00:37:41.372 [2024-11-17 02:57:49.530153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.372 [2024-11-17 02:57:49.530202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.372 qpair failed and we were unable to recover it.
00:37:41.372 [2024-11-17 02:57:49.530327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.372 [2024-11-17 02:57:49.530366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.372 qpair failed and we were unable to recover it.
00:37:41.372 [2024-11-17 02:57:49.530469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.372 [2024-11-17 02:57:49.530504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.372 qpair failed and we were unable to recover it.
00:37:41.372 [2024-11-17 02:57:49.530618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.372 [2024-11-17 02:57:49.530652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.372 qpair failed and we were unable to recover it.
00:37:41.372 [2024-11-17 02:57:49.530790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.372 [2024-11-17 02:57:49.530824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.372 qpair failed and we were unable to recover it.
00:37:41.372 [2024-11-17 02:57:49.530932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.372 [2024-11-17 02:57:49.530969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.372 qpair failed and we were unable to recover it.
00:37:41.372 [2024-11-17 02:57:49.531121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.372 [2024-11-17 02:57:49.531171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.372 qpair failed and we were unable to recover it.
00:37:41.372 [2024-11-17 02:57:49.531297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.372 [2024-11-17 02:57:49.531336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.372 qpair failed and we were unable to recover it.
00:37:41.372 [2024-11-17 02:57:49.531467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.372 [2024-11-17 02:57:49.531501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.372 qpair failed and we were unable to recover it.
00:37:41.372 [2024-11-17 02:57:49.531631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.372 [2024-11-17 02:57:49.531663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.372 qpair failed and we were unable to recover it.
00:37:41.372 [2024-11-17 02:57:49.531792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.372 [2024-11-17 02:57:49.531825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.372 qpair failed and we were unable to recover it.
00:37:41.372 [2024-11-17 02:57:49.531988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.372 [2024-11-17 02:57:49.532023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.372 qpair failed and we were unable to recover it.
00:37:41.372 [2024-11-17 02:57:49.532164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.372 [2024-11-17 02:57:49.532200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.372 qpair failed and we were unable to recover it.
00:37:41.372 [2024-11-17 02:57:49.532313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.372 [2024-11-17 02:57:49.532351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.372 qpair failed and we were unable to recover it.
00:37:41.372 [2024-11-17 02:57:49.532502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.372 [2024-11-17 02:57:49.532538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.372 qpair failed and we were unable to recover it.
00:37:41.372 [2024-11-17 02:57:49.532647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.372 [2024-11-17 02:57:49.532681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.372 qpair failed and we were unable to recover it.
00:37:41.372 [2024-11-17 02:57:49.532810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.372 [2024-11-17 02:57:49.532845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.372 qpair failed and we were unable to recover it.
00:37:41.372 [2024-11-17 02:57:49.533004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.372 [2024-11-17 02:57:49.533039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.372 qpair failed and we were unable to recover it.
00:37:41.372 [2024-11-17 02:57:49.533203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.372 [2024-11-17 02:57:49.533252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.372 qpair failed and we were unable to recover it.
00:37:41.372 [2024-11-17 02:57:49.533398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.372 [2024-11-17 02:57:49.533434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.372 qpair failed and we were unable to recover it.
00:37:41.372 [2024-11-17 02:57:49.533538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.372 [2024-11-17 02:57:49.533574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.372 qpair failed and we were unable to recover it.
00:37:41.372 [2024-11-17 02:57:49.533714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.372 [2024-11-17 02:57:49.533748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.372 qpair failed and we were unable to recover it.
00:37:41.373 [2024-11-17 02:57:49.533859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.373 [2024-11-17 02:57:49.533906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.373 qpair failed and we were unable to recover it.
00:37:41.373 [2024-11-17 02:57:49.534040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.373 [2024-11-17 02:57:49.534087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.373 qpair failed and we were unable to recover it.
00:37:41.373 [2024-11-17 02:57:49.534239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.373 [2024-11-17 02:57:49.534276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.373 qpair failed and we were unable to recover it.
00:37:41.373 [2024-11-17 02:57:49.534388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.373 [2024-11-17 02:57:49.534426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.373 qpair failed and we were unable to recover it.
00:37:41.373 [2024-11-17 02:57:49.534587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.373 [2024-11-17 02:57:49.534622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.373 qpair failed and we were unable to recover it.
00:37:41.373 [2024-11-17 02:57:49.534784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.373 [2024-11-17 02:57:49.534824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.373 qpair failed and we were unable to recover it.
00:37:41.373 [2024-11-17 02:57:49.534931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.373 [2024-11-17 02:57:49.534965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.373 qpair failed and we were unable to recover it.
00:37:41.373 [2024-11-17 02:57:49.535073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.373 [2024-11-17 02:57:49.535123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.373 qpair failed and we were unable to recover it.
00:37:41.373 [2024-11-17 02:57:49.535272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.373 [2024-11-17 02:57:49.535320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.373 qpair failed and we were unable to recover it.
00:37:41.373 [2024-11-17 02:57:49.535463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.373 [2024-11-17 02:57:49.535501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.373 qpair failed and we were unable to recover it.
00:37:41.373 [2024-11-17 02:57:49.535603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.373 [2024-11-17 02:57:49.535639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.373 qpair failed and we were unable to recover it.
00:37:41.373 [2024-11-17 02:57:49.535748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.373 [2024-11-17 02:57:49.535783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.373 qpair failed and we were unable to recover it.
00:37:41.373 [2024-11-17 02:57:49.535920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.373 [2024-11-17 02:57:49.535955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.373 qpair failed and we were unable to recover it.
00:37:41.373 [2024-11-17 02:57:49.536091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.373 [2024-11-17 02:57:49.536138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.373 qpair failed and we were unable to recover it.
00:37:41.373 [2024-11-17 02:57:49.536279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.373 [2024-11-17 02:57:49.536314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.373 qpair failed and we were unable to recover it.
00:37:41.373 [2024-11-17 02:57:49.536457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.373 [2024-11-17 02:57:49.536505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.373 qpair failed and we were unable to recover it.
00:37:41.373 [2024-11-17 02:57:49.536619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.373 [2024-11-17 02:57:49.536655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.373 qpair failed and we were unable to recover it.
00:37:41.373 [2024-11-17 02:57:49.536821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.373 [2024-11-17 02:57:49.536856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.373 qpair failed and we were unable to recover it.
00:37:41.373 [2024-11-17 02:57:49.536994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.373 [2024-11-17 02:57:49.537029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.373 qpair failed and we were unable to recover it.
00:37:41.373 [2024-11-17 02:57:49.537192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.373 [2024-11-17 02:57:49.537228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.373 qpair failed and we were unable to recover it.
00:37:41.373 [2024-11-17 02:57:49.537357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.373 [2024-11-17 02:57:49.537405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.373 qpair failed and we were unable to recover it.
00:37:41.373 [2024-11-17 02:57:49.537563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.373 [2024-11-17 02:57:49.537600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.373 qpair failed and we were unable to recover it.
00:37:41.373 [2024-11-17 02:57:49.537698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.373 [2024-11-17 02:57:49.537733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.373 qpair failed and we were unable to recover it.
00:37:41.373 [2024-11-17 02:57:49.537844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.373 [2024-11-17 02:57:49.537879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.373 qpair failed and we were unable to recover it.
00:37:41.373 [2024-11-17 02:57:49.538016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.373 [2024-11-17 02:57:49.538051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.373 qpair failed and we were unable to recover it.
00:37:41.373 [2024-11-17 02:57:49.538216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.373 [2024-11-17 02:57:49.538264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.373 qpair failed and we were unable to recover it.
00:37:41.373 [2024-11-17 02:57:49.538415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.373 [2024-11-17 02:57:49.538451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.373 qpair failed and we were unable to recover it.
00:37:41.373 [2024-11-17 02:57:49.538567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.373 [2024-11-17 02:57:49.538602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.373 qpair failed and we were unable to recover it.
00:37:41.373 [2024-11-17 02:57:49.538742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.373 [2024-11-17 02:57:49.538776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.373 qpair failed and we were unable to recover it.
00:37:41.373 [2024-11-17 02:57:49.538912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.373 [2024-11-17 02:57:49.538950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.373 qpair failed and we were unable to recover it.
00:37:41.373 [2024-11-17 02:57:49.539050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.373 [2024-11-17 02:57:49.539085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.373 qpair failed and we were unable to recover it.
00:37:41.373 [2024-11-17 02:57:49.539206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.373 [2024-11-17 02:57:49.539241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.373 qpair failed and we were unable to recover it.
00:37:41.373 [2024-11-17 02:57:49.539361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.373 [2024-11-17 02:57:49.539409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.374 qpair failed and we were unable to recover it.
00:37:41.374 [2024-11-17 02:57:49.539567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.374 [2024-11-17 02:57:49.539603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.374 qpair failed and we were unable to recover it.
00:37:41.374 [2024-11-17 02:57:49.539718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.374 [2024-11-17 02:57:49.539755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.374 qpair failed and we were unable to recover it.
00:37:41.374 [2024-11-17 02:57:49.539896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.374 [2024-11-17 02:57:49.539931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.374 qpair failed and we were unable to recover it.
00:37:41.374 [2024-11-17 02:57:49.540060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.374 [2024-11-17 02:57:49.540102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.374 qpair failed and we were unable to recover it.
00:37:41.374 [2024-11-17 02:57:49.540225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.374 [2024-11-17 02:57:49.540260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.374 qpair failed and we were unable to recover it. 00:37:41.374 [2024-11-17 02:57:49.540372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.374 [2024-11-17 02:57:49.540406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.374 qpair failed and we were unable to recover it. 00:37:41.374 [2024-11-17 02:57:49.540508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.374 [2024-11-17 02:57:49.540544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.374 qpair failed and we were unable to recover it. 00:37:41.374 [2024-11-17 02:57:49.540653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.374 [2024-11-17 02:57:49.540688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.374 qpair failed and we were unable to recover it. 00:37:41.374 [2024-11-17 02:57:49.540857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.374 [2024-11-17 02:57:49.540893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.374 qpair failed and we were unable to recover it. 
00:37:41.374 [2024-11-17 02:57:49.541000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.374 [2024-11-17 02:57:49.541034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.374 qpair failed and we were unable to recover it. 00:37:41.374 [2024-11-17 02:57:49.541204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.374 [2024-11-17 02:57:49.541253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.374 qpair failed and we were unable to recover it. 00:37:41.374 [2024-11-17 02:57:49.541402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.374 [2024-11-17 02:57:49.541439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.374 qpair failed and we were unable to recover it. 00:37:41.374 [2024-11-17 02:57:49.541560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.374 [2024-11-17 02:57:49.541593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.374 qpair failed and we were unable to recover it. 00:37:41.374 [2024-11-17 02:57:49.541737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.374 [2024-11-17 02:57:49.541772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.374 qpair failed and we were unable to recover it. 
00:37:41.374 [2024-11-17 02:57:49.541913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.374 [2024-11-17 02:57:49.541949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.374 qpair failed and we were unable to recover it. 00:37:41.374 [2024-11-17 02:57:49.542083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.374 [2024-11-17 02:57:49.542127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.374 qpair failed and we were unable to recover it. 00:37:41.374 [2024-11-17 02:57:49.542263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.374 [2024-11-17 02:57:49.542297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.374 qpair failed and we were unable to recover it. 00:37:41.374 [2024-11-17 02:57:49.542404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.374 [2024-11-17 02:57:49.542438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.374 qpair failed and we were unable to recover it. 00:37:41.374 [2024-11-17 02:57:49.542575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.374 [2024-11-17 02:57:49.542609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.374 qpair failed and we were unable to recover it. 
00:37:41.374 [2024-11-17 02:57:49.542742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.374 [2024-11-17 02:57:49.542778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.374 qpair failed and we were unable to recover it. 00:37:41.374 [2024-11-17 02:57:49.542942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.374 [2024-11-17 02:57:49.542978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.374 qpair failed and we were unable to recover it. 00:37:41.374 [2024-11-17 02:57:49.543084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.374 [2024-11-17 02:57:49.543127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.374 qpair failed and we were unable to recover it. 00:37:41.374 [2024-11-17 02:57:49.543266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.374 [2024-11-17 02:57:49.543299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.374 qpair failed and we were unable to recover it. 00:37:41.374 [2024-11-17 02:57:49.543402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.374 [2024-11-17 02:57:49.543437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.374 qpair failed and we were unable to recover it. 
00:37:41.374 [2024-11-17 02:57:49.543606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.374 [2024-11-17 02:57:49.543639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.374 qpair failed and we were unable to recover it. 00:37:41.374 [2024-11-17 02:57:49.543777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.374 [2024-11-17 02:57:49.543811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.374 qpair failed and we were unable to recover it. 00:37:41.374 [2024-11-17 02:57:49.543930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.374 [2024-11-17 02:57:49.543964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.374 qpair failed and we were unable to recover it. 00:37:41.374 [2024-11-17 02:57:49.544103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.374 [2024-11-17 02:57:49.544137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.374 qpair failed and we were unable to recover it. 00:37:41.374 [2024-11-17 02:57:49.544241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.374 [2024-11-17 02:57:49.544277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.374 qpair failed and we were unable to recover it. 
00:37:41.374 [2024-11-17 02:57:49.544383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.374 [2024-11-17 02:57:49.544418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.374 qpair failed and we were unable to recover it. 00:37:41.374 [2024-11-17 02:57:49.544552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.374 [2024-11-17 02:57:49.544587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.374 qpair failed and we were unable to recover it. 00:37:41.374 [2024-11-17 02:57:49.544716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.374 [2024-11-17 02:57:49.544751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.374 qpair failed and we were unable to recover it. 00:37:41.374 [2024-11-17 02:57:49.544868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.374 [2024-11-17 02:57:49.544904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.374 qpair failed and we were unable to recover it. 00:37:41.374 [2024-11-17 02:57:49.545069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.375 [2024-11-17 02:57:49.545113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.375 qpair failed and we were unable to recover it. 
00:37:41.375 [2024-11-17 02:57:49.545224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.375 [2024-11-17 02:57:49.545258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.375 qpair failed and we were unable to recover it. 00:37:41.375 [2024-11-17 02:57:49.545390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.375 [2024-11-17 02:57:49.545423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.375 qpair failed and we were unable to recover it. 00:37:41.375 [2024-11-17 02:57:49.545557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.375 [2024-11-17 02:57:49.545590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.375 qpair failed and we were unable to recover it. 00:37:41.375 [2024-11-17 02:57:49.545698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.375 [2024-11-17 02:57:49.545732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.375 qpair failed and we were unable to recover it. 00:37:41.375 [2024-11-17 02:57:49.545896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.375 [2024-11-17 02:57:49.545930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.375 qpair failed and we were unable to recover it. 
00:37:41.375 [2024-11-17 02:57:49.546060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.375 [2024-11-17 02:57:49.546122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.375 qpair failed and we were unable to recover it. 00:37:41.375 [2024-11-17 02:57:49.546265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.375 [2024-11-17 02:57:49.546303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.375 qpair failed and we were unable to recover it. 00:37:41.375 [2024-11-17 02:57:49.546441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.375 [2024-11-17 02:57:49.546478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.375 qpair failed and we were unable to recover it. 00:37:41.375 [2024-11-17 02:57:49.546643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.375 [2024-11-17 02:57:49.546678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.375 qpair failed and we were unable to recover it. 00:37:41.375 [2024-11-17 02:57:49.546787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.375 [2024-11-17 02:57:49.546821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.375 qpair failed and we were unable to recover it. 
00:37:41.375 [2024-11-17 02:57:49.546926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.375 [2024-11-17 02:57:49.546961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.375 qpair failed and we were unable to recover it. 00:37:41.375 [2024-11-17 02:57:49.547124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.375 [2024-11-17 02:57:49.547161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.375 qpair failed and we were unable to recover it. 00:37:41.375 [2024-11-17 02:57:49.547289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.375 [2024-11-17 02:57:49.547337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.375 qpair failed and we were unable to recover it. 00:37:41.375 [2024-11-17 02:57:49.547444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.375 [2024-11-17 02:57:49.547478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.375 qpair failed and we were unable to recover it. 00:37:41.375 [2024-11-17 02:57:49.547613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.375 [2024-11-17 02:57:49.547647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.375 qpair failed and we were unable to recover it. 
00:37:41.375 [2024-11-17 02:57:49.547808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.375 [2024-11-17 02:57:49.547842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.375 qpair failed and we were unable to recover it. 00:37:41.375 [2024-11-17 02:57:49.547982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.375 [2024-11-17 02:57:49.548020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.375 qpair failed and we were unable to recover it. 00:37:41.375 [2024-11-17 02:57:49.548139] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:37:41.375 [2024-11-17 02:57:49.548174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.375 [2024-11-17 02:57:49.548211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.375 qpair failed and we were unable to recover it. 00:37:41.375 [2024-11-17 02:57:49.548255] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:41.375 [2024-11-17 02:57:49.548351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.375 [2024-11-17 02:57:49.548417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.375 qpair failed and we were unable to recover it. 
00:37:41.375 [2024-11-17 02:57:49.548587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.375 [2024-11-17 02:57:49.548621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.375 qpair failed and we were unable to recover it. 00:37:41.375 [2024-11-17 02:57:49.548733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.375 [2024-11-17 02:57:49.548767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.375 qpair failed and we were unable to recover it. 00:37:41.375 [2024-11-17 02:57:49.548907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.375 [2024-11-17 02:57:49.548943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.375 qpair failed and we were unable to recover it. 00:37:41.375 [2024-11-17 02:57:49.549111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.375 [2024-11-17 02:57:49.549159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.375 qpair failed and we were unable to recover it. 00:37:41.375 [2024-11-17 02:57:49.549317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.375 [2024-11-17 02:57:49.549365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.375 qpair failed and we were unable to recover it. 
00:37:41.375 [2024-11-17 02:57:49.549518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.376 [2024-11-17 02:57:49.549554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.376 qpair failed and we were unable to recover it. 00:37:41.376 [2024-11-17 02:57:49.549715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.376 [2024-11-17 02:57:49.549749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.376 qpair failed and we were unable to recover it. 00:37:41.376 [2024-11-17 02:57:49.549910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.376 [2024-11-17 02:57:49.549944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.376 qpair failed and we were unable to recover it. 00:37:41.376 [2024-11-17 02:57:49.550048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.376 [2024-11-17 02:57:49.550082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.376 qpair failed and we were unable to recover it. 00:37:41.376 [2024-11-17 02:57:49.550218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.376 [2024-11-17 02:57:49.550266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.376 qpair failed and we were unable to recover it. 
00:37:41.376 [2024-11-17 02:57:49.550411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.376 [2024-11-17 02:57:49.550448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.376 qpair failed and we were unable to recover it. 00:37:41.376 [2024-11-17 02:57:49.550579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.376 [2024-11-17 02:57:49.550613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.376 qpair failed and we were unable to recover it. 00:37:41.376 [2024-11-17 02:57:49.550722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.376 [2024-11-17 02:57:49.550754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.376 qpair failed and we were unable to recover it. 00:37:41.376 [2024-11-17 02:57:49.550902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.376 [2024-11-17 02:57:49.550935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.376 qpair failed and we were unable to recover it. 00:37:41.376 [2024-11-17 02:57:49.551046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.376 [2024-11-17 02:57:49.551079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.376 qpair failed and we were unable to recover it. 
00:37:41.376 [2024-11-17 02:57:49.551221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.376 [2024-11-17 02:57:49.551257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.376 qpair failed and we were unable to recover it. 00:37:41.376 [2024-11-17 02:57:49.551386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.376 [2024-11-17 02:57:49.551435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.376 qpair failed and we were unable to recover it. 00:37:41.376 [2024-11-17 02:57:49.551560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.376 [2024-11-17 02:57:49.551597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.376 qpair failed and we were unable to recover it. 00:37:41.376 [2024-11-17 02:57:49.551731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.376 [2024-11-17 02:57:49.551765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.376 qpair failed and we were unable to recover it. 00:37:41.376 [2024-11-17 02:57:49.551860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.376 [2024-11-17 02:57:49.551894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.376 qpair failed and we were unable to recover it. 
00:37:41.376 [2024-11-17 02:57:49.552040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.376 [2024-11-17 02:57:49.552075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.376 qpair failed and we were unable to recover it. 00:37:41.376 [2024-11-17 02:57:49.552222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.376 [2024-11-17 02:57:49.552257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.376 qpair failed and we were unable to recover it. 00:37:41.376 [2024-11-17 02:57:49.552398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.376 [2024-11-17 02:57:49.552434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.376 qpair failed and we were unable to recover it. 00:37:41.376 [2024-11-17 02:57:49.552569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.376 [2024-11-17 02:57:49.552603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.376 qpair failed and we were unable to recover it. 00:37:41.376 [2024-11-17 02:57:49.552743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.376 [2024-11-17 02:57:49.552777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.376 qpair failed and we were unable to recover it. 
00:37:41.376 [2024-11-17 02:57:49.552899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.376 [2024-11-17 02:57:49.552932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.376 qpair failed and we were unable to recover it.
00:37:41.376 [2024-11-17 02:57:49.553037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.376 [2024-11-17 02:57:49.553069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.376 qpair failed and we were unable to recover it.
00:37:41.376 [2024-11-17 02:57:49.553229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.376 [2024-11-17 02:57:49.553266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.376 qpair failed and we were unable to recover it.
00:37:41.376 [2024-11-17 02:57:49.553377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.376 [2024-11-17 02:57:49.553414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.376 qpair failed and we were unable to recover it.
00:37:41.376 [2024-11-17 02:57:49.553577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.376 [2024-11-17 02:57:49.553610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.376 qpair failed and we were unable to recover it.
00:37:41.376 [2024-11-17 02:57:49.553737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.376 [2024-11-17 02:57:49.553771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.376 qpair failed and we were unable to recover it.
00:37:41.376 [2024-11-17 02:57:49.553876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.376 [2024-11-17 02:57:49.553911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.376 qpair failed and we were unable to recover it.
00:37:41.376 [2024-11-17 02:57:49.554071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.376 [2024-11-17 02:57:49.554113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.376 qpair failed and we were unable to recover it.
00:37:41.376 [2024-11-17 02:57:49.554255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.376 [2024-11-17 02:57:49.554300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.376 qpair failed and we were unable to recover it.
00:37:41.376 [2024-11-17 02:57:49.554415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.376 [2024-11-17 02:57:49.554451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.376 qpair failed and we were unable to recover it.
00:37:41.376 [2024-11-17 02:57:49.554581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.376 [2024-11-17 02:57:49.554615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.376 qpair failed and we were unable to recover it.
00:37:41.376 [2024-11-17 02:57:49.554747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.376 [2024-11-17 02:57:49.554782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.376 qpair failed and we were unable to recover it.
00:37:41.376 [2024-11-17 02:57:49.554893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.376 [2024-11-17 02:57:49.554927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.376 qpair failed and we were unable to recover it.
00:37:41.376 [2024-11-17 02:57:49.555032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.376 [2024-11-17 02:57:49.555073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.376 qpair failed and we were unable to recover it.
00:37:41.376 [2024-11-17 02:57:49.555216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.376 [2024-11-17 02:57:49.555250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.377 qpair failed and we were unable to recover it.
00:37:41.377 [2024-11-17 02:57:49.555388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.377 [2024-11-17 02:57:49.555422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.377 qpair failed and we were unable to recover it.
00:37:41.377 [2024-11-17 02:57:49.555582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.377 [2024-11-17 02:57:49.555616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.377 qpair failed and we were unable to recover it.
00:37:41.377 [2024-11-17 02:57:49.555761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.377 [2024-11-17 02:57:49.555795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.377 qpair failed and we were unable to recover it.
00:37:41.377 [2024-11-17 02:57:49.555930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.377 [2024-11-17 02:57:49.555966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.377 qpair failed and we were unable to recover it.
00:37:41.377 [2024-11-17 02:57:49.556074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.377 [2024-11-17 02:57:49.556113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.377 qpair failed and we were unable to recover it.
00:37:41.377 [2024-11-17 02:57:49.556220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.377 [2024-11-17 02:57:49.556253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.377 qpair failed and we were unable to recover it.
00:37:41.377 [2024-11-17 02:57:49.556418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.377 [2024-11-17 02:57:49.556452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.377 qpair failed and we were unable to recover it.
00:37:41.377 [2024-11-17 02:57:49.556592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.377 [2024-11-17 02:57:49.556627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.377 qpair failed and we were unable to recover it.
00:37:41.377 [2024-11-17 02:57:49.556725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.377 [2024-11-17 02:57:49.556766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.377 qpair failed and we were unable to recover it.
00:37:41.377 [2024-11-17 02:57:49.556900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.377 [2024-11-17 02:57:49.556935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.377 qpair failed and we were unable to recover it.
00:37:41.377 [2024-11-17 02:57:49.557057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.377 [2024-11-17 02:57:49.557114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.377 qpair failed and we were unable to recover it.
00:37:41.377 [2024-11-17 02:57:49.557278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.377 [2024-11-17 02:57:49.557325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.377 qpair failed and we were unable to recover it.
00:37:41.377 [2024-11-17 02:57:49.557484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.377 [2024-11-17 02:57:49.557521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.377 qpair failed and we were unable to recover it.
00:37:41.377 [2024-11-17 02:57:49.557656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.377 [2024-11-17 02:57:49.557690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.377 qpair failed and we were unable to recover it.
00:37:41.377 [2024-11-17 02:57:49.557835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.377 [2024-11-17 02:57:49.557868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.377 qpair failed and we were unable to recover it.
00:37:41.377 [2024-11-17 02:57:49.557978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.377 [2024-11-17 02:57:49.558026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.377 qpair failed and we were unable to recover it.
00:37:41.377 [2024-11-17 02:57:49.558155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.377 [2024-11-17 02:57:49.558192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.377 qpair failed and we were unable to recover it.
00:37:41.377 [2024-11-17 02:57:49.558302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.377 [2024-11-17 02:57:49.558336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.377 qpair failed and we were unable to recover it.
00:37:41.377 [2024-11-17 02:57:49.558500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.377 [2024-11-17 02:57:49.558534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.377 qpair failed and we were unable to recover it.
00:37:41.377 [2024-11-17 02:57:49.558635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.377 [2024-11-17 02:57:49.558670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.377 qpair failed and we were unable to recover it.
00:37:41.377 [2024-11-17 02:57:49.558851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.377 [2024-11-17 02:57:49.558900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.377 qpair failed and we were unable to recover it.
00:37:41.377 [2024-11-17 02:57:49.559057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.377 [2024-11-17 02:57:49.559094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.377 qpair failed and we were unable to recover it.
00:37:41.377 [2024-11-17 02:57:49.559205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.377 [2024-11-17 02:57:49.559238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.377 qpair failed and we were unable to recover it.
00:37:41.377 [2024-11-17 02:57:49.559334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.377 [2024-11-17 02:57:49.559366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.377 qpair failed and we were unable to recover it.
00:37:41.377 [2024-11-17 02:57:49.559501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.377 [2024-11-17 02:57:49.559533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.377 qpair failed and we were unable to recover it.
00:37:41.377 [2024-11-17 02:57:49.559649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.377 [2024-11-17 02:57:49.559681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.377 qpair failed and we were unable to recover it.
00:37:41.377 [2024-11-17 02:57:49.559789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.377 [2024-11-17 02:57:49.559824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.377 qpair failed and we were unable to recover it.
00:37:41.377 [2024-11-17 02:57:49.559977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.377 [2024-11-17 02:57:49.560026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.377 qpair failed and we were unable to recover it.
00:37:41.377 [2024-11-17 02:57:49.560182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.377 [2024-11-17 02:57:49.560221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.377 qpair failed and we were unable to recover it.
00:37:41.377 [2024-11-17 02:57:49.560361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.377 [2024-11-17 02:57:49.560396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.377 qpair failed and we were unable to recover it.
00:37:41.377 [2024-11-17 02:57:49.560530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.377 [2024-11-17 02:57:49.560565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.377 qpair failed and we were unable to recover it.
00:37:41.377 [2024-11-17 02:57:49.560703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.377 [2024-11-17 02:57:49.560737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.377 qpair failed and we were unable to recover it.
00:37:41.377 [2024-11-17 02:57:49.560843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.378 [2024-11-17 02:57:49.560877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.378 qpair failed and we were unable to recover it.
00:37:41.378 [2024-11-17 02:57:49.560981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.378 [2024-11-17 02:57:49.561017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.378 qpair failed and we were unable to recover it.
00:37:41.378 [2024-11-17 02:57:49.561147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.378 [2024-11-17 02:57:49.561196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.378 qpair failed and we were unable to recover it.
00:37:41.378 [2024-11-17 02:57:49.561315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.378 [2024-11-17 02:57:49.561352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.378 qpair failed and we were unable to recover it.
00:37:41.378 [2024-11-17 02:57:49.561472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.378 [2024-11-17 02:57:49.561508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.378 qpair failed and we were unable to recover it.
00:37:41.378 [2024-11-17 02:57:49.561647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.378 [2024-11-17 02:57:49.561682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.378 qpair failed and we were unable to recover it.
00:37:41.378 [2024-11-17 02:57:49.561784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.378 [2024-11-17 02:57:49.561825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.378 qpair failed and we were unable to recover it.
00:37:41.378 [2024-11-17 02:57:49.561991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.378 [2024-11-17 02:57:49.562039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.378 qpair failed and we were unable to recover it.
00:37:41.378 [2024-11-17 02:57:49.562167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.378 [2024-11-17 02:57:49.562204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.378 qpair failed and we were unable to recover it.
00:37:41.378 [2024-11-17 02:57:49.562312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.378 [2024-11-17 02:57:49.562346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.378 qpair failed and we were unable to recover it.
00:37:41.378 [2024-11-17 02:57:49.562514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.378 [2024-11-17 02:57:49.562548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.378 qpair failed and we were unable to recover it.
00:37:41.378 [2024-11-17 02:57:49.562686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.378 [2024-11-17 02:57:49.562720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.378 qpair failed and we were unable to recover it.
00:37:41.378 [2024-11-17 02:57:49.562869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.378 [2024-11-17 02:57:49.562917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.378 qpair failed and we were unable to recover it.
00:37:41.378 [2024-11-17 02:57:49.563031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.378 [2024-11-17 02:57:49.563067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.378 qpair failed and we were unable to recover it.
00:37:41.378 [2024-11-17 02:57:49.563197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.378 [2024-11-17 02:57:49.563245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.378 qpair failed and we were unable to recover it.
00:37:41.378 [2024-11-17 02:57:49.563394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.378 [2024-11-17 02:57:49.563429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.378 qpair failed and we were unable to recover it.
00:37:41.378 [2024-11-17 02:57:49.563562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.378 [2024-11-17 02:57:49.563596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.378 qpair failed and we were unable to recover it.
00:37:41.378 [2024-11-17 02:57:49.563732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.378 [2024-11-17 02:57:49.563767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.378 qpair failed and we were unable to recover it.
00:37:41.378 [2024-11-17 02:57:49.563874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.378 [2024-11-17 02:57:49.563908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.378 qpair failed and we were unable to recover it.
00:37:41.378 [2024-11-17 02:57:49.564077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.378 [2024-11-17 02:57:49.564126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.378 qpair failed and we were unable to recover it.
00:37:41.378 [2024-11-17 02:57:49.564263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.378 [2024-11-17 02:57:49.564311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.378 qpair failed and we were unable to recover it.
00:37:41.378 [2024-11-17 02:57:49.564487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.378 [2024-11-17 02:57:49.564524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.378 qpair failed and we were unable to recover it.
00:37:41.378 [2024-11-17 02:57:49.564658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.378 [2024-11-17 02:57:49.564693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.378 qpair failed and we were unable to recover it.
00:37:41.378 [2024-11-17 02:57:49.564800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.378 [2024-11-17 02:57:49.564836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.378 qpair failed and we were unable to recover it.
00:37:41.378 [2024-11-17 02:57:49.564955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.378 [2024-11-17 02:57:49.565003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.378 qpair failed and we were unable to recover it.
00:37:41.378 [2024-11-17 02:57:49.565165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.378 [2024-11-17 02:57:49.565201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.378 qpair failed and we were unable to recover it.
00:37:41.378 [2024-11-17 02:57:49.565332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.378 [2024-11-17 02:57:49.565380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.378 qpair failed and we were unable to recover it.
00:37:41.378 [2024-11-17 02:57:49.565563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.378 [2024-11-17 02:57:49.565599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.378 qpair failed and we were unable to recover it.
00:37:41.378 [2024-11-17 02:57:49.565712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.378 [2024-11-17 02:57:49.565748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.378 qpair failed and we were unable to recover it.
00:37:41.378 [2024-11-17 02:57:49.565858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.378 [2024-11-17 02:57:49.565893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.378 qpair failed and we were unable to recover it.
00:37:41.378 [2024-11-17 02:57:49.566006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.378 [2024-11-17 02:57:49.566043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.378 qpair failed and we were unable to recover it.
00:37:41.378 [2024-11-17 02:57:49.566195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.378 [2024-11-17 02:57:49.566233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.378 qpair failed and we were unable to recover it.
00:37:41.378 [2024-11-17 02:57:49.566345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.378 [2024-11-17 02:57:49.566378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.378 qpair failed and we were unable to recover it.
00:37:41.378 [2024-11-17 02:57:49.566548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.379 [2024-11-17 02:57:49.566581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.379 qpair failed and we were unable to recover it.
00:37:41.379 [2024-11-17 02:57:49.566691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.379 [2024-11-17 02:57:49.566725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.379 qpair failed and we were unable to recover it.
00:37:41.379 [2024-11-17 02:57:49.566856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.379 [2024-11-17 02:57:49.566888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.379 qpair failed and we were unable to recover it.
00:37:41.379 [2024-11-17 02:57:49.567036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.379 [2024-11-17 02:57:49.567069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.379 qpair failed and we were unable to recover it.
00:37:41.379 [2024-11-17 02:57:49.567247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.379 [2024-11-17 02:57:49.567297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.379 qpair failed and we were unable to recover it.
00:37:41.379 [2024-11-17 02:57:49.567425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.379 [2024-11-17 02:57:49.567489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.379 qpair failed and we were unable to recover it.
00:37:41.379 [2024-11-17 02:57:49.567629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.379 [2024-11-17 02:57:49.567666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.379 qpair failed and we were unable to recover it.
00:37:41.379 [2024-11-17 02:57:49.567804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.379 [2024-11-17 02:57:49.567839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.379 qpair failed and we were unable to recover it.
00:37:41.379 [2024-11-17 02:57:49.567971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.379 [2024-11-17 02:57:49.568005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.379 qpair failed and we were unable to recover it.
00:37:41.379 [2024-11-17 02:57:49.568131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.379 [2024-11-17 02:57:49.568180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.379 qpair failed and we were unable to recover it.
00:37:41.379 [2024-11-17 02:57:49.568320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.379 [2024-11-17 02:57:49.568355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.379 qpair failed and we were unable to recover it.
00:37:41.379 [2024-11-17 02:57:49.568483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.379 [2024-11-17 02:57:49.568516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.379 qpair failed and we were unable to recover it.
00:37:41.379 [2024-11-17 02:57:49.568626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.379 [2024-11-17 02:57:49.568660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.379 qpair failed and we were unable to recover it.
00:37:41.379 [2024-11-17 02:57:49.568774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.379 [2024-11-17 02:57:49.568812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.379 qpair failed and we were unable to recover it.
00:37:41.379 [2024-11-17 02:57:49.568985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.379 [2024-11-17 02:57:49.569019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.379 qpair failed and we were unable to recover it.
00:37:41.379 [2024-11-17 02:57:49.569137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.379 [2024-11-17 02:57:49.569175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.379 qpair failed and we were unable to recover it.
00:37:41.379 [2024-11-17 02:57:49.569280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.379 [2024-11-17 02:57:49.569315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.379 qpair failed and we were unable to recover it.
00:37:41.379 [2024-11-17 02:57:49.569451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.379 [2024-11-17 02:57:49.569487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.379 qpair failed and we were unable to recover it.
00:37:41.379 [2024-11-17 02:57:49.569621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.379 [2024-11-17 02:57:49.569656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.379 qpair failed and we were unable to recover it.
00:37:41.379 [2024-11-17 02:57:49.569803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.379 [2024-11-17 02:57:49.569850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.379 qpair failed and we were unable to recover it.
00:37:41.379 [2024-11-17 02:57:49.570003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.379 [2024-11-17 02:57:49.570052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.379 qpair failed and we were unable to recover it.
00:37:41.379 [2024-11-17 02:57:49.570209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.379 [2024-11-17 02:57:49.570245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.379 qpair failed and we were unable to recover it.
00:37:41.379 [2024-11-17 02:57:49.570382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.379 [2024-11-17 02:57:49.570415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.379 qpair failed and we were unable to recover it.
00:37:41.379 [2024-11-17 02:57:49.570526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.379 [2024-11-17 02:57:49.570561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.379 qpair failed and we were unable to recover it.
00:37:41.379 [2024-11-17 02:57:49.570685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.379 [2024-11-17 02:57:49.570718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.379 qpair failed and we were unable to recover it.
00:37:41.379 [2024-11-17 02:57:49.570846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.379 [2024-11-17 02:57:49.570880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.379 qpair failed and we were unable to recover it.
00:37:41.379 [2024-11-17 02:57:49.570980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.379 [2024-11-17 02:57:49.571014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.379 qpair failed and we were unable to recover it.
00:37:41.379 [2024-11-17 02:57:49.571156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.379 [2024-11-17 02:57:49.571191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.379 qpair failed and we were unable to recover it.
00:37:41.379 [2024-11-17 02:57:49.571331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.379 [2024-11-17 02:57:49.571368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.379 qpair failed and we were unable to recover it.
00:37:41.379 [2024-11-17 02:57:49.571470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.379 [2024-11-17 02:57:49.571505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.379 qpair failed and we were unable to recover it.
00:37:41.379 [2024-11-17 02:57:49.571631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.379 [2024-11-17 02:57:49.571665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.379 qpair failed and we were unable to recover it.
00:37:41.379 [2024-11-17 02:57:49.571792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.379 [2024-11-17 02:57:49.571827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.379 qpair failed and we were unable to recover it.
00:37:41.379 [2024-11-17 02:57:49.571952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.379 [2024-11-17 02:57:49.571986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.379 qpair failed and we were unable to recover it.
00:37:41.379 [2024-11-17 02:57:49.572113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.380 [2024-11-17 02:57:49.572162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.380 qpair failed and we were unable to recover it.
00:37:41.380 [2024-11-17 02:57:49.572301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.380 [2024-11-17 02:57:49.572335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.380 qpair failed and we were unable to recover it.
00:37:41.380 [2024-11-17 02:57:49.572450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.380 [2024-11-17 02:57:49.572483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.380 qpair failed and we were unable to recover it. 00:37:41.380 [2024-11-17 02:57:49.572591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.380 [2024-11-17 02:57:49.572625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.380 qpair failed and we were unable to recover it. 00:37:41.380 [2024-11-17 02:57:49.572787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.380 [2024-11-17 02:57:49.572821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.380 qpair failed and we were unable to recover it. 00:37:41.380 [2024-11-17 02:57:49.572953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.380 [2024-11-17 02:57:49.572986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.380 qpair failed and we were unable to recover it. 00:37:41.380 [2024-11-17 02:57:49.573092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.380 [2024-11-17 02:57:49.573133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.380 qpair failed and we were unable to recover it. 
00:37:41.380 [2024-11-17 02:57:49.573273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.380 [2024-11-17 02:57:49.573311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.380 qpair failed and we were unable to recover it. 00:37:41.380 [2024-11-17 02:57:49.573458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.380 [2024-11-17 02:57:49.573506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.380 qpair failed and we were unable to recover it. 00:37:41.380 [2024-11-17 02:57:49.573655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.380 [2024-11-17 02:57:49.573692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.380 qpair failed and we were unable to recover it. 00:37:41.380 [2024-11-17 02:57:49.573797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.380 [2024-11-17 02:57:49.573831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.380 qpair failed and we were unable to recover it. 00:37:41.380 [2024-11-17 02:57:49.573994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.380 [2024-11-17 02:57:49.574029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.380 qpair failed and we were unable to recover it. 
00:37:41.380 [2024-11-17 02:57:49.574187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.380 [2024-11-17 02:57:49.574222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.380 qpair failed and we were unable to recover it. 00:37:41.380 [2024-11-17 02:57:49.574351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.380 [2024-11-17 02:57:49.574385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.380 qpair failed and we were unable to recover it. 00:37:41.380 [2024-11-17 02:57:49.574522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.380 [2024-11-17 02:57:49.574556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.380 qpair failed and we were unable to recover it. 00:37:41.380 [2024-11-17 02:57:49.574664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.380 [2024-11-17 02:57:49.574697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.380 qpair failed and we were unable to recover it. 00:37:41.380 [2024-11-17 02:57:49.574799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.380 [2024-11-17 02:57:49.574833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.380 qpair failed and we were unable to recover it. 
00:37:41.380 [2024-11-17 02:57:49.574996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.380 [2024-11-17 02:57:49.575033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.380 qpair failed and we were unable to recover it. 00:37:41.380 [2024-11-17 02:57:49.575224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.380 [2024-11-17 02:57:49.575273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.380 qpair failed and we were unable to recover it. 00:37:41.380 [2024-11-17 02:57:49.575396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.380 [2024-11-17 02:57:49.575430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.380 qpair failed and we were unable to recover it. 00:37:41.380 [2024-11-17 02:57:49.575585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.380 [2024-11-17 02:57:49.575625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.380 qpair failed and we were unable to recover it. 00:37:41.380 [2024-11-17 02:57:49.575764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.380 [2024-11-17 02:57:49.575799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.380 qpair failed and we were unable to recover it. 
00:37:41.380 [2024-11-17 02:57:49.575892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.380 [2024-11-17 02:57:49.575926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.380 qpair failed and we were unable to recover it. 00:37:41.380 [2024-11-17 02:57:49.576089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.380 [2024-11-17 02:57:49.576132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.380 qpair failed and we were unable to recover it. 00:37:41.380 [2024-11-17 02:57:49.576261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.380 [2024-11-17 02:57:49.576310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.380 qpair failed and we were unable to recover it. 00:37:41.380 [2024-11-17 02:57:49.576487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.380 [2024-11-17 02:57:49.576526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.380 qpair failed and we were unable to recover it. 00:37:41.380 [2024-11-17 02:57:49.576643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.380 [2024-11-17 02:57:49.576679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.380 qpair failed and we were unable to recover it. 
00:37:41.380 [2024-11-17 02:57:49.576818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.380 [2024-11-17 02:57:49.576853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.380 qpair failed and we were unable to recover it. 00:37:41.380 [2024-11-17 02:57:49.576957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.380 [2024-11-17 02:57:49.576992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.380 qpair failed and we were unable to recover it. 00:37:41.380 [2024-11-17 02:57:49.577112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.381 [2024-11-17 02:57:49.577147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.381 qpair failed and we were unable to recover it. 00:37:41.381 [2024-11-17 02:57:49.577276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.381 [2024-11-17 02:57:49.577311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.381 qpair failed and we were unable to recover it. 00:37:41.381 [2024-11-17 02:57:49.577434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.381 [2024-11-17 02:57:49.577468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.381 qpair failed and we were unable to recover it. 
00:37:41.381 [2024-11-17 02:57:49.577626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.381 [2024-11-17 02:57:49.577674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.381 qpair failed and we were unable to recover it. 00:37:41.381 [2024-11-17 02:57:49.577777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.381 [2024-11-17 02:57:49.577813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.381 qpair failed and we were unable to recover it. 00:37:41.381 [2024-11-17 02:57:49.577975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.381 [2024-11-17 02:57:49.578014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.381 qpair failed and we were unable to recover it. 00:37:41.381 [2024-11-17 02:57:49.578125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.381 [2024-11-17 02:57:49.578162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.381 qpair failed and we were unable to recover it. 00:37:41.381 [2024-11-17 02:57:49.578271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.381 [2024-11-17 02:57:49.578307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.381 qpair failed and we were unable to recover it. 
00:37:41.381 [2024-11-17 02:57:49.578418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.381 [2024-11-17 02:57:49.578452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.381 qpair failed and we were unable to recover it. 00:37:41.381 [2024-11-17 02:57:49.578614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.381 [2024-11-17 02:57:49.578649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.381 qpair failed and we were unable to recover it. 00:37:41.381 [2024-11-17 02:57:49.578759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.381 [2024-11-17 02:57:49.578793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.381 qpair failed and we were unable to recover it. 00:37:41.381 [2024-11-17 02:57:49.578902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.381 [2024-11-17 02:57:49.578937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.381 qpair failed and we were unable to recover it. 00:37:41.381 [2024-11-17 02:57:49.579039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.381 [2024-11-17 02:57:49.579076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.381 qpair failed and we were unable to recover it. 
00:37:41.381 [2024-11-17 02:57:49.579218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.381 [2024-11-17 02:57:49.579252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.381 qpair failed and we were unable to recover it. 00:37:41.381 [2024-11-17 02:57:49.579364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.381 [2024-11-17 02:57:49.579401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.381 qpair failed and we were unable to recover it. 00:37:41.381 [2024-11-17 02:57:49.579538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.381 [2024-11-17 02:57:49.579574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.381 qpair failed and we were unable to recover it. 00:37:41.381 [2024-11-17 02:57:49.579684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.381 [2024-11-17 02:57:49.579719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.381 qpair failed and we were unable to recover it. 00:37:41.381 [2024-11-17 02:57:49.579835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.381 [2024-11-17 02:57:49.579870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.381 qpair failed and we were unable to recover it. 
00:37:41.381 [2024-11-17 02:57:49.580008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.381 [2024-11-17 02:57:49.580045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.381 qpair failed and we were unable to recover it. 00:37:41.381 [2024-11-17 02:57:49.580191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.381 [2024-11-17 02:57:49.580226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.381 qpair failed and we were unable to recover it. 00:37:41.381 [2024-11-17 02:57:49.580364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.381 [2024-11-17 02:57:49.580397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.381 qpair failed and we were unable to recover it. 00:37:41.381 [2024-11-17 02:57:49.580531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.381 [2024-11-17 02:57:49.580565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.381 qpair failed and we were unable to recover it. 00:37:41.381 [2024-11-17 02:57:49.580701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.381 [2024-11-17 02:57:49.580737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.381 qpair failed and we were unable to recover it. 
00:37:41.381 [2024-11-17 02:57:49.580872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.381 [2024-11-17 02:57:49.580907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.381 qpair failed and we were unable to recover it. 00:37:41.381 [2024-11-17 02:57:49.581072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.381 [2024-11-17 02:57:49.581124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.381 qpair failed and we were unable to recover it. 00:37:41.381 [2024-11-17 02:57:49.581295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.381 [2024-11-17 02:57:49.581343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.381 qpair failed and we were unable to recover it. 00:37:41.381 [2024-11-17 02:57:49.581533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.381 [2024-11-17 02:57:49.581568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.381 qpair failed and we were unable to recover it. 00:37:41.381 [2024-11-17 02:57:49.581675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.381 [2024-11-17 02:57:49.581709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.381 qpair failed and we were unable to recover it. 
00:37:41.381 [2024-11-17 02:57:49.581874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.381 [2024-11-17 02:57:49.581908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.381 qpair failed and we were unable to recover it. 00:37:41.381 [2024-11-17 02:57:49.582008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.381 [2024-11-17 02:57:49.582041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.381 qpair failed and we were unable to recover it. 00:37:41.381 [2024-11-17 02:57:49.582178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.381 [2024-11-17 02:57:49.582227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.381 qpair failed and we were unable to recover it. 00:37:41.381 [2024-11-17 02:57:49.582414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.381 [2024-11-17 02:57:49.582468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.381 qpair failed and we were unable to recover it. 00:37:41.381 [2024-11-17 02:57:49.582609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.381 [2024-11-17 02:57:49.582645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.381 qpair failed and we were unable to recover it. 
00:37:41.381 [2024-11-17 02:57:49.582765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.382 [2024-11-17 02:57:49.582799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.382 qpair failed and we were unable to recover it. 00:37:41.382 [2024-11-17 02:57:49.582904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.382 [2024-11-17 02:57:49.582939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.382 qpair failed and we were unable to recover it. 00:37:41.382 [2024-11-17 02:57:49.583117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.382 [2024-11-17 02:57:49.583152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.382 qpair failed and we were unable to recover it. 00:37:41.382 [2024-11-17 02:57:49.583251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.382 [2024-11-17 02:57:49.583286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.382 qpair failed and we were unable to recover it. 00:37:41.382 [2024-11-17 02:57:49.583428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.382 [2024-11-17 02:57:49.583466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.382 qpair failed and we were unable to recover it. 
00:37:41.382 [2024-11-17 02:57:49.583635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.382 [2024-11-17 02:57:49.583671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.382 qpair failed and we were unable to recover it. 00:37:41.382 [2024-11-17 02:57:49.583810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.382 [2024-11-17 02:57:49.583845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.382 qpair failed and we were unable to recover it. 00:37:41.382 [2024-11-17 02:57:49.583952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.382 [2024-11-17 02:57:49.583986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.382 qpair failed and we were unable to recover it. 00:37:41.382 [2024-11-17 02:57:49.584132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.382 [2024-11-17 02:57:49.584180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.382 qpair failed and we were unable to recover it. 00:37:41.382 [2024-11-17 02:57:49.584335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.382 [2024-11-17 02:57:49.584384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.382 qpair failed and we were unable to recover it. 
00:37:41.382 [2024-11-17 02:57:49.584511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.382 [2024-11-17 02:57:49.584547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.382 qpair failed and we were unable to recover it. 00:37:41.382 [2024-11-17 02:57:49.584711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.382 [2024-11-17 02:57:49.584745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.382 qpair failed and we were unable to recover it. 00:37:41.382 [2024-11-17 02:57:49.584887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.382 [2024-11-17 02:57:49.584921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.382 qpair failed and we were unable to recover it. 00:37:41.382 [2024-11-17 02:57:49.585043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.382 [2024-11-17 02:57:49.585092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.382 qpair failed and we were unable to recover it. 00:37:41.382 [2024-11-17 02:57:49.585224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.382 [2024-11-17 02:57:49.585261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.382 qpair failed and we were unable to recover it. 
00:37:41.382 [2024-11-17 02:57:49.585448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.382 [2024-11-17 02:57:49.585486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.382 qpair failed and we were unable to recover it.
00:37:41.386 [the same three-line sequence — posix_sock_create connect() failed with errno = 111 (ECONNREFUSED), nvme_tcp_qpair_connect_sock sock connection error with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it." — repeats 114 more times between 02:57:49.585592 and 02:57:49.604959, cycling over tqpairs 0x6150001f2f00, 0x6150001ffe80, 0x615000210000 and 0x61500021ff00]
00:37:41.386 [2024-11-17 02:57:49.605070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.386 [2024-11-17 02:57:49.605117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.386 qpair failed and we were unable to recover it. 00:37:41.386 [2024-11-17 02:57:49.605251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.386 [2024-11-17 02:57:49.605285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.386 qpair failed and we were unable to recover it. 00:37:41.386 [2024-11-17 02:57:49.605408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.386 [2024-11-17 02:57:49.605442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.386 qpair failed and we were unable to recover it. 00:37:41.386 [2024-11-17 02:57:49.605541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.386 [2024-11-17 02:57:49.605586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.386 qpair failed and we were unable to recover it. 00:37:41.386 [2024-11-17 02:57:49.605684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.386 [2024-11-17 02:57:49.605718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.386 qpair failed and we were unable to recover it. 
00:37:41.386 [2024-11-17 02:57:49.605825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.386 [2024-11-17 02:57:49.605859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.386 qpair failed and we were unable to recover it. 00:37:41.386 [2024-11-17 02:57:49.606009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.386 [2024-11-17 02:57:49.606059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.386 qpair failed and we were unable to recover it. 00:37:41.386 [2024-11-17 02:57:49.606207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.386 [2024-11-17 02:57:49.606244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.386 qpair failed and we were unable to recover it. 00:37:41.386 [2024-11-17 02:57:49.606378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.386 [2024-11-17 02:57:49.606412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.386 qpair failed and we were unable to recover it. 00:37:41.386 [2024-11-17 02:57:49.606545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.386 [2024-11-17 02:57:49.606579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.386 qpair failed and we were unable to recover it. 
00:37:41.386 [2024-11-17 02:57:49.606715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.386 [2024-11-17 02:57:49.606749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.386 qpair failed and we were unable to recover it. 00:37:41.386 [2024-11-17 02:57:49.606852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.386 [2024-11-17 02:57:49.606886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.386 qpair failed and we were unable to recover it. 00:37:41.386 [2024-11-17 02:57:49.607030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.386 [2024-11-17 02:57:49.607065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.386 qpair failed and we were unable to recover it. 00:37:41.386 [2024-11-17 02:57:49.607198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.386 [2024-11-17 02:57:49.607248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.386 qpair failed and we were unable to recover it. 00:37:41.386 [2024-11-17 02:57:49.607402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.386 [2024-11-17 02:57:49.607450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.386 qpair failed and we were unable to recover it. 
00:37:41.386 [2024-11-17 02:57:49.607604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.386 [2024-11-17 02:57:49.607645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.386 qpair failed and we were unable to recover it. 00:37:41.386 [2024-11-17 02:57:49.607778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.386 [2024-11-17 02:57:49.607824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.386 qpair failed and we were unable to recover it. 00:37:41.386 [2024-11-17 02:57:49.607984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.386 [2024-11-17 02:57:49.608018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.386 qpair failed and we were unable to recover it. 00:37:41.386 [2024-11-17 02:57:49.608131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.386 [2024-11-17 02:57:49.608165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.386 qpair failed and we were unable to recover it. 00:37:41.387 [2024-11-17 02:57:49.608299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.387 [2024-11-17 02:57:49.608332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.387 qpair failed and we were unable to recover it. 
00:37:41.387 [2024-11-17 02:57:49.608494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.387 [2024-11-17 02:57:49.608528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.387 qpair failed and we were unable to recover it. 00:37:41.387 [2024-11-17 02:57:49.608674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.387 [2024-11-17 02:57:49.608712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.387 qpair failed and we were unable to recover it. 00:37:41.387 [2024-11-17 02:57:49.608852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.387 [2024-11-17 02:57:49.608889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.387 qpair failed and we were unable to recover it. 00:37:41.387 [2024-11-17 02:57:49.609047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.387 [2024-11-17 02:57:49.609092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.387 qpair failed and we were unable to recover it. 00:37:41.387 [2024-11-17 02:57:49.609218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.387 [2024-11-17 02:57:49.609252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.387 qpair failed and we were unable to recover it. 
00:37:41.387 [2024-11-17 02:57:49.609355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.387 [2024-11-17 02:57:49.609389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.387 qpair failed and we were unable to recover it. 00:37:41.387 [2024-11-17 02:57:49.609503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.387 [2024-11-17 02:57:49.609537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.387 qpair failed and we were unable to recover it. 00:37:41.387 [2024-11-17 02:57:49.609691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.387 [2024-11-17 02:57:49.609727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.387 qpair failed and we were unable to recover it. 00:37:41.387 [2024-11-17 02:57:49.609838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.387 [2024-11-17 02:57:49.609872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.387 qpair failed and we were unable to recover it. 00:37:41.387 [2024-11-17 02:57:49.610014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.387 [2024-11-17 02:57:49.610047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.387 qpair failed and we were unable to recover it. 
00:37:41.387 [2024-11-17 02:57:49.610196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.387 [2024-11-17 02:57:49.610231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.387 qpair failed and we were unable to recover it. 00:37:41.387 [2024-11-17 02:57:49.610387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.387 [2024-11-17 02:57:49.610435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.387 qpair failed and we were unable to recover it. 00:37:41.387 [2024-11-17 02:57:49.610542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.387 [2024-11-17 02:57:49.610578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.387 qpair failed and we were unable to recover it. 00:37:41.387 [2024-11-17 02:57:49.610709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.387 [2024-11-17 02:57:49.610743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.387 qpair failed and we were unable to recover it. 00:37:41.387 [2024-11-17 02:57:49.610854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.387 [2024-11-17 02:57:49.610889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.387 qpair failed and we were unable to recover it. 
00:37:41.387 [2024-11-17 02:57:49.611037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.387 [2024-11-17 02:57:49.611086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.387 qpair failed and we were unable to recover it. 00:37:41.387 [2024-11-17 02:57:49.611244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.387 [2024-11-17 02:57:49.611280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.387 qpair failed and we were unable to recover it. 00:37:41.387 [2024-11-17 02:57:49.611412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.387 [2024-11-17 02:57:49.611445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.387 qpair failed and we were unable to recover it. 00:37:41.387 [2024-11-17 02:57:49.611575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.387 [2024-11-17 02:57:49.611609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.387 qpair failed and we were unable to recover it. 00:37:41.387 [2024-11-17 02:57:49.611743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.387 [2024-11-17 02:57:49.611778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.387 qpair failed and we were unable to recover it. 
00:37:41.387 [2024-11-17 02:57:49.611888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.387 [2024-11-17 02:57:49.611923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.387 qpair failed and we were unable to recover it. 00:37:41.387 [2024-11-17 02:57:49.612065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.387 [2024-11-17 02:57:49.612122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.387 qpair failed and we were unable to recover it. 00:37:41.387 [2024-11-17 02:57:49.612294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.387 [2024-11-17 02:57:49.612343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.387 qpair failed and we were unable to recover it. 00:37:41.387 [2024-11-17 02:57:49.612488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.387 [2024-11-17 02:57:49.612525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.387 qpair failed and we were unable to recover it. 00:37:41.387 [2024-11-17 02:57:49.612636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.387 [2024-11-17 02:57:49.612671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.387 qpair failed and we were unable to recover it. 
00:37:41.387 [2024-11-17 02:57:49.612779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.387 [2024-11-17 02:57:49.612813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.387 qpair failed and we were unable to recover it. 00:37:41.387 [2024-11-17 02:57:49.612967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.387 [2024-11-17 02:57:49.613016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.387 qpair failed and we were unable to recover it. 00:37:41.387 [2024-11-17 02:57:49.613180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.387 [2024-11-17 02:57:49.613216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.387 qpair failed and we were unable to recover it. 00:37:41.387 [2024-11-17 02:57:49.613361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.387 [2024-11-17 02:57:49.613405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.387 qpair failed and we were unable to recover it. 00:37:41.387 [2024-11-17 02:57:49.613544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.387 [2024-11-17 02:57:49.613579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.387 qpair failed and we were unable to recover it. 
00:37:41.387 [2024-11-17 02:57:49.613715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.387 [2024-11-17 02:57:49.613751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.387 qpair failed and we were unable to recover it. 00:37:41.387 [2024-11-17 02:57:49.613891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.387 [2024-11-17 02:57:49.613927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.388 qpair failed and we were unable to recover it. 00:37:41.388 [2024-11-17 02:57:49.614057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.388 [2024-11-17 02:57:49.614091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.388 qpair failed and we were unable to recover it. 00:37:41.388 [2024-11-17 02:57:49.614218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.388 [2024-11-17 02:57:49.614252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.388 qpair failed and we were unable to recover it. 00:37:41.388 [2024-11-17 02:57:49.614364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.388 [2024-11-17 02:57:49.614399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.388 qpair failed and we were unable to recover it. 
00:37:41.388 [2024-11-17 02:57:49.614538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.388 [2024-11-17 02:57:49.614577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.388 qpair failed and we were unable to recover it. 00:37:41.388 [2024-11-17 02:57:49.614755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.388 [2024-11-17 02:57:49.614803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.388 qpair failed and we were unable to recover it. 00:37:41.388 [2024-11-17 02:57:49.614906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.388 [2024-11-17 02:57:49.614942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.388 qpair failed and we were unable to recover it. 00:37:41.388 [2024-11-17 02:57:49.615046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.388 [2024-11-17 02:57:49.615081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.388 qpair failed and we were unable to recover it. 00:37:41.388 [2024-11-17 02:57:49.615193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.388 [2024-11-17 02:57:49.615229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.388 qpair failed and we were unable to recover it. 
00:37:41.388 [2024-11-17 02:57:49.615364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.388 [2024-11-17 02:57:49.615399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.388 qpair failed and we were unable to recover it. 00:37:41.388 [2024-11-17 02:57:49.615503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.388 [2024-11-17 02:57:49.615538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.388 qpair failed and we were unable to recover it. 00:37:41.388 [2024-11-17 02:57:49.615648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.388 [2024-11-17 02:57:49.615684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.388 qpair failed and we were unable to recover it. 00:37:41.388 [2024-11-17 02:57:49.615854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.388 [2024-11-17 02:57:49.615892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.388 qpair failed and we were unable to recover it. 00:37:41.388 [2024-11-17 02:57:49.616002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.388 [2024-11-17 02:57:49.616036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.388 qpair failed and we were unable to recover it. 
00:37:41.388 [2024-11-17 02:57:49.616205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.388 [2024-11-17 02:57:49.616241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.388 qpair failed and we were unable to recover it. 00:37:41.388 [2024-11-17 02:57:49.616351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.388 [2024-11-17 02:57:49.616388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.388 qpair failed and we were unable to recover it. 00:37:41.388 [2024-11-17 02:57:49.616499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.388 [2024-11-17 02:57:49.616533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.388 qpair failed and we were unable to recover it. 00:37:41.388 [2024-11-17 02:57:49.616664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.388 [2024-11-17 02:57:49.616699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.388 qpair failed and we were unable to recover it. 00:37:41.388 [2024-11-17 02:57:49.616839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.388 [2024-11-17 02:57:49.616874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.388 qpair failed and we were unable to recover it. 
00:37:41.388 [2024-11-17 02:57:49.616982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.388 [2024-11-17 02:57:49.617017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.388 qpair failed and we were unable to recover it. 00:37:41.388 [2024-11-17 02:57:49.617161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.388 [2024-11-17 02:57:49.617196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.388 qpair failed and we were unable to recover it. 00:37:41.388 [2024-11-17 02:57:49.617334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.388 [2024-11-17 02:57:49.617369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.388 qpair failed and we were unable to recover it. 00:37:41.388 [2024-11-17 02:57:49.617519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.388 [2024-11-17 02:57:49.617554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.388 qpair failed and we were unable to recover it. 00:37:41.388 [2024-11-17 02:57:49.617714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.388 [2024-11-17 02:57:49.617749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.388 qpair failed and we were unable to recover it. 
00:37:41.388 [2024-11-17 02:57:49.617855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.388 [2024-11-17 02:57:49.617890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.388 qpair failed and we were unable to recover it.
00:37:41.388 [2024-11-17 02:57:49.618031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.388 [2024-11-17 02:57:49.618066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.388 qpair failed and we were unable to recover it.
00:37:41.388 [2024-11-17 02:57:49.618205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.388 [2024-11-17 02:57:49.618253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.388 qpair failed and we were unable to recover it.
00:37:41.388 [2024-11-17 02:57:49.618374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.388 [2024-11-17 02:57:49.618409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.388 qpair failed and we were unable to recover it.
00:37:41.388 [2024-11-17 02:57:49.618514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.388 [2024-11-17 02:57:49.618549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.388 qpair failed and we were unable to recover it.
00:37:41.388 [2024-11-17 02:57:49.618662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.388 [2024-11-17 02:57:49.618696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.388 qpair failed and we were unable to recover it.
00:37:41.388 [2024-11-17 02:57:49.618823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.388 [2024-11-17 02:57:49.618857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.388 qpair failed and we were unable to recover it.
00:37:41.388 [2024-11-17 02:57:49.619026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.388 [2024-11-17 02:57:49.619060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.388 qpair failed and we were unable to recover it.
00:37:41.388 [2024-11-17 02:57:49.619206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.388 [2024-11-17 02:57:49.619242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.388 qpair failed and we were unable to recover it.
00:37:41.389 [2024-11-17 02:57:49.619356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.389 [2024-11-17 02:57:49.619394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.389 qpair failed and we were unable to recover it.
00:37:41.389 [2024-11-17 02:57:49.619558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.389 [2024-11-17 02:57:49.619592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.389 qpair failed and we were unable to recover it.
00:37:41.389 [2024-11-17 02:57:49.619727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.389 [2024-11-17 02:57:49.619761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.389 qpair failed and we were unable to recover it.
00:37:41.389 [2024-11-17 02:57:49.619921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.389 [2024-11-17 02:57:49.619955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.389 qpair failed and we were unable to recover it.
00:37:41.389 [2024-11-17 02:57:49.620071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.389 [2024-11-17 02:57:49.620130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.389 qpair failed and we were unable to recover it.
00:37:41.389 [2024-11-17 02:57:49.620275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.389 [2024-11-17 02:57:49.620310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.389 qpair failed and we were unable to recover it.
00:37:41.389 [2024-11-17 02:57:49.620448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.389 [2024-11-17 02:57:49.620484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.389 qpair failed and we were unable to recover it.
00:37:41.389 [2024-11-17 02:57:49.620622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.389 [2024-11-17 02:57:49.620656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.389 qpair failed and we were unable to recover it.
00:37:41.389 [2024-11-17 02:57:49.620757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.389 [2024-11-17 02:57:49.620791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.389 qpair failed and we were unable to recover it.
00:37:41.389 [2024-11-17 02:57:49.620953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.389 [2024-11-17 02:57:49.620988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.389 qpair failed and we were unable to recover it.
00:37:41.389 [2024-11-17 02:57:49.621110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.389 [2024-11-17 02:57:49.621159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.389 qpair failed and we were unable to recover it.
00:37:41.389 [2024-11-17 02:57:49.621304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.389 [2024-11-17 02:57:49.621349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.389 qpair failed and we were unable to recover it.
00:37:41.389 [2024-11-17 02:57:49.621476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.389 [2024-11-17 02:57:49.621511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.389 qpair failed and we were unable to recover it.
00:37:41.389 [2024-11-17 02:57:49.621651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.389 [2024-11-17 02:57:49.621686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.389 qpair failed and we were unable to recover it.
00:37:41.389 [2024-11-17 02:57:49.621822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.389 [2024-11-17 02:57:49.621857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.389 qpair failed and we were unable to recover it.
00:37:41.389 [2024-11-17 02:57:49.621963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.389 [2024-11-17 02:57:49.621998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.389 qpair failed and we were unable to recover it.
00:37:41.389 [2024-11-17 02:57:49.622117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.389 [2024-11-17 02:57:49.622153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.389 qpair failed and we were unable to recover it.
00:37:41.389 [2024-11-17 02:57:49.622270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.389 [2024-11-17 02:57:49.622318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.389 qpair failed and we were unable to recover it.
00:37:41.389 [2024-11-17 02:57:49.622435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.389 [2024-11-17 02:57:49.622470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.389 qpair failed and we were unable to recover it.
00:37:41.389 [2024-11-17 02:57:49.622590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.389 [2024-11-17 02:57:49.622625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.389 qpair failed and we were unable to recover it.
00:37:41.389 [2024-11-17 02:57:49.622758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.389 [2024-11-17 02:57:49.622792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.389 qpair failed and we were unable to recover it.
00:37:41.389 [2024-11-17 02:57:49.622899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.389 [2024-11-17 02:57:49.622933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.389 qpair failed and we were unable to recover it.
00:37:41.389 [2024-11-17 02:57:49.623043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.389 [2024-11-17 02:57:49.623080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.389 qpair failed and we were unable to recover it.
00:37:41.389 [2024-11-17 02:57:49.623259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.389 [2024-11-17 02:57:49.623294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.389 qpair failed and we were unable to recover it.
00:37:41.389 [2024-11-17 02:57:49.623442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.389 [2024-11-17 02:57:49.623490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.389 qpair failed and we were unable to recover it.
00:37:41.389 [2024-11-17 02:57:49.623618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.389 [2024-11-17 02:57:49.623656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.389 qpair failed and we were unable to recover it.
00:37:41.389 [2024-11-17 02:57:49.623764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.389 [2024-11-17 02:57:49.623800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.389 qpair failed and we were unable to recover it.
00:37:41.389 [2024-11-17 02:57:49.623915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.389 [2024-11-17 02:57:49.623949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.389 qpair failed and we were unable to recover it.
00:37:41.389 [2024-11-17 02:57:49.624055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.389 [2024-11-17 02:57:49.624091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.389 qpair failed and we were unable to recover it.
00:37:41.389 [2024-11-17 02:57:49.624279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.389 [2024-11-17 02:57:49.624328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.389 qpair failed and we were unable to recover it.
00:37:41.389 [2024-11-17 02:57:49.624465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.389 [2024-11-17 02:57:49.624502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.389 qpair failed and we were unable to recover it.
00:37:41.389 [2024-11-17 02:57:49.624613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.389 [2024-11-17 02:57:49.624648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.389 qpair failed and we were unable to recover it.
00:37:41.389 [2024-11-17 02:57:49.624822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.389 [2024-11-17 02:57:49.624858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.389 qpair failed and we were unable to recover it.
00:37:41.390 [2024-11-17 02:57:49.624972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.390 [2024-11-17 02:57:49.625009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.390 qpair failed and we were unable to recover it.
00:37:41.390 [2024-11-17 02:57:49.625173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.390 [2024-11-17 02:57:49.625208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.390 qpair failed and we were unable to recover it.
00:37:41.390 [2024-11-17 02:57:49.625359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.390 [2024-11-17 02:57:49.625407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.390 qpair failed and we were unable to recover it.
00:37:41.390 [2024-11-17 02:57:49.625554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.390 [2024-11-17 02:57:49.625590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.390 qpair failed and we were unable to recover it.
00:37:41.390 [2024-11-17 02:57:49.625753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.390 [2024-11-17 02:57:49.625787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.390 qpair failed and we were unable to recover it.
00:37:41.390 [2024-11-17 02:57:49.625902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.390 [2024-11-17 02:57:49.625938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.390 qpair failed and we were unable to recover it.
00:37:41.390 [2024-11-17 02:57:49.626054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.390 [2024-11-17 02:57:49.626090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.390 qpair failed and we were unable to recover it.
00:37:41.390 [2024-11-17 02:57:49.626233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.390 [2024-11-17 02:57:49.626268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.390 qpair failed and we were unable to recover it.
00:37:41.390 [2024-11-17 02:57:49.626431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.390 [2024-11-17 02:57:49.626465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.390 qpair failed and we were unable to recover it.
00:37:41.390 [2024-11-17 02:57:49.626573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.390 [2024-11-17 02:57:49.626608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.390 qpair failed and we were unable to recover it.
00:37:41.390 [2024-11-17 02:57:49.626745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.390 [2024-11-17 02:57:49.626779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.390 qpair failed and we were unable to recover it.
00:37:41.390 [2024-11-17 02:57:49.626877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.390 [2024-11-17 02:57:49.626912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.390 qpair failed and we were unable to recover it.
00:37:41.390 [2024-11-17 02:57:49.627055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.390 [2024-11-17 02:57:49.627091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.390 qpair failed and we were unable to recover it.
00:37:41.390 [2024-11-17 02:57:49.627236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.390 [2024-11-17 02:57:49.627270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.390 qpair failed and we were unable to recover it.
00:37:41.390 [2024-11-17 02:57:49.627398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.390 [2024-11-17 02:57:49.627433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.390 qpair failed and we were unable to recover it.
00:37:41.390 [2024-11-17 02:57:49.627538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.390 [2024-11-17 02:57:49.627573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.390 qpair failed and we were unable to recover it.
00:37:41.390 [2024-11-17 02:57:49.627676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.390 [2024-11-17 02:57:49.627711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.390 qpair failed and we were unable to recover it.
00:37:41.390 [2024-11-17 02:57:49.627839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.390 [2024-11-17 02:57:49.627873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.390 qpair failed and we were unable to recover it.
00:37:41.390 [2024-11-17 02:57:49.628000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.390 [2024-11-17 02:57:49.628054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.390 qpair failed and we were unable to recover it.
00:37:41.390 [2024-11-17 02:57:49.628217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.390 [2024-11-17 02:57:49.628266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.390 qpair failed and we were unable to recover it.
00:37:41.390 [2024-11-17 02:57:49.628382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.390 [2024-11-17 02:57:49.628418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.390 qpair failed and we were unable to recover it.
00:37:41.390 [2024-11-17 02:57:49.628581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.390 [2024-11-17 02:57:49.628616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.390 qpair failed and we were unable to recover it.
00:37:41.390 [2024-11-17 02:57:49.628724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.390 [2024-11-17 02:57:49.628758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.390 qpair failed and we were unable to recover it.
00:37:41.390 [2024-11-17 02:57:49.628914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.390 [2024-11-17 02:57:49.628948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.390 qpair failed and we were unable to recover it.
00:37:41.390 [2024-11-17 02:57:49.629086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.390 [2024-11-17 02:57:49.629131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.390 qpair failed and we were unable to recover it.
00:37:41.390 [2024-11-17 02:57:49.629252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.390 [2024-11-17 02:57:49.629300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.390 qpair failed and we were unable to recover it.
00:37:41.390 [2024-11-17 02:57:49.629441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.390 [2024-11-17 02:57:49.629480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.390 qpair failed and we were unable to recover it.
00:37:41.390 [2024-11-17 02:57:49.629628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.390 [2024-11-17 02:57:49.629663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.390 qpair failed and we were unable to recover it.
00:37:41.390 [2024-11-17 02:57:49.629828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.390 [2024-11-17 02:57:49.629863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.390 qpair failed and we were unable to recover it.
00:37:41.390 [2024-11-17 02:57:49.629991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.390 [2024-11-17 02:57:49.630025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.390 qpair failed and we were unable to recover it.
00:37:41.390 [2024-11-17 02:57:49.630146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.390 [2024-11-17 02:57:49.630195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.390 qpair failed and we were unable to recover it.
00:37:41.390 [2024-11-17 02:57:49.630374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.390 [2024-11-17 02:57:49.630422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.390 qpair failed and we were unable to recover it.
00:37:41.390 [2024-11-17 02:57:49.630572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.390 [2024-11-17 02:57:49.630609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.390 qpair failed and we were unable to recover it.
00:37:41.390 [2024-11-17 02:57:49.630714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.391 [2024-11-17 02:57:49.630749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.391 qpair failed and we were unable to recover it.
00:37:41.391 [2024-11-17 02:57:49.630848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.391 [2024-11-17 02:57:49.630883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.391 qpair failed and we were unable to recover it.
00:37:41.391 [2024-11-17 02:57:49.631004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.391 [2024-11-17 02:57:49.631053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.391 qpair failed and we were unable to recover it.
00:37:41.391 [2024-11-17 02:57:49.631174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.391 [2024-11-17 02:57:49.631209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.391 qpair failed and we were unable to recover it.
00:37:41.391 [2024-11-17 02:57:49.631341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.391 [2024-11-17 02:57:49.631376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.391 qpair failed and we were unable to recover it.
00:37:41.391 [2024-11-17 02:57:49.631484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.391 [2024-11-17 02:57:49.631520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.391 qpair failed and we were unable to recover it.
00:37:41.391 [2024-11-17 02:57:49.631689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.391 [2024-11-17 02:57:49.631723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.391 qpair failed and we were unable to recover it.
00:37:41.391 [2024-11-17 02:57:49.631885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.391 [2024-11-17 02:57:49.631919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.391 qpair failed and we were unable to recover it.
00:37:41.391 [2024-11-17 02:57:49.632020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.391 [2024-11-17 02:57:49.632054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.391 qpair failed and we were unable to recover it.
00:37:41.391 [2024-11-17 02:57:49.632201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.391 [2024-11-17 02:57:49.632239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.391 qpair failed and we were unable to recover it.
00:37:41.391 [2024-11-17 02:57:49.632382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.391 [2024-11-17 02:57:49.632422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.391 qpair failed and we were unable to recover it.
00:37:41.391 [2024-11-17 02:57:49.632544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.391 [2024-11-17 02:57:49.632600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.391 qpair failed and we were unable to recover it.
00:37:41.391 [2024-11-17 02:57:49.632732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.391 [2024-11-17 02:57:49.632767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.391 qpair failed and we were unable to recover it.
00:37:41.391 [2024-11-17 02:57:49.632906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.391 [2024-11-17 02:57:49.632941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.391 qpair failed and we were unable to recover it.
00:37:41.391 [2024-11-17 02:57:49.633050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.391 [2024-11-17 02:57:49.633086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.391 qpair failed and we were unable to recover it.
00:37:41.391 [2024-11-17 02:57:49.633227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.391 [2024-11-17 02:57:49.633262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.391 qpair failed and we were unable to recover it.
00:37:41.391 [2024-11-17 02:57:49.633375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.391 [2024-11-17 02:57:49.633409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.391 qpair failed and we were unable to recover it.
00:37:41.391 [2024-11-17 02:57:49.633549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.391 [2024-11-17 02:57:49.633583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.391 qpair failed and we were unable to recover it.
00:37:41.391 [2024-11-17 02:57:49.633716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.391 [2024-11-17 02:57:49.633749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.391 qpair failed and we were unable to recover it.
00:37:41.391 [2024-11-17 02:57:49.633866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.391 [2024-11-17 02:57:49.633913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.391 qpair failed and we were unable to recover it.
00:37:41.391 [2024-11-17 02:57:49.634048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.391 [2024-11-17 02:57:49.634084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.391 qpair failed and we were unable to recover it.
00:37:41.391 [2024-11-17 02:57:49.634230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.391 [2024-11-17 02:57:49.634264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.391 qpair failed and we were unable to recover it.
00:37:41.391 [2024-11-17 02:57:49.634400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.391 [2024-11-17 02:57:49.634433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.391 qpair failed and we were unable to recover it.
00:37:41.391 [2024-11-17 02:57:49.634531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.391 [2024-11-17 02:57:49.634565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.391 qpair failed and we were unable to recover it.
00:37:41.391 [2024-11-17 02:57:49.634722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.391 [2024-11-17 02:57:49.634756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.391 qpair failed and we were unable to recover it.
00:37:41.391 [2024-11-17 02:57:49.634868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.391 [2024-11-17 02:57:49.634908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.391 qpair failed and we were unable to recover it.
00:37:41.391 [2024-11-17 02:57:49.635027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.391 [2024-11-17 02:57:49.635062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.391 qpair failed and we were unable to recover it.
00:37:41.391 [2024-11-17 02:57:49.635183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.391 [2024-11-17 02:57:49.635219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.391 qpair failed and we were unable to recover it.
00:37:41.391 [2024-11-17 02:57:49.635354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.391 [2024-11-17 02:57:49.635388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.392 qpair failed and we were unable to recover it.
00:37:41.392 [2024-11-17 02:57:49.635525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.392 [2024-11-17 02:57:49.635559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.392 qpair failed and we were unable to recover it.
00:37:41.392 [2024-11-17 02:57:49.635698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.392 [2024-11-17 02:57:49.635732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.392 qpair failed and we were unable to recover it.
00:37:41.392 [2024-11-17 02:57:49.635893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.392 [2024-11-17 02:57:49.635928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.392 qpair failed and we were unable to recover it.
00:37:41.392 [2024-11-17 02:57:49.636056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.392 [2024-11-17 02:57:49.636112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.392 qpair failed and we were unable to recover it.
00:37:41.392 [2024-11-17 02:57:49.636249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.392 [2024-11-17 02:57:49.636297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.392 qpair failed and we were unable to recover it.
00:37:41.392 [2024-11-17 02:57:49.636419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.392 [2024-11-17 02:57:49.636456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.392 qpair failed and we were unable to recover it.
00:37:41.392 [2024-11-17 02:57:49.636566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.392 [2024-11-17 02:57:49.636611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.392 qpair failed and we were unable to recover it.
00:37:41.392 [2024-11-17 02:57:49.636770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.392 [2024-11-17 02:57:49.636804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.392 qpair failed and we were unable to recover it.
00:37:41.392 [2024-11-17 02:57:49.636937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.392 [2024-11-17 02:57:49.636971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.392 qpair failed and we were unable to recover it.
00:37:41.392 [2024-11-17 02:57:49.637092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.392 [2024-11-17 02:57:49.637160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.392 qpair failed and we were unable to recover it.
00:37:41.392 [2024-11-17 02:57:49.637324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.392 [2024-11-17 02:57:49.637373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.392 qpair failed and we were unable to recover it.
00:37:41.392 [2024-11-17 02:57:49.637498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.392 [2024-11-17 02:57:49.637537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.392 qpair failed and we were unable to recover it. 00:37:41.392 [2024-11-17 02:57:49.637678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.392 [2024-11-17 02:57:49.637713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.392 qpair failed and we were unable to recover it. 00:37:41.392 [2024-11-17 02:57:49.637818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.392 [2024-11-17 02:57:49.637853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.392 qpair failed and we were unable to recover it. 00:37:41.392 [2024-11-17 02:57:49.637992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.392 [2024-11-17 02:57:49.638028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.392 qpair failed and we were unable to recover it. 00:37:41.392 [2024-11-17 02:57:49.638147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.392 [2024-11-17 02:57:49.638183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.392 qpair failed and we were unable to recover it. 
00:37:41.392 [2024-11-17 02:57:49.638294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.392 [2024-11-17 02:57:49.638329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.392 qpair failed and we were unable to recover it. 00:37:41.392 [2024-11-17 02:57:49.638430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.392 [2024-11-17 02:57:49.638464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.392 qpair failed and we were unable to recover it. 00:37:41.392 [2024-11-17 02:57:49.638565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.392 [2024-11-17 02:57:49.638600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.392 qpair failed and we were unable to recover it. 00:37:41.392 [2024-11-17 02:57:49.638706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.392 [2024-11-17 02:57:49.638741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.392 qpair failed and we were unable to recover it. 00:37:41.392 [2024-11-17 02:57:49.638839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.392 [2024-11-17 02:57:49.638873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.392 qpair failed and we were unable to recover it. 
00:37:41.392 [2024-11-17 02:57:49.639012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.392 [2024-11-17 02:57:49.639047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.392 qpair failed and we were unable to recover it. 00:37:41.392 [2024-11-17 02:57:49.639213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.392 [2024-11-17 02:57:49.639261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.392 qpair failed and we were unable to recover it. 00:37:41.392 [2024-11-17 02:57:49.639389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.392 [2024-11-17 02:57:49.639438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.392 qpair failed and we were unable to recover it. 00:37:41.392 [2024-11-17 02:57:49.639580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.392 [2024-11-17 02:57:49.639615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.392 qpair failed and we were unable to recover it. 00:37:41.392 [2024-11-17 02:57:49.639723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.392 [2024-11-17 02:57:49.639758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.392 qpair failed and we were unable to recover it. 
00:37:41.392 [2024-11-17 02:57:49.639914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.392 [2024-11-17 02:57:49.639948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.392 qpair failed and we were unable to recover it. 00:37:41.392 [2024-11-17 02:57:49.640076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.392 [2024-11-17 02:57:49.640120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.392 qpair failed and we were unable to recover it. 00:37:41.392 [2024-11-17 02:57:49.640221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.392 [2024-11-17 02:57:49.640255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.392 qpair failed and we were unable to recover it. 00:37:41.392 [2024-11-17 02:57:49.640365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.392 [2024-11-17 02:57:49.640402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.392 qpair failed and we were unable to recover it. 00:37:41.392 [2024-11-17 02:57:49.640515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.392 [2024-11-17 02:57:49.640549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.392 qpair failed and we were unable to recover it. 
00:37:41.392 [2024-11-17 02:57:49.640722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.392 [2024-11-17 02:57:49.640759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.392 qpair failed and we were unable to recover it. 00:37:41.392 [2024-11-17 02:57:49.640865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.392 [2024-11-17 02:57:49.640900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.392 qpair failed and we were unable to recover it. 00:37:41.393 [2024-11-17 02:57:49.641041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.393 [2024-11-17 02:57:49.641080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.393 qpair failed and we were unable to recover it. 00:37:41.393 [2024-11-17 02:57:49.641256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.393 [2024-11-17 02:57:49.641291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.393 qpair failed and we were unable to recover it. 00:37:41.393 [2024-11-17 02:57:49.641396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.393 [2024-11-17 02:57:49.641430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.393 qpair failed and we were unable to recover it. 
00:37:41.393 [2024-11-17 02:57:49.641561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.393 [2024-11-17 02:57:49.641600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.393 qpair failed and we were unable to recover it. 00:37:41.393 [2024-11-17 02:57:49.641742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.393 [2024-11-17 02:57:49.641775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.393 qpair failed and we were unable to recover it. 00:37:41.393 [2024-11-17 02:57:49.641904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.393 [2024-11-17 02:57:49.641940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.393 qpair failed and we were unable to recover it. 00:37:41.393 [2024-11-17 02:57:49.642112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.393 [2024-11-17 02:57:49.642147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.393 qpair failed and we were unable to recover it. 00:37:41.393 [2024-11-17 02:57:49.642266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.393 [2024-11-17 02:57:49.642315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.393 qpair failed and we were unable to recover it. 
00:37:41.393 [2024-11-17 02:57:49.642463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.393 [2024-11-17 02:57:49.642501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.393 qpair failed and we were unable to recover it. 00:37:41.393 [2024-11-17 02:57:49.642634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.393 [2024-11-17 02:57:49.642670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.393 qpair failed and we were unable to recover it. 00:37:41.393 [2024-11-17 02:57:49.642833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.393 [2024-11-17 02:57:49.642868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.393 qpair failed and we were unable to recover it. 00:37:41.393 [2024-11-17 02:57:49.642972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.393 [2024-11-17 02:57:49.643007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.393 qpair failed and we were unable to recover it. 00:37:41.393 [2024-11-17 02:57:49.643162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.393 [2024-11-17 02:57:49.643211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.393 qpair failed and we were unable to recover it. 
00:37:41.393 [2024-11-17 02:57:49.643355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.393 [2024-11-17 02:57:49.643392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.393 qpair failed and we were unable to recover it. 00:37:41.393 [2024-11-17 02:57:49.643499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.393 [2024-11-17 02:57:49.643534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.393 qpair failed and we were unable to recover it. 00:37:41.393 [2024-11-17 02:57:49.643659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.393 [2024-11-17 02:57:49.643693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.393 qpair failed and we were unable to recover it. 00:37:41.393 [2024-11-17 02:57:49.643857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.393 [2024-11-17 02:57:49.643892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.393 qpair failed and we were unable to recover it. 00:37:41.393 [2024-11-17 02:57:49.644037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.393 [2024-11-17 02:57:49.644072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.393 qpair failed and we were unable to recover it. 
00:37:41.393 [2024-11-17 02:57:49.644196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.393 [2024-11-17 02:57:49.644231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.393 qpair failed and we were unable to recover it. 00:37:41.393 [2024-11-17 02:57:49.644371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.393 [2024-11-17 02:57:49.644405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.393 qpair failed and we were unable to recover it. 00:37:41.393 [2024-11-17 02:57:49.644539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.393 [2024-11-17 02:57:49.644573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.393 qpair failed and we were unable to recover it. 00:37:41.393 [2024-11-17 02:57:49.644674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.393 [2024-11-17 02:57:49.644707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.393 qpair failed and we were unable to recover it. 00:37:41.393 [2024-11-17 02:57:49.644848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.393 [2024-11-17 02:57:49.644881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.393 qpair failed and we were unable to recover it. 
00:37:41.393 [2024-11-17 02:57:49.645026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.393 [2024-11-17 02:57:49.645062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.393 qpair failed and we were unable to recover it. 00:37:41.393 [2024-11-17 02:57:49.645217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.393 [2024-11-17 02:57:49.645253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.393 qpair failed and we were unable to recover it. 00:37:41.393 [2024-11-17 02:57:49.645393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.393 [2024-11-17 02:57:49.645442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.393 qpair failed and we were unable to recover it. 00:37:41.393 [2024-11-17 02:57:49.645560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.393 [2024-11-17 02:57:49.645597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.393 qpair failed and we were unable to recover it. 00:37:41.393 [2024-11-17 02:57:49.645735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.393 [2024-11-17 02:57:49.645771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.393 qpair failed and we were unable to recover it. 
00:37:41.393 [2024-11-17 02:57:49.645912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.393 [2024-11-17 02:57:49.645947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.394 qpair failed and we were unable to recover it. 00:37:41.394 [2024-11-17 02:57:49.646104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.394 [2024-11-17 02:57:49.646153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.394 qpair failed and we were unable to recover it. 00:37:41.394 [2024-11-17 02:57:49.646292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.394 [2024-11-17 02:57:49.646328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.394 qpair failed and we were unable to recover it. 00:37:41.394 [2024-11-17 02:57:49.646442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.394 [2024-11-17 02:57:49.646476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.394 qpair failed and we were unable to recover it. 00:37:41.394 [2024-11-17 02:57:49.646610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.394 [2024-11-17 02:57:49.646644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.394 qpair failed and we were unable to recover it. 
00:37:41.394 [2024-11-17 02:57:49.646802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.394 [2024-11-17 02:57:49.646839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.394 qpair failed and we were unable to recover it. 00:37:41.394 [2024-11-17 02:57:49.646949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.394 [2024-11-17 02:57:49.646985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.394 qpair failed and we were unable to recover it. 00:37:41.394 [2024-11-17 02:57:49.647126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.394 [2024-11-17 02:57:49.647165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.394 qpair failed and we were unable to recover it. 00:37:41.394 [2024-11-17 02:57:49.647290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.394 [2024-11-17 02:57:49.647328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.394 qpair failed and we were unable to recover it. 00:37:41.394 [2024-11-17 02:57:49.647492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.394 [2024-11-17 02:57:49.647541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.394 qpair failed and we were unable to recover it. 
00:37:41.394 [2024-11-17 02:57:49.647715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.394 [2024-11-17 02:57:49.647750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.394 qpair failed and we were unable to recover it. 00:37:41.394 [2024-11-17 02:57:49.647866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.394 [2024-11-17 02:57:49.647901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.394 qpair failed and we were unable to recover it. 00:37:41.394 [2024-11-17 02:57:49.648033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.394 [2024-11-17 02:57:49.648068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.394 qpair failed and we were unable to recover it. 00:37:41.394 [2024-11-17 02:57:49.648192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.394 [2024-11-17 02:57:49.648227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.394 qpair failed and we were unable to recover it. 00:37:41.394 [2024-11-17 02:57:49.648387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.394 [2024-11-17 02:57:49.648421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.394 qpair failed and we were unable to recover it. 
00:37:41.394 [2024-11-17 02:57:49.648556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.394 [2024-11-17 02:57:49.648595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.394 qpair failed and we were unable to recover it. 00:37:41.394 [2024-11-17 02:57:49.648712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.394 [2024-11-17 02:57:49.648745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.394 qpair failed and we were unable to recover it. 00:37:41.394 [2024-11-17 02:57:49.648882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.394 [2024-11-17 02:57:49.648916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.394 qpair failed and we were unable to recover it. 00:37:41.394 [2024-11-17 02:57:49.649043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.394 [2024-11-17 02:57:49.649077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.394 qpair failed and we were unable to recover it. 00:37:41.394 [2024-11-17 02:57:49.649197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.394 [2024-11-17 02:57:49.649231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.394 qpair failed and we were unable to recover it. 
00:37:41.394 [2024-11-17 02:57:49.652197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.394 [2024-11-17 02:57:49.652249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.394 qpair failed and we were unable to recover it. 00:37:41.394 [2024-11-17 02:57:49.652402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.394 [2024-11-17 02:57:49.652439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.394 qpair failed and we were unable to recover it. 00:37:41.394 [2024-11-17 02:57:49.652554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.394 [2024-11-17 02:57:49.652588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.394 qpair failed and we were unable to recover it. 00:37:41.394 [2024-11-17 02:57:49.652691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.394 [2024-11-17 02:57:49.652726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.394 qpair failed and we were unable to recover it. 00:37:41.394 [2024-11-17 02:57:49.652854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.394 [2024-11-17 02:57:49.652888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.394 qpair failed and we were unable to recover it. 
00:37:41.394 [2024-11-17 02:57:49.653020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.394 [2024-11-17 02:57:49.653054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.394 qpair failed and we were unable to recover it. 00:37:41.394 [2024-11-17 02:57:49.653163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.394 [2024-11-17 02:57:49.653198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.394 qpair failed and we were unable to recover it. 00:37:41.394 [2024-11-17 02:57:49.653316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.394 [2024-11-17 02:57:49.653350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.394 qpair failed and we were unable to recover it. 00:37:41.394 [2024-11-17 02:57:49.653488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.394 [2024-11-17 02:57:49.653521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.394 qpair failed and we were unable to recover it. 00:37:41.394 [2024-11-17 02:57:49.653678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.394 [2024-11-17 02:57:49.653711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.394 qpair failed and we were unable to recover it. 
00:37:41.394 [2024-11-17 02:57:49.653846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.394 [2024-11-17 02:57:49.653895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.394 qpair failed and we were unable to recover it. 00:37:41.394 [2024-11-17 02:57:49.654069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.394 [2024-11-17 02:57:49.654133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.394 qpair failed and we were unable to recover it. 00:37:41.395 [2024-11-17 02:57:49.654252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.395 [2024-11-17 02:57:49.654287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.395 qpair failed and we were unable to recover it. 00:37:41.395 [2024-11-17 02:57:49.654401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.395 [2024-11-17 02:57:49.654435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.395 qpair failed and we were unable to recover it. 00:37:41.395 [2024-11-17 02:57:49.654566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.395 [2024-11-17 02:57:49.654599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.395 qpair failed and we were unable to recover it. 
00:37:41.395 [2024-11-17 02:57:49.654728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.395 [2024-11-17 02:57:49.654762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.395 qpair failed and we were unable to recover it.
00:37:41.395 [2024-11-17 02:57:49.654894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.395 [2024-11-17 02:57:49.654927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.395 qpair failed and we were unable to recover it.
00:37:41.395 [2024-11-17 02:57:49.655080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.395 [2024-11-17 02:57:49.655140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.395 qpair failed and we were unable to recover it.
00:37:41.395 [2024-11-17 02:57:49.655255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.395 [2024-11-17 02:57:49.655292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.395 qpair failed and we were unable to recover it.
00:37:41.395 [2024-11-17 02:57:49.655431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.395 [2024-11-17 02:57:49.655467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.395 qpair failed and we were unable to recover it.
00:37:41.395 [2024-11-17 02:57:49.655603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.395 [2024-11-17 02:57:49.655639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.395 qpair failed and we were unable to recover it.
00:37:41.395 [2024-11-17 02:57:49.655748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.395 [2024-11-17 02:57:49.655783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.395 qpair failed and we were unable to recover it.
00:37:41.395 [2024-11-17 02:57:49.655918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.395 [2024-11-17 02:57:49.655954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.395 qpair failed and we were unable to recover it.
00:37:41.395 [2024-11-17 02:57:49.656063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.395 [2024-11-17 02:57:49.656107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.395 qpair failed and we were unable to recover it.
00:37:41.395 [2024-11-17 02:57:49.656244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.395 [2024-11-17 02:57:49.656293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.395 qpair failed and we were unable to recover it.
00:37:41.395 [2024-11-17 02:57:49.656415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.395 [2024-11-17 02:57:49.656450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.395 qpair failed and we were unable to recover it.
00:37:41.395 [2024-11-17 02:57:49.656595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.395 [2024-11-17 02:57:49.656629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.395 qpair failed and we were unable to recover it.
00:37:41.395 [2024-11-17 02:57:49.656788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.395 [2024-11-17 02:57:49.656821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.395 qpair failed and we were unable to recover it.
00:37:41.395 [2024-11-17 02:57:49.656923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.395 [2024-11-17 02:57:49.656957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.395 qpair failed and we were unable to recover it.
00:37:41.395 [2024-11-17 02:57:49.657064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.395 [2024-11-17 02:57:49.657109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.395 qpair failed and we were unable to recover it.
00:37:41.395 [2024-11-17 02:57:49.657241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.395 [2024-11-17 02:57:49.657289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.395 qpair failed and we were unable to recover it.
00:37:41.395 [2024-11-17 02:57:49.657462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.395 [2024-11-17 02:57:49.657508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.395 qpair failed and we were unable to recover it.
00:37:41.395 [2024-11-17 02:57:49.657624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.395 [2024-11-17 02:57:49.657661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.395 qpair failed and we were unable to recover it.
00:37:41.395 [2024-11-17 02:57:49.657823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.395 [2024-11-17 02:57:49.657857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.395 qpair failed and we were unable to recover it.
00:37:41.395 [2024-11-17 02:57:49.657964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.395 [2024-11-17 02:57:49.657999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.395 qpair failed and we were unable to recover it.
00:37:41.395 [2024-11-17 02:57:49.658136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.395 [2024-11-17 02:57:49.658182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.395 qpair failed and we were unable to recover it.
00:37:41.395 [2024-11-17 02:57:49.658313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.395 [2024-11-17 02:57:49.658347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.395 qpair failed and we were unable to recover it.
00:37:41.395 [2024-11-17 02:57:49.658456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.395 [2024-11-17 02:57:49.658491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.395 qpair failed and we were unable to recover it.
00:37:41.395 [2024-11-17 02:57:49.658630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.395 [2024-11-17 02:57:49.658665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.395 qpair failed and we were unable to recover it.
00:37:41.395 [2024-11-17 02:57:49.658822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.395 [2024-11-17 02:57:49.658856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.395 qpair failed and we were unable to recover it.
00:37:41.395 [2024-11-17 02:57:49.658968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.395 [2024-11-17 02:57:49.659002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.395 qpair failed and we were unable to recover it.
00:37:41.395 [2024-11-17 02:57:49.659165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.395 [2024-11-17 02:57:49.659200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.395 qpair failed and we were unable to recover it.
00:37:41.395 [2024-11-17 02:57:49.659309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.395 [2024-11-17 02:57:49.659345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.395 qpair failed and we were unable to recover it.
00:37:41.395 [2024-11-17 02:57:49.659453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.395 [2024-11-17 02:57:49.659488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.395 qpair failed and we were unable to recover it.
00:37:41.395 [2024-11-17 02:57:49.659630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.395 [2024-11-17 02:57:49.659664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.395 qpair failed and we were unable to recover it.
00:37:41.395 [2024-11-17 02:57:49.659776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.395 [2024-11-17 02:57:49.659809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.395 qpair failed and we were unable to recover it.
00:37:41.396 [2024-11-17 02:57:49.659918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.396 [2024-11-17 02:57:49.659954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.396 qpair failed and we were unable to recover it.
00:37:41.396 [2024-11-17 02:57:49.660075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.396 [2024-11-17 02:57:49.660132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.396 qpair failed and we were unable to recover it.
00:37:41.396 [2024-11-17 02:57:49.660248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.396 [2024-11-17 02:57:49.660284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.396 qpair failed and we were unable to recover it.
00:37:41.396 [2024-11-17 02:57:49.660446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.396 [2024-11-17 02:57:49.660494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.396 qpair failed and we were unable to recover it.
00:37:41.396 [2024-11-17 02:57:49.660634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.396 [2024-11-17 02:57:49.660670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.396 qpair failed and we were unable to recover it.
00:37:41.396 [2024-11-17 02:57:49.660775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.396 [2024-11-17 02:57:49.660809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.396 qpair failed and we were unable to recover it.
00:37:41.396 [2024-11-17 02:57:49.660943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.396 [2024-11-17 02:57:49.660976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.396 qpair failed and we were unable to recover it.
00:37:41.396 [2024-11-17 02:57:49.661088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.396 [2024-11-17 02:57:49.661129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.396 qpair failed and we were unable to recover it.
00:37:41.396 [2024-11-17 02:57:49.661233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.396 [2024-11-17 02:57:49.661268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.396 qpair failed and we were unable to recover it.
00:37:41.396 [2024-11-17 02:57:49.661406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.396 [2024-11-17 02:57:49.661441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.396 qpair failed and we were unable to recover it.
00:37:41.396 [2024-11-17 02:57:49.661582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.396 [2024-11-17 02:57:49.661616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.396 qpair failed and we were unable to recover it.
00:37:41.396 [2024-11-17 02:57:49.661722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.396 [2024-11-17 02:57:49.661758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.396 qpair failed and we were unable to recover it.
00:37:41.396 [2024-11-17 02:57:49.661916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.396 [2024-11-17 02:57:49.661951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.396 qpair failed and we were unable to recover it.
00:37:41.396 [2024-11-17 02:57:49.662071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.396 [2024-11-17 02:57:49.662137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.396 qpair failed and we were unable to recover it.
00:37:41.396 [2024-11-17 02:57:49.662293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.396 [2024-11-17 02:57:49.662341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.396 qpair failed and we were unable to recover it.
00:37:41.396 [2024-11-17 02:57:49.662522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.396 [2024-11-17 02:57:49.662557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.396 qpair failed and we were unable to recover it.
00:37:41.396 [2024-11-17 02:57:49.662699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.396 [2024-11-17 02:57:49.662733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.396 qpair failed and we were unable to recover it.
00:37:41.396 [2024-11-17 02:57:49.662867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.396 [2024-11-17 02:57:49.662902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.396 qpair failed and we were unable to recover it.
00:37:41.396 [2024-11-17 02:57:49.663038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.396 [2024-11-17 02:57:49.663073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.396 qpair failed and we were unable to recover it.
00:37:41.396 [2024-11-17 02:57:49.663234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.396 [2024-11-17 02:57:49.663283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.396 qpair failed and we were unable to recover it.
00:37:41.396 [2024-11-17 02:57:49.663443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.396 [2024-11-17 02:57:49.663491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.396 qpair failed and we were unable to recover it.
00:37:41.396 [2024-11-17 02:57:49.663632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.396 [2024-11-17 02:57:49.663667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.396 qpair failed and we were unable to recover it.
00:37:41.396 [2024-11-17 02:57:49.663770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.396 [2024-11-17 02:57:49.663804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.396 qpair failed and we were unable to recover it.
00:37:41.396 [2024-11-17 02:57:49.663939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.396 [2024-11-17 02:57:49.663973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.396 qpair failed and we were unable to recover it.
00:37:41.396 [2024-11-17 02:57:49.664125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.396 [2024-11-17 02:57:49.664175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.396 qpair failed and we were unable to recover it.
00:37:41.396 [2024-11-17 02:57:49.664293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.396 [2024-11-17 02:57:49.664328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.396 qpair failed and we were unable to recover it.
00:37:41.396 [2024-11-17 02:57:49.664457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.396 [2024-11-17 02:57:49.664492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.396 qpair failed and we were unable to recover it.
00:37:41.396 [2024-11-17 02:57:49.664628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.396 [2024-11-17 02:57:49.664662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.396 qpair failed and we were unable to recover it.
00:37:41.396 [2024-11-17 02:57:49.664796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.396 [2024-11-17 02:57:49.664831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.396 qpair failed and we were unable to recover it.
00:37:41.396 [2024-11-17 02:57:49.664967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.396 [2024-11-17 02:57:49.665006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.396 qpair failed and we were unable to recover it.
00:37:41.396 [2024-11-17 02:57:49.665142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.396 [2024-11-17 02:57:49.665178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.396 qpair failed and we were unable to recover it.
00:37:41.396 [2024-11-17 02:57:49.665297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.396 [2024-11-17 02:57:49.665334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.396 qpair failed and we were unable to recover it.
00:37:41.396 [2024-11-17 02:57:49.665479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.397 [2024-11-17 02:57:49.665513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.397 qpair failed and we were unable to recover it.
00:37:41.397 [2024-11-17 02:57:49.665673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.397 [2024-11-17 02:57:49.665706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.397 qpair failed and we were unable to recover it.
00:37:41.397 [2024-11-17 02:57:49.665803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.397 [2024-11-17 02:57:49.665837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.397 qpair failed and we were unable to recover it.
00:37:41.397 [2024-11-17 02:57:49.665971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.397 [2024-11-17 02:57:49.666004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.397 qpair failed and we were unable to recover it.
00:37:41.397 [2024-11-17 02:57:49.666131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.397 [2024-11-17 02:57:49.666165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.397 qpair failed and we were unable to recover it.
00:37:41.397 [2024-11-17 02:57:49.666272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.397 [2024-11-17 02:57:49.666306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.397 qpair failed and we were unable to recover it.
00:37:41.397 [2024-11-17 02:57:49.666437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.397 [2024-11-17 02:57:49.666486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.397 qpair failed and we were unable to recover it.
00:37:41.397 [2024-11-17 02:57:49.666621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.397 [2024-11-17 02:57:49.666658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.397 qpair failed and we were unable to recover it.
00:37:41.397 [2024-11-17 02:57:49.666825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.397 [2024-11-17 02:57:49.666872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.397 qpair failed and we were unable to recover it.
00:37:41.397 [2024-11-17 02:57:49.666979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.397 [2024-11-17 02:57:49.667015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.397 qpair failed and we were unable to recover it.
00:37:41.397 [2024-11-17 02:57:49.667180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.397 [2024-11-17 02:57:49.667229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.397 qpair failed and we were unable to recover it.
00:37:41.397 [2024-11-17 02:57:49.667372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.397 [2024-11-17 02:57:49.667420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.397 qpair failed and we were unable to recover it.
00:37:41.397 [2024-11-17 02:57:49.667538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.397 [2024-11-17 02:57:49.667574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.397 qpair failed and we were unable to recover it.
00:37:41.397 [2024-11-17 02:57:49.667708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.397 [2024-11-17 02:57:49.667741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.397 qpair failed and we were unable to recover it.
00:37:41.397 [2024-11-17 02:57:49.667876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.397 [2024-11-17 02:57:49.667910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.397 qpair failed and we were unable to recover it.
00:37:41.397 [2024-11-17 02:57:49.668042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.397 [2024-11-17 02:57:49.668076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.397 qpair failed and we were unable to recover it.
00:37:41.397 [2024-11-17 02:57:49.668198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.397 [2024-11-17 02:57:49.668232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.397 qpair failed and we were unable to recover it.
00:37:41.397 [2024-11-17 02:57:49.668370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.397 [2024-11-17 02:57:49.668408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.397 qpair failed and we were unable to recover it.
00:37:41.397 [2024-11-17 02:57:49.668550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.397 [2024-11-17 02:57:49.668588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.397 qpair failed and we were unable to recover it.
00:37:41.397 [2024-11-17 02:57:49.668718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.397 [2024-11-17 02:57:49.668767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.397 qpair failed and we were unable to recover it.
00:37:41.397 [2024-11-17 02:57:49.668906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.397 [2024-11-17 02:57:49.668941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.397 qpair failed and we were unable to recover it.
00:37:41.397 [2024-11-17 02:57:49.669054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.397 [2024-11-17 02:57:49.669092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.397 qpair failed and we were unable to recover it.
00:37:41.397 [2024-11-17 02:57:49.669215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.397 [2024-11-17 02:57:49.669250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.397 qpair failed and we were unable to recover it.
00:37:41.397 [2024-11-17 02:57:49.669362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.397 [2024-11-17 02:57:49.669396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.397 qpair failed and we were unable to recover it.
00:37:41.397 [2024-11-17 02:57:49.669506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.397 [2024-11-17 02:57:49.669540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.397 qpair failed and we were unable to recover it.
00:37:41.397 [2024-11-17 02:57:49.669680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.397 [2024-11-17 02:57:49.669716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.397 qpair failed and we were unable to recover it.
00:37:41.397 [2024-11-17 02:57:49.669830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.397 [2024-11-17 02:57:49.669869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.397 qpair failed and we were unable to recover it.
00:37:41.397 [2024-11-17 02:57:49.670014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.397 [2024-11-17 02:57:49.670048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.397 qpair failed and we were unable to recover it.
00:37:41.397 [2024-11-17 02:57:49.670166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.397 [2024-11-17 02:57:49.670200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.397 qpair failed and we were unable to recover it.
00:37:41.398 [2024-11-17 02:57:49.670308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.398 [2024-11-17 02:57:49.670340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.398 qpair failed and we were unable to recover it.
00:37:41.398 [2024-11-17 02:57:49.670520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.398 [2024-11-17 02:57:49.670555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.398 qpair failed and we were unable to recover it.
00:37:41.398 [2024-11-17 02:57:49.670666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.398 [2024-11-17 02:57:49.670712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.398 qpair failed and we were unable to recover it.
00:37:41.398 [2024-11-17 02:57:49.670828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.398 [2024-11-17 02:57:49.670865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.398 qpair failed and we were unable to recover it.
00:37:41.398 [2024-11-17 02:57:49.670999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.398 [2024-11-17 02:57:49.671033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.398 qpair failed and we were unable to recover it.
00:37:41.398 [2024-11-17 02:57:49.671176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.398 [2024-11-17 02:57:49.671211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.398 qpair failed and we were unable to recover it.
00:37:41.398 [2024-11-17 02:57:49.671322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.398 [2024-11-17 02:57:49.671357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.398 qpair failed and we were unable to recover it.
00:37:41.398 [2024-11-17 02:57:49.671459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.398 [2024-11-17 02:57:49.671493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.398 qpair failed and we were unable to recover it.
00:37:41.398 [2024-11-17 02:57:49.671596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.398 [2024-11-17 02:57:49.671636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.398 qpair failed and we were unable to recover it.
00:37:41.398 [2024-11-17 02:57:49.671798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.398 [2024-11-17 02:57:49.671832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.398 qpair failed and we were unable to recover it.
00:37:41.398 [2024-11-17 02:57:49.671965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.398 [2024-11-17 02:57:49.672013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.398 qpair failed and we were unable to recover it.
00:37:41.398 [2024-11-17 02:57:49.672166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.398 [2024-11-17 02:57:49.672214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.398 qpair failed and we were unable to recover it.
00:37:41.398 [2024-11-17 02:57:49.672351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.398 [2024-11-17 02:57:49.672388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.398 qpair failed and we were unable to recover it.
00:37:41.398 [2024-11-17 02:57:49.672520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.398 [2024-11-17 02:57:49.672553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.398 qpair failed and we were unable to recover it.
00:37:41.398 [2024-11-17 02:57:49.672664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.398 [2024-11-17 02:57:49.672696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.398 qpair failed and we were unable to recover it.
00:37:41.398 [2024-11-17 02:57:49.672830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.398 [2024-11-17 02:57:49.672864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.398 qpair failed and we were unable to recover it.
00:37:41.398 [2024-11-17 02:57:49.672995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.398 [2024-11-17 02:57:49.673031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.398 qpair failed and we were unable to recover it.
00:37:41.398 [2024-11-17 02:57:49.673145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.398 [2024-11-17 02:57:49.673182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.398 qpair failed and we were unable to recover it.
00:37:41.398 [2024-11-17 02:57:49.673290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.398 [2024-11-17 02:57:49.673324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.398 qpair failed and we were unable to recover it.
00:37:41.398 [2024-11-17 02:57:49.673451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.398 [2024-11-17 02:57:49.673485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.398 qpair failed and we were unable to recover it.
00:37:41.398 [2024-11-17 02:57:49.673622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.398 [2024-11-17 02:57:49.673656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.398 qpair failed and we were unable to recover it.
00:37:41.398 [2024-11-17 02:57:49.673786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.398 [2024-11-17 02:57:49.673821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.398 qpair failed and we were unable to recover it.
00:37:41.398 [2024-11-17 02:57:49.673989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.398 [2024-11-17 02:57:49.674024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.398 qpair failed and we were unable to recover it.
00:37:41.398 [2024-11-17 02:57:49.674222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.398 [2024-11-17 02:57:49.674271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.398 qpair failed and we were unable to recover it.
00:37:41.398 [2024-11-17 02:57:49.674402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.398 [2024-11-17 02:57:49.674450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.398 qpair failed and we were unable to recover it.
00:37:41.398 [2024-11-17 02:57:49.674590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.398 [2024-11-17 02:57:49.674626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.398 qpair failed and we were unable to recover it.
00:37:41.398 [2024-11-17 02:57:49.674756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.398 [2024-11-17 02:57:49.674790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.398 qpair failed and we were unable to recover it.
00:37:41.398 [2024-11-17 02:57:49.674884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.398 [2024-11-17 02:57:49.674917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.398 qpair failed and we were unable to recover it.
00:37:41.398 [2024-11-17 02:57:49.675027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.398 [2024-11-17 02:57:49.675062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.398 qpair failed and we were unable to recover it.
00:37:41.398 [2024-11-17 02:57:49.675225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.398 [2024-11-17 02:57:49.675274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.398 qpair failed and we were unable to recover it.
00:37:41.398 [2024-11-17 02:57:49.675420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.398 [2024-11-17 02:57:49.675459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.398 qpair failed and we were unable to recover it.
00:37:41.398 [2024-11-17 02:57:49.675595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.398 [2024-11-17 02:57:49.675630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.398 qpair failed and we were unable to recover it.
00:37:41.398 [2024-11-17 02:57:49.675768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.398 [2024-11-17 02:57:49.675803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.398 qpair failed and we were unable to recover it.
00:37:41.398 [2024-11-17 02:57:49.675965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.399 [2024-11-17 02:57:49.675999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.399 qpair failed and we were unable to recover it.
00:37:41.399 [2024-11-17 02:57:49.676155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.399 [2024-11-17 02:57:49.676204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.399 qpair failed and we were unable to recover it.
00:37:41.399 [2024-11-17 02:57:49.676344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.399 [2024-11-17 02:57:49.676380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.399 qpair failed and we were unable to recover it.
00:37:41.399 [2024-11-17 02:57:49.676508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.399 [2024-11-17 02:57:49.676542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.399 qpair failed and we were unable to recover it.
00:37:41.399 [2024-11-17 02:57:49.676648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.399 [2024-11-17 02:57:49.676683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.399 qpair failed and we were unable to recover it.
00:37:41.399 [2024-11-17 02:57:49.676801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.399 [2024-11-17 02:57:49.676836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.399 qpair failed and we were unable to recover it.
00:37:41.399 [2024-11-17 02:57:49.676944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.399 [2024-11-17 02:57:49.676978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.399 qpair failed and we were unable to recover it.
00:37:41.399 [2024-11-17 02:57:49.677086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.399 [2024-11-17 02:57:49.677129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.399 qpair failed and we were unable to recover it.
00:37:41.399 [2024-11-17 02:57:49.677233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.399 [2024-11-17 02:57:49.677269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.399 qpair failed and we were unable to recover it.
00:37:41.399 [2024-11-17 02:57:49.677391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.399 [2024-11-17 02:57:49.677437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.399 qpair failed and we were unable to recover it.
00:37:41.399 [2024-11-17 02:57:49.677553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.399 [2024-11-17 02:57:49.677590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.399 qpair failed and we were unable to recover it.
00:37:41.399 [2024-11-17 02:57:49.677699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.399 [2024-11-17 02:57:49.677733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.399 qpair failed and we were unable to recover it.
00:37:41.399 [2024-11-17 02:57:49.677861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.399 [2024-11-17 02:57:49.677895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.399 qpair failed and we were unable to recover it.
00:37:41.399 [2024-11-17 02:57:49.678031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.399 [2024-11-17 02:57:49.678066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.399 qpair failed and we were unable to recover it.
00:37:41.399 [2024-11-17 02:57:49.678193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.399 [2024-11-17 02:57:49.678241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.399 qpair failed and we were unable to recover it.
00:37:41.399 [2024-11-17 02:57:49.678361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.399 [2024-11-17 02:57:49.678403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.399 qpair failed and we were unable to recover it.
00:37:41.399 [2024-11-17 02:57:49.678564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.399 [2024-11-17 02:57:49.678599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.399 qpair failed and we were unable to recover it.
00:37:41.399 [2024-11-17 02:57:49.678701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.399 [2024-11-17 02:57:49.678736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.399 qpair failed and we were unable to recover it.
00:37:41.399 [2024-11-17 02:57:49.678867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.399 [2024-11-17 02:57:49.678902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.399 qpair failed and we were unable to recover it.
00:37:41.399 [2024-11-17 02:57:49.679055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.399 [2024-11-17 02:57:49.679112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.399 qpair failed and we were unable to recover it.
00:37:41.399 [2024-11-17 02:57:49.679262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.399 [2024-11-17 02:57:49.679297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.399 qpair failed and we were unable to recover it.
00:37:41.399 [2024-11-17 02:57:49.679484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.399 [2024-11-17 02:57:49.679532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.399 qpair failed and we were unable to recover it.
00:37:41.399 [2024-11-17 02:57:49.679680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.399 [2024-11-17 02:57:49.679716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.399 qpair failed and we were unable to recover it.
00:37:41.399 [2024-11-17 02:57:49.679827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.399 [2024-11-17 02:57:49.679861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.399 qpair failed and we were unable to recover it.
00:37:41.399 [2024-11-17 02:57:49.679990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.399 [2024-11-17 02:57:49.680024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.399 qpair failed and we were unable to recover it.
00:37:41.399 [2024-11-17 02:57:49.680125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.399 [2024-11-17 02:57:49.680161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.399 qpair failed and we were unable to recover it.
00:37:41.399 [2024-11-17 02:57:49.680295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.399 [2024-11-17 02:57:49.680328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.399 qpair failed and we were unable to recover it.
00:37:41.399 [2024-11-17 02:57:49.680454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.399 [2024-11-17 02:57:49.680488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.399 qpair failed and we were unable to recover it.
00:37:41.399 [2024-11-17 02:57:49.680599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.399 [2024-11-17 02:57:49.680638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.399 qpair failed and we were unable to recover it.
00:37:41.399 [2024-11-17 02:57:49.680773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.399 [2024-11-17 02:57:49.680807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.399 qpair failed and we were unable to recover it.
00:37:41.399 [2024-11-17 02:57:49.680912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.399 [2024-11-17 02:57:49.680945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.399 qpair failed and we were unable to recover it.
00:37:41.399 [2024-11-17 02:57:49.681049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.399 [2024-11-17 02:57:49.681086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.399 qpair failed and we were unable to recover it.
00:37:41.399 [2024-11-17 02:57:49.681252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.399 [2024-11-17 02:57:49.681301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.399 qpair failed and we were unable to recover it.
00:37:41.399 [2024-11-17 02:57:49.681430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.399 [2024-11-17 02:57:49.681478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.399 qpair failed and we were unable to recover it.
00:37:41.399 [2024-11-17 02:57:49.681632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.399 [2024-11-17 02:57:49.681667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.399 qpair failed and we were unable to recover it.
00:37:41.399 [2024-11-17 02:57:49.681797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.399 [2024-11-17 02:57:49.681830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.399 qpair failed and we were unable to recover it.
00:37:41.400 [2024-11-17 02:57:49.681965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.400 [2024-11-17 02:57:49.681999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.400 qpair failed and we were unable to recover it.
00:37:41.400 [2024-11-17 02:57:49.682115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.400 [2024-11-17 02:57:49.682148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.400 qpair failed and we were unable to recover it.
00:37:41.400 [2024-11-17 02:57:49.682254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.400 [2024-11-17 02:57:49.682288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.400 qpair failed and we were unable to recover it.
00:37:41.400 [2024-11-17 02:57:49.682430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.400 [2024-11-17 02:57:49.682464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.400 qpair failed and we were unable to recover it.
00:37:41.400 [2024-11-17 02:57:49.682569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.400 [2024-11-17 02:57:49.682603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.400 qpair failed and we were unable to recover it.
00:37:41.400 [2024-11-17 02:57:49.682773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.400 [2024-11-17 02:57:49.682807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.400 qpair failed and we were unable to recover it.
00:37:41.400 [2024-11-17 02:57:49.682912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.400 [2024-11-17 02:57:49.682946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.400 qpair failed and we were unable to recover it.
00:37:41.400 [2024-11-17 02:57:49.683059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.400 [2024-11-17 02:57:49.683106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.400 qpair failed and we were unable to recover it.
00:37:41.400 [2024-11-17 02:57:49.683249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.400 [2024-11-17 02:57:49.683284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.400 qpair failed and we were unable to recover it.
00:37:41.400 [2024-11-17 02:57:49.683396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.400 [2024-11-17 02:57:49.683431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.400 qpair failed and we were unable to recover it.
00:37:41.400 [2024-11-17 02:57:49.683572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.400 [2024-11-17 02:57:49.683606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.400 qpair failed and we were unable to recover it.
00:37:41.400 [2024-11-17 02:57:49.683706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.400 [2024-11-17 02:57:49.683739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.400 qpair failed and we were unable to recover it.
00:37:41.400 [2024-11-17 02:57:49.683879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.400 [2024-11-17 02:57:49.683913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.400 qpair failed and we were unable to recover it.
00:37:41.400 [2024-11-17 02:57:49.684071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.400 [2024-11-17 02:57:49.684115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.400 qpair failed and we were unable to recover it.
00:37:41.400 [2024-11-17 02:57:49.684261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.400 [2024-11-17 02:57:49.684295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.400 qpair failed and we were unable to recover it.
00:37:41.400 [2024-11-17 02:57:49.684413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.400 [2024-11-17 02:57:49.684460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.400 qpair failed and we were unable to recover it.
00:37:41.400 [2024-11-17 02:57:49.684632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.400 [2024-11-17 02:57:49.684669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.400 qpair failed and we were unable to recover it.
00:37:41.400 [2024-11-17 02:57:49.684833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.400 [2024-11-17 02:57:49.684867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.400 qpair failed and we were unable to recover it.
00:37:41.400 [2024-11-17 02:57:49.684995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.400 [2024-11-17 02:57:49.685029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.400 qpair failed and we were unable to recover it.
00:37:41.400 [2024-11-17 02:57:49.685146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.400 [2024-11-17 02:57:49.685181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.400 qpair failed and we were unable to recover it.
00:37:41.400 [2024-11-17 02:57:49.685323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.400 [2024-11-17 02:57:49.685357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.400 qpair failed and we were unable to recover it.
00:37:41.400 [2024-11-17 02:57:49.685514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.400 [2024-11-17 02:57:49.685550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.400 qpair failed and we were unable to recover it.
00:37:41.400 [2024-11-17 02:57:49.685712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.400 [2024-11-17 02:57:49.685745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.400 qpair failed and we were unable to recover it.
00:37:41.400 [2024-11-17 02:57:49.685877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.400 [2024-11-17 02:57:49.685911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.400 qpair failed and we were unable to recover it.
00:37:41.400 [2024-11-17 02:57:49.686011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.400 [2024-11-17 02:57:49.686044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.400 qpair failed and we were unable to recover it.
00:37:41.400 [2024-11-17 02:57:49.686197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.400 [2024-11-17 02:57:49.686230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.400 qpair failed and we were unable to recover it.
00:37:41.400 [2024-11-17 02:57:49.686386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.400 [2024-11-17 02:57:49.686429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.400 qpair failed and we were unable to recover it.
00:37:41.400 [2024-11-17 02:57:49.686533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.400 [2024-11-17 02:57:49.686566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.400 qpair failed and we were unable to recover it.
00:37:41.400 [2024-11-17 02:57:49.686677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.400 [2024-11-17 02:57:49.686711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.400 qpair failed and we were unable to recover it.
00:37:41.400 [2024-11-17 02:57:49.686821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.400 [2024-11-17 02:57:49.686855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.400 qpair failed and we were unable to recover it. 00:37:41.400 [2024-11-17 02:57:49.686982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.400 [2024-11-17 02:57:49.687015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.400 qpair failed and we were unable to recover it. 00:37:41.400 [2024-11-17 02:57:49.687122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.400 [2024-11-17 02:57:49.687156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.400 qpair failed and we were unable to recover it. 00:37:41.400 [2024-11-17 02:57:49.687288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.400 [2024-11-17 02:57:49.687321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.400 qpair failed and we were unable to recover it. 00:37:41.400 [2024-11-17 02:57:49.687436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.400 [2024-11-17 02:57:49.687468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.400 qpair failed and we were unable to recover it. 
00:37:41.400 [2024-11-17 02:57:49.687581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.401 [2024-11-17 02:57:49.687615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.401 qpair failed and we were unable to recover it. 00:37:41.401 [2024-11-17 02:57:49.687730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.401 [2024-11-17 02:57:49.687765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.401 qpair failed and we were unable to recover it. 00:37:41.401 [2024-11-17 02:57:49.687873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.401 [2024-11-17 02:57:49.687906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.401 qpair failed and we were unable to recover it. 00:37:41.401 [2024-11-17 02:57:49.688046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.401 [2024-11-17 02:57:49.688080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.401 qpair failed and we were unable to recover it. 00:37:41.401 [2024-11-17 02:57:49.688220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.401 [2024-11-17 02:57:49.688254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.401 qpair failed and we were unable to recover it. 
00:37:41.401 [2024-11-17 02:57:49.688383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.401 [2024-11-17 02:57:49.688416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.401 qpair failed and we were unable to recover it. 00:37:41.401 [2024-11-17 02:57:49.688558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.401 [2024-11-17 02:57:49.688592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.401 qpair failed and we were unable to recover it. 00:37:41.401 [2024-11-17 02:57:49.688728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.401 [2024-11-17 02:57:49.688762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.401 qpair failed and we were unable to recover it. 00:37:41.401 [2024-11-17 02:57:49.688916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.401 [2024-11-17 02:57:49.688949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.401 qpair failed and we were unable to recover it. 00:37:41.401 [2024-11-17 02:57:49.689108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.401 [2024-11-17 02:57:49.689158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.401 qpair failed and we were unable to recover it. 
00:37:41.401 [2024-11-17 02:57:49.689305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.401 [2024-11-17 02:57:49.689342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.401 qpair failed and we were unable to recover it. 00:37:41.401 [2024-11-17 02:57:49.689503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.401 [2024-11-17 02:57:49.689551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.401 qpair failed and we were unable to recover it. 00:37:41.401 [2024-11-17 02:57:49.689698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.401 [2024-11-17 02:57:49.689737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.401 qpair failed and we were unable to recover it. 00:37:41.401 [2024-11-17 02:57:49.689876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.401 [2024-11-17 02:57:49.689909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.401 qpair failed and we were unable to recover it. 00:37:41.401 [2024-11-17 02:57:49.690052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.401 [2024-11-17 02:57:49.690090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.401 qpair failed and we were unable to recover it. 
00:37:41.401 [2024-11-17 02:57:49.690261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.401 [2024-11-17 02:57:49.690294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.401 qpair failed and we were unable to recover it. 00:37:41.401 [2024-11-17 02:57:49.690431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.401 [2024-11-17 02:57:49.690463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.401 qpair failed and we were unable to recover it. 00:37:41.401 [2024-11-17 02:57:49.690570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.401 [2024-11-17 02:57:49.690603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.401 qpair failed and we were unable to recover it. 00:37:41.401 [2024-11-17 02:57:49.690736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.401 [2024-11-17 02:57:49.690770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.401 qpair failed and we were unable to recover it. 00:37:41.401 [2024-11-17 02:57:49.690882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.401 [2024-11-17 02:57:49.690915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.401 qpair failed and we were unable to recover it. 
00:37:41.401 [2024-11-17 02:57:49.691069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.401 [2024-11-17 02:57:49.691128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.401 qpair failed and we were unable to recover it. 00:37:41.401 [2024-11-17 02:57:49.691274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.401 [2024-11-17 02:57:49.691311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.401 qpair failed and we were unable to recover it. 00:37:41.401 [2024-11-17 02:57:49.691427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.401 [2024-11-17 02:57:49.691462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.401 qpair failed and we were unable to recover it. 00:37:41.401 [2024-11-17 02:57:49.691599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.401 [2024-11-17 02:57:49.691634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.401 qpair failed and we were unable to recover it. 00:37:41.401 [2024-11-17 02:57:49.691769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.401 [2024-11-17 02:57:49.691803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.401 qpair failed and we were unable to recover it. 
00:37:41.401 [2024-11-17 02:57:49.691903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.401 [2024-11-17 02:57:49.691937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.401 qpair failed and we were unable to recover it. 00:37:41.401 [2024-11-17 02:57:49.692077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.401 [2024-11-17 02:57:49.692121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.401 qpair failed and we were unable to recover it. 00:37:41.401 [2024-11-17 02:57:49.692252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.401 [2024-11-17 02:57:49.692286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.401 qpair failed and we were unable to recover it. 00:37:41.401 [2024-11-17 02:57:49.692392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.401 [2024-11-17 02:57:49.692425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.401 qpair failed and we were unable to recover it. 00:37:41.401 [2024-11-17 02:57:49.692560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.401 [2024-11-17 02:57:49.692594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.401 qpair failed and we were unable to recover it. 
00:37:41.401 [2024-11-17 02:57:49.692725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.401 [2024-11-17 02:57:49.692758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.401 qpair failed and we were unable to recover it. 00:37:41.401 [2024-11-17 02:57:49.692912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.401 [2024-11-17 02:57:49.692946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.401 qpair failed and we were unable to recover it. 00:37:41.401 [2024-11-17 02:57:49.693045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.401 [2024-11-17 02:57:49.693081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.401 qpair failed and we were unable to recover it. 00:37:41.402 [2024-11-17 02:57:49.693203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.402 [2024-11-17 02:57:49.693237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.402 qpair failed and we were unable to recover it. 00:37:41.402 [2024-11-17 02:57:49.693341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.402 [2024-11-17 02:57:49.693376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.402 qpair failed and we were unable to recover it. 
00:37:41.402 [2024-11-17 02:57:49.693483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.402 [2024-11-17 02:57:49.693518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.402 qpair failed and we were unable to recover it. 00:37:41.402 [2024-11-17 02:57:49.693703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.402 [2024-11-17 02:57:49.693752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.402 qpair failed and we were unable to recover it. 00:37:41.402 [2024-11-17 02:57:49.693861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.402 [2024-11-17 02:57:49.693897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.402 qpair failed and we were unable to recover it. 00:37:41.402 [2024-11-17 02:57:49.694039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.402 [2024-11-17 02:57:49.694073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.402 qpair failed and we were unable to recover it. 00:37:41.402 [2024-11-17 02:57:49.694236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.402 [2024-11-17 02:57:49.694270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.402 qpair failed and we were unable to recover it. 
00:37:41.402 [2024-11-17 02:57:49.694404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.402 [2024-11-17 02:57:49.694438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.402 qpair failed and we were unable to recover it. 00:37:41.402 [2024-11-17 02:57:49.694581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.402 [2024-11-17 02:57:49.694616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.402 qpair failed and we were unable to recover it. 00:37:41.402 [2024-11-17 02:57:49.694749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.402 [2024-11-17 02:57:49.694796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.402 qpair failed and we were unable to recover it. 00:37:41.402 [2024-11-17 02:57:49.694955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.402 [2024-11-17 02:57:49.694989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.402 qpair failed and we were unable to recover it. 00:37:41.402 [2024-11-17 02:57:49.695128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.402 [2024-11-17 02:57:49.695165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.402 qpair failed and we were unable to recover it. 
00:37:41.402 [2024-11-17 02:57:49.695316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.402 [2024-11-17 02:57:49.695364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.402 qpair failed and we were unable to recover it. 00:37:41.402 [2024-11-17 02:57:49.695516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.402 [2024-11-17 02:57:49.695551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.402 qpair failed and we were unable to recover it. 00:37:41.402 [2024-11-17 02:57:49.695711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.402 [2024-11-17 02:57:49.695745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.402 qpair failed and we were unable to recover it. 00:37:41.402 [2024-11-17 02:57:49.695870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.402 [2024-11-17 02:57:49.695905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.402 qpair failed and we were unable to recover it. 00:37:41.402 [2024-11-17 02:57:49.696005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.402 [2024-11-17 02:57:49.696039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.402 qpair failed and we were unable to recover it. 
00:37:41.402 [2024-11-17 02:57:49.696188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.402 [2024-11-17 02:57:49.696223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.402 qpair failed and we were unable to recover it. 00:37:41.402 [2024-11-17 02:57:49.696324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.402 [2024-11-17 02:57:49.696358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.402 qpair failed and we were unable to recover it. 00:37:41.402 [2024-11-17 02:57:49.696488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.402 [2024-11-17 02:57:49.696527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.402 qpair failed and we were unable to recover it. 00:37:41.402 [2024-11-17 02:57:49.696659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.402 [2024-11-17 02:57:49.696692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.402 qpair failed and we were unable to recover it. 00:37:41.402 [2024-11-17 02:57:49.696801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.402 [2024-11-17 02:57:49.696836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.402 qpair failed and we were unable to recover it. 
00:37:41.402 [2024-11-17 02:57:49.696977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.402 [2024-11-17 02:57:49.697010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.402 qpair failed and we were unable to recover it. 00:37:41.402 [2024-11-17 02:57:49.697119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.402 [2024-11-17 02:57:49.697153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.402 qpair failed and we were unable to recover it. 00:37:41.402 [2024-11-17 02:57:49.697289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.402 [2024-11-17 02:57:49.697324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.402 qpair failed and we were unable to recover it. 00:37:41.402 [2024-11-17 02:57:49.697457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.402 [2024-11-17 02:57:49.697490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.402 qpair failed and we were unable to recover it. 00:37:41.402 [2024-11-17 02:57:49.697621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.402 [2024-11-17 02:57:49.697654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.402 qpair failed and we were unable to recover it. 
00:37:41.402 [2024-11-17 02:57:49.697780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.402 [2024-11-17 02:57:49.697814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.402 qpair failed and we were unable to recover it. 00:37:41.402 [2024-11-17 02:57:49.697965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.402 [2024-11-17 02:57:49.697998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.402 qpair failed and we were unable to recover it. 00:37:41.402 [2024-11-17 02:57:49.698144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.402 [2024-11-17 02:57:49.698179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.402 qpair failed and we were unable to recover it. 00:37:41.402 [2024-11-17 02:57:49.698285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.402 [2024-11-17 02:57:49.698319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.402 qpair failed and we were unable to recover it. 00:37:41.402 [2024-11-17 02:57:49.698427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.402 [2024-11-17 02:57:49.698461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.402 qpair failed and we were unable to recover it. 
00:37:41.403 [2024-11-17 02:57:49.698560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.403 [2024-11-17 02:57:49.698593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.403 qpair failed and we were unable to recover it. 00:37:41.403 [2024-11-17 02:57:49.698698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.403 [2024-11-17 02:57:49.698731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.403 qpair failed and we were unable to recover it. 00:37:41.403 [2024-11-17 02:57:49.698904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.403 [2024-11-17 02:57:49.698953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.403 qpair failed and we were unable to recover it. 00:37:41.403 [2024-11-17 02:57:49.699088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.403 [2024-11-17 02:57:49.699143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.403 qpair failed and we were unable to recover it. 00:37:41.403 [2024-11-17 02:57:49.699269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.403 [2024-11-17 02:57:49.699317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.403 qpair failed and we were unable to recover it. 
00:37:41.403 [2024-11-17 02:57:49.699479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.403 [2024-11-17 02:57:49.699514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.403 qpair failed and we were unable to recover it. 00:37:41.403 [2024-11-17 02:57:49.699648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.403 [2024-11-17 02:57:49.699681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.403 qpair failed and we were unable to recover it. 00:37:41.403 [2024-11-17 02:57:49.699825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.403 [2024-11-17 02:57:49.699859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.403 qpair failed and we were unable to recover it. 00:37:41.403 [2024-11-17 02:57:49.699990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.403 [2024-11-17 02:57:49.700023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.403 qpair failed and we were unable to recover it. 00:37:41.403 [2024-11-17 02:57:49.700195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.403 [2024-11-17 02:57:49.700234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.403 qpair failed and we were unable to recover it. 
00:37:41.403 [2024-11-17 02:57:49.700344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.403 [2024-11-17 02:57:49.700383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.403 qpair failed and we were unable to recover it. 00:37:41.403 [2024-11-17 02:57:49.700527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.403 [2024-11-17 02:57:49.700562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.403 qpair failed and we were unable to recover it. 00:37:41.403 [2024-11-17 02:57:49.700668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.403 [2024-11-17 02:57:49.700702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.403 qpair failed and we were unable to recover it. 00:37:41.403 [2024-11-17 02:57:49.700837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.403 [2024-11-17 02:57:49.700874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.403 qpair failed and we were unable to recover it. 00:37:41.403 [2024-11-17 02:57:49.700998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.403 [2024-11-17 02:57:49.701037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.403 qpair failed and we were unable to recover it. 
00:37:41.403 [2024-11-17 02:57:49.701183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.403 [2024-11-17 02:57:49.701217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.403 qpair failed and we were unable to recover it. 00:37:41.403 [2024-11-17 02:57:49.701326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.403 [2024-11-17 02:57:49.701359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.403 qpair failed and we were unable to recover it. 00:37:41.403 [2024-11-17 02:57:49.701469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.403 [2024-11-17 02:57:49.701503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.403 qpair failed and we were unable to recover it. 00:37:41.403 [2024-11-17 02:57:49.701607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.403 [2024-11-17 02:57:49.701640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.403 qpair failed and we were unable to recover it. 00:37:41.403 [2024-11-17 02:57:49.701771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.403 [2024-11-17 02:57:49.701805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.403 qpair failed and we were unable to recover it. 
00:37:41.403 [2024-11-17 02:57:49.701947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.403 [2024-11-17 02:57:49.701984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.403 qpair failed and we were unable to recover it. 00:37:41.403 [2024-11-17 02:57:49.702140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.403 [2024-11-17 02:57:49.702189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.403 qpair failed and we were unable to recover it. 00:37:41.403 [2024-11-17 02:57:49.702340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.403 [2024-11-17 02:57:49.702375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.403 qpair failed and we were unable to recover it. 00:37:41.403 [2024-11-17 02:57:49.702480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.403 [2024-11-17 02:57:49.702513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.403 qpair failed and we were unable to recover it. 00:37:41.403 [2024-11-17 02:57:49.702645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.403 [2024-11-17 02:57:49.702678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.403 qpair failed and we were unable to recover it. 
00:37:41.403 [2024-11-17 02:57:49.702809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.403 [2024-11-17 02:57:49.702842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.403 qpair failed and we were unable to recover it. 00:37:41.403 [2024-11-17 02:57:49.702950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.403 [2024-11-17 02:57:49.702985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.403 qpair failed and we were unable to recover it. 00:37:41.403 [2024-11-17 02:57:49.703104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.403 [2024-11-17 02:57:49.703146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.403 qpair failed and we were unable to recover it. 00:37:41.403 [2024-11-17 02:57:49.703262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.403 [2024-11-17 02:57:49.703296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.403 qpair failed and we were unable to recover it. 00:37:41.403 [2024-11-17 02:57:49.703410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.403 [2024-11-17 02:57:49.703445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.403 qpair failed and we were unable to recover it. 
00:37:41.403 [2024-11-17 02:57:49.703584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.403 [2024-11-17 02:57:49.703618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.403 qpair failed and we were unable to recover it. 00:37:41.403 [2024-11-17 02:57:49.703755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.403 [2024-11-17 02:57:49.703793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.403 qpair failed and we were unable to recover it. 00:37:41.403 [2024-11-17 02:57:49.703894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.403 [2024-11-17 02:57:49.703929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.404 qpair failed and we were unable to recover it. 00:37:41.404 [2024-11-17 02:57:49.704063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.404 [2024-11-17 02:57:49.704105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.404 qpair failed and we were unable to recover it. 00:37:41.404 [2024-11-17 02:57:49.704217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.404 [2024-11-17 02:57:49.704250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.404 qpair failed and we were unable to recover it. 
00:37:41.404 [2024-11-17 02:57:49.704357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.404 [2024-11-17 02:57:49.704403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.404 qpair failed and we were unable to recover it. 00:37:41.404 [2024-11-17 02:57:49.704507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.404 [2024-11-17 02:57:49.704540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.404 qpair failed and we were unable to recover it. 00:37:41.404 [2024-11-17 02:57:49.704649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.404 [2024-11-17 02:57:49.704685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.404 qpair failed and we were unable to recover it. 00:37:41.404 [2024-11-17 02:57:49.704825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.404 [2024-11-17 02:57:49.704861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.404 qpair failed and we were unable to recover it. 00:37:41.404 [2024-11-17 02:57:49.705007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.404 [2024-11-17 02:57:49.705055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.404 qpair failed and we were unable to recover it. 
00:37:41.404 [2024-11-17 02:57:49.705202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.404 [2024-11-17 02:57:49.705237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.404 qpair failed and we were unable to recover it. 00:37:41.404 [2024-11-17 02:57:49.705377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.404 [2024-11-17 02:57:49.705410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.404 qpair failed and we were unable to recover it. 00:37:41.404 [2024-11-17 02:57:49.705519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.404 [2024-11-17 02:57:49.705553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.404 qpair failed and we were unable to recover it. 00:37:41.404 [2024-11-17 02:57:49.705691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.404 [2024-11-17 02:57:49.705724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.404 qpair failed and we were unable to recover it. 00:37:41.404 [2024-11-17 02:57:49.705829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.404 [2024-11-17 02:57:49.705865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.404 qpair failed and we were unable to recover it. 
00:37:41.404 [2024-11-17 02:57:49.706030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.404 [2024-11-17 02:57:49.706064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.404 qpair failed and we were unable to recover it. 00:37:41.404 [2024-11-17 02:57:49.706189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.404 [2024-11-17 02:57:49.706228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.404 qpair failed and we were unable to recover it. 00:37:41.404 [2024-11-17 02:57:49.706359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.404 [2024-11-17 02:57:49.706394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.404 qpair failed and we were unable to recover it. 00:37:41.404 [2024-11-17 02:57:49.706547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.404 [2024-11-17 02:57:49.706580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.404 qpair failed and we were unable to recover it. 00:37:41.404 [2024-11-17 02:57:49.706714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.404 [2024-11-17 02:57:49.706747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.404 qpair failed and we were unable to recover it. 
00:37:41.404 [2024-11-17 02:57:49.706882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.404 [2024-11-17 02:57:49.706915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.404 qpair failed and we were unable to recover it. 00:37:41.404 [2024-11-17 02:57:49.707041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.404 [2024-11-17 02:57:49.707089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.404 qpair failed and we were unable to recover it. 00:37:41.404 [2024-11-17 02:57:49.707254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.404 [2024-11-17 02:57:49.707303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.404 qpair failed and we were unable to recover it. 00:37:41.404 [2024-11-17 02:57:49.707458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.404 [2024-11-17 02:57:49.707506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.404 qpair failed and we were unable to recover it. 00:37:41.404 [2024-11-17 02:57:49.707618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.404 [2024-11-17 02:57:49.707655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.404 qpair failed and we were unable to recover it. 
00:37:41.404 [2024-11-17 02:57:49.707755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.404 [2024-11-17 02:57:49.707789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.404 qpair failed and we were unable to recover it. 00:37:41.404 [2024-11-17 02:57:49.707917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.404 [2024-11-17 02:57:49.707951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.404 qpair failed and we were unable to recover it. 00:37:41.404 [2024-11-17 02:57:49.708060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.404 [2024-11-17 02:57:49.708102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.404 qpair failed and we were unable to recover it. 00:37:41.404 [2024-11-17 02:57:49.708222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.404 [2024-11-17 02:57:49.708271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.404 qpair failed and we were unable to recover it. 00:37:41.404 [2024-11-17 02:57:49.708381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.404 [2024-11-17 02:57:49.708427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.404 qpair failed and we were unable to recover it. 
00:37:41.404 [2024-11-17 02:57:49.708563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.404 [2024-11-17 02:57:49.708598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.404 qpair failed and we were unable to recover it. 00:37:41.404 [2024-11-17 02:57:49.708727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.404 [2024-11-17 02:57:49.708762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.404 qpair failed and we were unable to recover it. 00:37:41.404 [2024-11-17 02:57:49.708908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.404 [2024-11-17 02:57:49.708941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.404 qpair failed and we were unable to recover it. 00:37:41.404 [2024-11-17 02:57:49.709094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.404 [2024-11-17 02:57:49.709140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.404 qpair failed and we were unable to recover it. 00:37:41.404 [2024-11-17 02:57:49.709247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.404 [2024-11-17 02:57:49.709280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.404 qpair failed and we were unable to recover it. 
00:37:41.404 [2024-11-17 02:57:49.709393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.404 [2024-11-17 02:57:49.709432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.404 qpair failed and we were unable to recover it. 00:37:41.404 [2024-11-17 02:57:49.709581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.404 [2024-11-17 02:57:49.709618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.404 qpair failed and we were unable to recover it. 00:37:41.404 [2024-11-17 02:57:49.709766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.404 [2024-11-17 02:57:49.709808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.404 qpair failed and we were unable to recover it. 00:37:41.405 [2024-11-17 02:57:49.709916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.405 [2024-11-17 02:57:49.709950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.405 qpair failed and we were unable to recover it. 00:37:41.405 [2024-11-17 02:57:49.710083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.405 [2024-11-17 02:57:49.710124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.405 qpair failed and we were unable to recover it. 
00:37:41.405 [2024-11-17 02:57:49.710233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.405 [2024-11-17 02:57:49.710266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.405 qpair failed and we were unable to recover it. 00:37:41.405 [2024-11-17 02:57:49.710365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.405 [2024-11-17 02:57:49.710398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.405 qpair failed and we were unable to recover it. 00:37:41.405 [2024-11-17 02:57:49.710534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.405 [2024-11-17 02:57:49.710568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.405 qpair failed and we were unable to recover it. 00:37:41.405 [2024-11-17 02:57:49.710699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.405 [2024-11-17 02:57:49.710732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.405 qpair failed and we were unable to recover it. 00:37:41.405 [2024-11-17 02:57:49.710847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.405 [2024-11-17 02:57:49.710884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.405 qpair failed and we were unable to recover it. 
00:37:41.405 [2024-11-17 02:57:49.711034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.405 [2024-11-17 02:57:49.711082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.405 qpair failed and we were unable to recover it. 00:37:41.405 [2024-11-17 02:57:49.711206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.405 [2024-11-17 02:57:49.711243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.405 qpair failed and we were unable to recover it. 00:37:41.405 [2024-11-17 02:57:49.711345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.405 [2024-11-17 02:57:49.711379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.405 [2024-11-17 02:57:49.711376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:41.405 qpair failed and we were unable to recover it. 00:37:41.405 [2024-11-17 02:57:49.711508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.405 [2024-11-17 02:57:49.711543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.405 qpair failed and we were unable to recover it. 00:37:41.405 [2024-11-17 02:57:49.711678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.405 [2024-11-17 02:57:49.711712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.405 qpair failed and we were unable to recover it. 
00:37:41.405 [2024-11-17 02:57:49.711819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.405 [2024-11-17 02:57:49.711859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.405 qpair failed and we were unable to recover it. 00:37:41.405 [2024-11-17 02:57:49.712002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.405 [2024-11-17 02:57:49.712037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.405 qpair failed and we were unable to recover it. 00:37:41.405 [2024-11-17 02:57:49.712169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.405 [2024-11-17 02:57:49.712204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.405 qpair failed and we were unable to recover it. 00:37:41.405 [2024-11-17 02:57:49.712305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.405 [2024-11-17 02:57:49.712340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.405 qpair failed and we were unable to recover it. 00:37:41.405 [2024-11-17 02:57:49.712493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.405 [2024-11-17 02:57:49.712527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.405 qpair failed and we were unable to recover it. 
00:37:41.405 [2024-11-17 02:57:49.712636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.405 [2024-11-17 02:57:49.712671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.405 qpair failed and we were unable to recover it. 00:37:41.405 [2024-11-17 02:57:49.712779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.405 [2024-11-17 02:57:49.712814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.405 qpair failed and we were unable to recover it. 00:37:41.405 [2024-11-17 02:57:49.712919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.405 [2024-11-17 02:57:49.712953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.405 qpair failed and we were unable to recover it. 00:37:41.405 [2024-11-17 02:57:49.713093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.405 [2024-11-17 02:57:49.713138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.405 qpair failed and we were unable to recover it. 00:37:41.405 [2024-11-17 02:57:49.713244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.405 [2024-11-17 02:57:49.713278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.405 qpair failed and we were unable to recover it. 
00:37:41.405 [2024-11-17 02:57:49.713423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.405 [2024-11-17 02:57:49.713458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.405 qpair failed and we were unable to recover it. 00:37:41.405 [2024-11-17 02:57:49.713600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.405 [2024-11-17 02:57:49.713634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.405 qpair failed and we were unable to recover it. 00:37:41.405 [2024-11-17 02:57:49.713770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.405 [2024-11-17 02:57:49.713805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.405 qpair failed and we were unable to recover it. 00:37:41.405 [2024-11-17 02:57:49.713935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.405 [2024-11-17 02:57:49.713969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.405 qpair failed and we were unable to recover it. 00:37:41.405 [2024-11-17 02:57:49.714089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.405 [2024-11-17 02:57:49.714131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.405 qpair failed and we were unable to recover it. 
00:37:41.405 [2024-11-17 02:57:49.714269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.405 [2024-11-17 02:57:49.714302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.405 qpair failed and we were unable to recover it. 00:37:41.405 [2024-11-17 02:57:49.714462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.405 [2024-11-17 02:57:49.714496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.405 qpair failed and we were unable to recover it. 00:37:41.405 [2024-11-17 02:57:49.714591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.405 [2024-11-17 02:57:49.714624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.405 qpair failed and we were unable to recover it. 00:37:41.405 [2024-11-17 02:57:49.714751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.405 [2024-11-17 02:57:49.714785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.405 qpair failed and we were unable to recover it. 00:37:41.405 [2024-11-17 02:57:49.714948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.405 [2024-11-17 02:57:49.714984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.405 qpair failed and we were unable to recover it. 
00:37:41.405 [2024-11-17 02:57:49.715090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.405 [2024-11-17 02:57:49.715132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.405 qpair failed and we were unable to recover it. 00:37:41.405 [2024-11-17 02:57:49.715275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.405 [2024-11-17 02:57:49.715309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.405 qpair failed and we were unable to recover it. 00:37:41.405 [2024-11-17 02:57:49.715447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.405 [2024-11-17 02:57:49.715482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.405 qpair failed and we were unable to recover it. 00:37:41.405 [2024-11-17 02:57:49.715585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.405 [2024-11-17 02:57:49.715619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.405 qpair failed and we were unable to recover it. 00:37:41.405 [2024-11-17 02:57:49.715755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.406 [2024-11-17 02:57:49.715791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.406 qpair failed and we were unable to recover it. 
00:37:41.406 [2024-11-17 02:57:49.715957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.406 [2024-11-17 02:57:49.715992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.406 qpair failed and we were unable to recover it. 00:37:41.406 [2024-11-17 02:57:49.716091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.406 [2024-11-17 02:57:49.716132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.406 qpair failed and we were unable to recover it. 00:37:41.406 [2024-11-17 02:57:49.716287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.406 [2024-11-17 02:57:49.716335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.406 qpair failed and we were unable to recover it. 00:37:41.406 [2024-11-17 02:57:49.716466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.406 [2024-11-17 02:57:49.716502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.406 qpair failed and we were unable to recover it. 00:37:41.406 [2024-11-17 02:57:49.716640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.406 [2024-11-17 02:57:49.716676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.406 qpair failed and we were unable to recover it. 
00:37:41.406 [2024-11-17 02:57:49.716790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.406 [2024-11-17 02:57:49.716823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.406 qpair failed and we were unable to recover it.
00:37:41.406 [2024-11-17 02:57:49.716965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.406 [2024-11-17 02:57:49.717000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.406 qpair failed and we were unable to recover it.
00:37:41.406 [2024-11-17 02:57:49.717114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.406 [2024-11-17 02:57:49.717150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.406 qpair failed and we were unable to recover it.
00:37:41.406 [2024-11-17 02:57:49.717312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.406 [2024-11-17 02:57:49.717347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.406 qpair failed and we were unable to recover it.
00:37:41.406 [2024-11-17 02:57:49.717486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.406 [2024-11-17 02:57:49.717520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.406 qpair failed and we were unable to recover it.
00:37:41.406 [2024-11-17 02:57:49.717650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.406 [2024-11-17 02:57:49.717684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.406 qpair failed and we were unable to recover it.
00:37:41.406 [2024-11-17 02:57:49.717814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.406 [2024-11-17 02:57:49.717849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.406 qpair failed and we were unable to recover it.
00:37:41.406 [2024-11-17 02:57:49.717971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.406 [2024-11-17 02:57:49.718020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.406 qpair failed and we were unable to recover it.
00:37:41.406 [2024-11-17 02:57:49.718141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.406 [2024-11-17 02:57:49.718177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.406 qpair failed and we were unable to recover it.
00:37:41.406 [2024-11-17 02:57:49.718331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.406 [2024-11-17 02:57:49.718383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.406 qpair failed and we were unable to recover it.
00:37:41.406 [2024-11-17 02:57:49.718525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.406 [2024-11-17 02:57:49.718567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.406 qpair failed and we were unable to recover it.
00:37:41.406 [2024-11-17 02:57:49.718731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.406 [2024-11-17 02:57:49.718765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.406 qpair failed and we were unable to recover it.
00:37:41.406 [2024-11-17 02:57:49.718904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.406 [2024-11-17 02:57:49.718938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.406 qpair failed and we were unable to recover it.
00:37:41.406 [2024-11-17 02:57:49.719089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.406 [2024-11-17 02:57:49.719132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.406 qpair failed and we were unable to recover it.
00:37:41.406 [2024-11-17 02:57:49.719258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.406 [2024-11-17 02:57:49.719306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.406 qpair failed and we were unable to recover it.
00:37:41.406 [2024-11-17 02:57:49.719452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.406 [2024-11-17 02:57:49.719488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.406 qpair failed and we were unable to recover it.
00:37:41.406 [2024-11-17 02:57:49.719631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.406 [2024-11-17 02:57:49.719667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.406 qpair failed and we were unable to recover it.
00:37:41.406 [2024-11-17 02:57:49.719798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.406 [2024-11-17 02:57:49.719833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.406 qpair failed and we were unable to recover it.
00:37:41.406 [2024-11-17 02:57:49.720003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.406 [2024-11-17 02:57:49.720039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.406 qpair failed and we were unable to recover it.
00:37:41.406 [2024-11-17 02:57:49.720195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.406 [2024-11-17 02:57:49.720231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.406 qpair failed and we were unable to recover it.
00:37:41.406 [2024-11-17 02:57:49.720369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.406 [2024-11-17 02:57:49.720410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.406 qpair failed and we were unable to recover it.
00:37:41.406 [2024-11-17 02:57:49.720553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.406 [2024-11-17 02:57:49.720587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.406 qpair failed and we were unable to recover it.
00:37:41.406 [2024-11-17 02:57:49.720716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.406 [2024-11-17 02:57:49.720750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.406 qpair failed and we were unable to recover it.
00:37:41.406 [2024-11-17 02:57:49.720853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.406 [2024-11-17 02:57:49.720886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.406 qpair failed and we were unable to recover it.
00:37:41.406 [2024-11-17 02:57:49.721039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.406 [2024-11-17 02:57:49.721088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.406 qpair failed and we were unable to recover it.
00:37:41.406 [2024-11-17 02:57:49.721209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.406 [2024-11-17 02:57:49.721244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.406 qpair failed and we were unable to recover it.
00:37:41.406 [2024-11-17 02:57:49.721370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.406 [2024-11-17 02:57:49.721418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.406 qpair failed and we were unable to recover it.
00:37:41.406 [2024-11-17 02:57:49.721562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.406 [2024-11-17 02:57:49.721597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.406 qpair failed and we were unable to recover it.
00:37:41.406 [2024-11-17 02:57:49.721697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.406 [2024-11-17 02:57:49.721731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.406 qpair failed and we were unable to recover it.
00:37:41.406 [2024-11-17 02:57:49.721896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.407 [2024-11-17 02:57:49.721930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.407 qpair failed and we were unable to recover it.
00:37:41.407 [2024-11-17 02:57:49.722107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.407 [2024-11-17 02:57:49.722142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.407 qpair failed and we were unable to recover it.
00:37:41.407 [2024-11-17 02:57:49.722285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.407 [2024-11-17 02:57:49.722323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.407 qpair failed and we were unable to recover it.
00:37:41.407 [2024-11-17 02:57:49.722436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.407 [2024-11-17 02:57:49.722470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.407 qpair failed and we were unable to recover it.
00:37:41.407 [2024-11-17 02:57:49.722609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.407 [2024-11-17 02:57:49.722645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.407 qpair failed and we were unable to recover it.
00:37:41.407 [2024-11-17 02:57:49.722749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.407 [2024-11-17 02:57:49.722783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.407 qpair failed and we were unable to recover it.
00:37:41.407 [2024-11-17 02:57:49.722904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.407 [2024-11-17 02:57:49.722953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.407 qpair failed and we were unable to recover it.
00:37:41.407 [2024-11-17 02:57:49.723111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.407 [2024-11-17 02:57:49.723146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.407 qpair failed and we were unable to recover it.
00:37:41.407 [2024-11-17 02:57:49.723284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.407 [2024-11-17 02:57:49.723320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.407 qpair failed and we were unable to recover it.
00:37:41.407 [2024-11-17 02:57:49.723464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.407 [2024-11-17 02:57:49.723499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.407 qpair failed and we were unable to recover it.
00:37:41.407 [2024-11-17 02:57:49.723617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.407 [2024-11-17 02:57:49.723668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.407 qpair failed and we were unable to recover it.
00:37:41.407 [2024-11-17 02:57:49.723869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.407 [2024-11-17 02:57:49.723904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.407 qpair failed and we were unable to recover it.
00:37:41.407 [2024-11-17 02:57:49.724005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.407 [2024-11-17 02:57:49.724072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.407 qpair failed and we were unable to recover it.
00:37:41.407 [2024-11-17 02:57:49.724232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.407 [2024-11-17 02:57:49.724271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.407 qpair failed and we were unable to recover it.
00:37:41.407 [2024-11-17 02:57:49.724417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.407 [2024-11-17 02:57:49.724452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.407 qpair failed and we were unable to recover it.
00:37:41.407 [2024-11-17 02:57:49.724588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.407 [2024-11-17 02:57:49.724622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.407 qpair failed and we were unable to recover it.
00:37:41.407 [2024-11-17 02:57:49.724720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.407 [2024-11-17 02:57:49.724755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.407 qpair failed and we were unable to recover it.
00:37:41.407 [2024-11-17 02:57:49.724856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.407 [2024-11-17 02:57:49.724891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.407 qpair failed and we were unable to recover it.
00:37:41.407 [2024-11-17 02:57:49.725067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.407 [2024-11-17 02:57:49.725126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.407 qpair failed and we were unable to recover it.
00:37:41.407 [2024-11-17 02:57:49.725238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.407 [2024-11-17 02:57:49.725274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.407 qpair failed and we were unable to recover it.
00:37:41.407 [2024-11-17 02:57:49.725410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.407 [2024-11-17 02:57:49.725445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.407 qpair failed and we were unable to recover it.
00:37:41.407 [2024-11-17 02:57:49.725588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.407 [2024-11-17 02:57:49.725629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.407 qpair failed and we were unable to recover it.
00:37:41.407 [2024-11-17 02:57:49.725730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.407 [2024-11-17 02:57:49.725764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.407 qpair failed and we were unable to recover it.
00:37:41.407 [2024-11-17 02:57:49.725885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.407 [2024-11-17 02:57:49.725933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.407 qpair failed and we were unable to recover it.
00:37:41.407 [2024-11-17 02:57:49.726084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.407 [2024-11-17 02:57:49.726130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.407 qpair failed and we were unable to recover it.
00:37:41.407 [2024-11-17 02:57:49.726261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.407 [2024-11-17 02:57:49.726296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.407 qpair failed and we were unable to recover it.
00:37:41.407 [2024-11-17 02:57:49.726437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.407 [2024-11-17 02:57:49.726471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.407 qpair failed and we were unable to recover it.
00:37:41.407 [2024-11-17 02:57:49.726632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.407 [2024-11-17 02:57:49.726666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.407 qpair failed and we were unable to recover it.
00:37:41.407 [2024-11-17 02:57:49.726801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.408 [2024-11-17 02:57:49.726834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.408 qpair failed and we were unable to recover it.
00:37:41.408 [2024-11-17 02:57:49.726934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.408 [2024-11-17 02:57:49.726969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.408 qpair failed and we were unable to recover it.
00:37:41.408 [2024-11-17 02:57:49.727121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.408 [2024-11-17 02:57:49.727160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.408 qpair failed and we were unable to recover it.
00:37:41.408 [2024-11-17 02:57:49.727309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.408 [2024-11-17 02:57:49.727357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.408 qpair failed and we were unable to recover it.
00:37:41.408 [2024-11-17 02:57:49.727513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.408 [2024-11-17 02:57:49.727550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.408 qpair failed and we were unable to recover it.
00:37:41.408 [2024-11-17 02:57:49.727708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.408 [2024-11-17 02:57:49.727742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.408 qpair failed and we were unable to recover it.
00:37:41.408 [2024-11-17 02:57:49.727877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.408 [2024-11-17 02:57:49.727911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.408 qpair failed and we were unable to recover it.
00:37:41.408 [2024-11-17 02:57:49.728027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.408 [2024-11-17 02:57:49.728062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.408 qpair failed and we were unable to recover it.
00:37:41.408 [2024-11-17 02:57:49.728224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.408 [2024-11-17 02:57:49.728260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.408 qpair failed and we were unable to recover it.
00:37:41.408 [2024-11-17 02:57:49.728409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.408 [2024-11-17 02:57:49.728457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.408 qpair failed and we were unable to recover it.
00:37:41.408 [2024-11-17 02:57:49.728607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.408 [2024-11-17 02:57:49.728643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.408 qpair failed and we were unable to recover it.
00:37:41.408 [2024-11-17 02:57:49.728775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.408 [2024-11-17 02:57:49.728810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.408 qpair failed and we were unable to recover it.
00:37:41.408 [2024-11-17 02:57:49.728944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.408 [2024-11-17 02:57:49.728978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.408 qpair failed and we were unable to recover it.
00:37:41.408 [2024-11-17 02:57:49.729124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.408 [2024-11-17 02:57:49.729159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.408 qpair failed and we were unable to recover it.
00:37:41.408 [2024-11-17 02:57:49.729291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.408 [2024-11-17 02:57:49.729324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.408 qpair failed and we were unable to recover it.
00:37:41.408 [2024-11-17 02:57:49.729455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.408 [2024-11-17 02:57:49.729489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.408 qpair failed and we were unable to recover it.
00:37:41.408 [2024-11-17 02:57:49.729598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.408 [2024-11-17 02:57:49.729632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.408 qpair failed and we were unable to recover it.
00:37:41.408 [2024-11-17 02:57:49.729765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.408 [2024-11-17 02:57:49.729799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.408 qpair failed and we were unable to recover it.
00:37:41.408 [2024-11-17 02:57:49.729953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.408 [2024-11-17 02:57:49.729989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.408 qpair failed and we were unable to recover it.
00:37:41.408 [2024-11-17 02:57:49.730140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.408 [2024-11-17 02:57:49.730187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.408 qpair failed and we were unable to recover it.
00:37:41.408 [2024-11-17 02:57:49.730298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.408 [2024-11-17 02:57:49.730333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.408 qpair failed and we were unable to recover it.
00:37:41.408 [2024-11-17 02:57:49.730475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.408 [2024-11-17 02:57:49.730509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.408 qpair failed and we were unable to recover it.
00:37:41.408 [2024-11-17 02:57:49.730629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.408 [2024-11-17 02:57:49.730663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.408 qpair failed and we were unable to recover it.
00:37:41.408 [2024-11-17 02:57:49.730825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.408 [2024-11-17 02:57:49.730860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.408 qpair failed and we were unable to recover it.
00:37:41.408 [2024-11-17 02:57:49.731003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.408 [2024-11-17 02:57:49.731037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.408 qpair failed and we were unable to recover it.
00:37:41.408 [2024-11-17 02:57:49.731149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.408 [2024-11-17 02:57:49.731183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.408 qpair failed and we were unable to recover it.
00:37:41.408 [2024-11-17 02:57:49.731279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.408 [2024-11-17 02:57:49.731313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.408 qpair failed and we were unable to recover it.
00:37:41.408 [2024-11-17 02:57:49.731416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.408 [2024-11-17 02:57:49.731450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.408 qpair failed and we were unable to recover it.
00:37:41.408 [2024-11-17 02:57:49.731581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.408 [2024-11-17 02:57:49.731615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.408 qpair failed and we were unable to recover it.
00:37:41.408 [2024-11-17 02:57:49.731748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.408 [2024-11-17 02:57:49.731782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.408 qpair failed and we were unable to recover it.
00:37:41.408 [2024-11-17 02:57:49.731926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.408 [2024-11-17 02:57:49.731974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.408 qpair failed and we were unable to recover it.
00:37:41.408 [2024-11-17 02:57:49.732107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.408 [2024-11-17 02:57:49.732156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.408 qpair failed and we were unable to recover it.
00:37:41.408 [2024-11-17 02:57:49.732306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.408 [2024-11-17 02:57:49.732344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.408 qpair failed and we were unable to recover it.
00:37:41.408 [2024-11-17 02:57:49.732463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.408 [2024-11-17 02:57:49.732506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.408 qpair failed and we were unable to recover it.
00:37:41.408 [2024-11-17 02:57:49.732640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.408 [2024-11-17 02:57:49.732674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.408 qpair failed and we were unable to recover it.
00:37:41.408 [2024-11-17 02:57:49.732801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.408 [2024-11-17 02:57:49.732836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.408 qpair failed and we were unable to recover it.
00:37:41.408 [2024-11-17 02:57:49.732960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.408 [2024-11-17 02:57:49.733009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.408 qpair failed and we were unable to recover it.
00:37:41.408 [2024-11-17 02:57:49.733171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.408 [2024-11-17 02:57:49.733210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.408 qpair failed and we were unable to recover it.
00:37:41.408 [2024-11-17 02:57:49.733326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.408 [2024-11-17 02:57:49.733361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.408 qpair failed and we were unable to recover it.
00:37:41.408 [2024-11-17 02:57:49.733498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.409 [2024-11-17 02:57:49.733533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.409 qpair failed and we were unable to recover it.
00:37:41.409 [2024-11-17 02:57:49.733661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.409 [2024-11-17 02:57:49.733695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.409 qpair failed and we were unable to recover it.
00:37:41.409 [2024-11-17 02:57:49.733814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.409 [2024-11-17 02:57:49.733851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.409 qpair failed and we were unable to recover it.
00:37:41.409 [2024-11-17 02:57:49.734001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.409 [2024-11-17 02:57:49.734037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.409 qpair failed and we were unable to recover it.
00:37:41.409 [2024-11-17 02:57:49.734154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.409 [2024-11-17 02:57:49.734188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.409 qpair failed and we were unable to recover it.
00:37:41.409 [2024-11-17 02:57:49.734292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.409 [2024-11-17 02:57:49.734326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.409 qpair failed and we were unable to recover it.
00:37:41.409 [2024-11-17 02:57:49.734461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.409 [2024-11-17 02:57:49.734494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.409 qpair failed and we were unable to recover it.
00:37:41.409 [2024-11-17 02:57:49.734593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.409 [2024-11-17 02:57:49.734627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.409 qpair failed and we were unable to recover it.
00:37:41.409 [2024-11-17 02:57:49.734734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.409 [2024-11-17 02:57:49.734770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.409 qpair failed and we were unable to recover it.
00:37:41.409 [2024-11-17 02:57:49.734904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.409 [2024-11-17 02:57:49.734940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.409 qpair failed and we were unable to recover it.
00:37:41.409 [2024-11-17 02:57:49.735089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.409 [2024-11-17 02:57:49.735144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.409 qpair failed and we were unable to recover it.
00:37:41.409 [2024-11-17 02:57:49.735263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.409 [2024-11-17 02:57:49.735298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.409 qpair failed and we were unable to recover it.
00:37:41.409 [2024-11-17 02:57:49.735436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.409 [2024-11-17 02:57:49.735469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.409 qpair failed and we were unable to recover it.
00:37:41.409 [2024-11-17 02:57:49.735603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.409 [2024-11-17 02:57:49.735637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.409 qpair failed and we were unable to recover it.
00:37:41.409 [2024-11-17 02:57:49.735771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.409 [2024-11-17 02:57:49.735805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.409 qpair failed and we were unable to recover it.
00:37:41.409 [2024-11-17 02:57:49.735945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.409 [2024-11-17 02:57:49.735981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.409 qpair failed and we were unable to recover it.
00:37:41.409 [2024-11-17 02:57:49.736139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.409 [2024-11-17 02:57:49.736189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.409 qpair failed and we were unable to recover it.
00:37:41.409 [2024-11-17 02:57:49.736306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.409 [2024-11-17 02:57:49.736343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.409 qpair failed and we were unable to recover it.
00:37:41.409 [2024-11-17 02:57:49.736504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.409 [2024-11-17 02:57:49.736539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.409 qpair failed and we were unable to recover it.
00:37:41.409 [2024-11-17 02:57:49.736664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.409 [2024-11-17 02:57:49.736697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.409 qpair failed and we were unable to recover it. 00:37:41.409 [2024-11-17 02:57:49.736835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.409 [2024-11-17 02:57:49.736869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.409 qpair failed and we were unable to recover it. 00:37:41.409 [2024-11-17 02:57:49.737008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.409 [2024-11-17 02:57:49.737043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.409 qpair failed and we were unable to recover it. 00:37:41.409 [2024-11-17 02:57:49.737204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.409 [2024-11-17 02:57:49.737240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.409 qpair failed and we were unable to recover it. 00:37:41.409 [2024-11-17 02:57:49.737428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.409 [2024-11-17 02:57:49.737476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.409 qpair failed and we were unable to recover it. 
00:37:41.409 [2024-11-17 02:57:49.737614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.409 [2024-11-17 02:57:49.737651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.409 qpair failed and we were unable to recover it. 00:37:41.409 [2024-11-17 02:57:49.737787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.409 [2024-11-17 02:57:49.737822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.409 qpair failed and we were unable to recover it. 00:37:41.409 [2024-11-17 02:57:49.737958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.409 [2024-11-17 02:57:49.737992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.409 qpair failed and we were unable to recover it. 00:37:41.409 [2024-11-17 02:57:49.738153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.409 [2024-11-17 02:57:49.738201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.409 qpair failed and we were unable to recover it. 00:37:41.409 [2024-11-17 02:57:49.738358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.409 [2024-11-17 02:57:49.738407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.409 qpair failed and we were unable to recover it. 
00:37:41.409 [2024-11-17 02:57:49.738523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.409 [2024-11-17 02:57:49.738558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.409 qpair failed and we were unable to recover it. 00:37:41.409 [2024-11-17 02:57:49.738677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.409 [2024-11-17 02:57:49.738712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.409 qpair failed and we were unable to recover it. 00:37:41.409 [2024-11-17 02:57:49.738822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.409 [2024-11-17 02:57:49.738857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.409 qpair failed and we were unable to recover it. 00:37:41.409 [2024-11-17 02:57:49.738961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.409 [2024-11-17 02:57:49.738994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.409 qpair failed and we were unable to recover it. 00:37:41.409 [2024-11-17 02:57:49.739106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.409 [2024-11-17 02:57:49.739143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.409 qpair failed and we were unable to recover it. 
00:37:41.409 [2024-11-17 02:57:49.739276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.409 [2024-11-17 02:57:49.739316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.409 qpair failed and we were unable to recover it. 00:37:41.409 [2024-11-17 02:57:49.739427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.409 [2024-11-17 02:57:49.739461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.409 qpair failed and we were unable to recover it. 00:37:41.409 [2024-11-17 02:57:49.739591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.409 [2024-11-17 02:57:49.739625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.409 qpair failed and we were unable to recover it. 00:37:41.409 [2024-11-17 02:57:49.739741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.409 [2024-11-17 02:57:49.739776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.409 qpair failed and we were unable to recover it. 00:37:41.409 [2024-11-17 02:57:49.739925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.409 [2024-11-17 02:57:49.739973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.409 qpair failed and we were unable to recover it. 
00:37:41.410 [2024-11-17 02:57:49.740118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.410 [2024-11-17 02:57:49.740155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.410 qpair failed and we were unable to recover it. 00:37:41.410 [2024-11-17 02:57:49.740309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.410 [2024-11-17 02:57:49.740360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.410 qpair failed and we were unable to recover it. 00:37:41.410 [2024-11-17 02:57:49.740497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.410 [2024-11-17 02:57:49.740543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.410 qpair failed and we were unable to recover it. 00:37:41.410 [2024-11-17 02:57:49.740676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.410 [2024-11-17 02:57:49.740711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.410 qpair failed and we were unable to recover it. 00:37:41.410 [2024-11-17 02:57:49.740823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.410 [2024-11-17 02:57:49.740857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.410 qpair failed and we were unable to recover it. 
00:37:41.410 [2024-11-17 02:57:49.740989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.410 [2024-11-17 02:57:49.741023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.410 qpair failed and we were unable to recover it. 00:37:41.410 [2024-11-17 02:57:49.741218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.410 [2024-11-17 02:57:49.741266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.410 qpair failed and we were unable to recover it. 00:37:41.410 [2024-11-17 02:57:49.741390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.410 [2024-11-17 02:57:49.741425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.410 qpair failed and we were unable to recover it. 00:37:41.410 [2024-11-17 02:57:49.741537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.410 [2024-11-17 02:57:49.741571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.410 qpair failed and we were unable to recover it. 00:37:41.410 [2024-11-17 02:57:49.741679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.410 [2024-11-17 02:57:49.741712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.410 qpair failed and we were unable to recover it. 
00:37:41.410 [2024-11-17 02:57:49.741848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.410 [2024-11-17 02:57:49.741881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.410 qpair failed and we were unable to recover it. 00:37:41.410 [2024-11-17 02:57:49.741983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.410 [2024-11-17 02:57:49.742017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.410 qpair failed and we were unable to recover it. 00:37:41.410 [2024-11-17 02:57:49.742134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.410 [2024-11-17 02:57:49.742172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.410 qpair failed and we were unable to recover it. 00:37:41.410 [2024-11-17 02:57:49.742307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.410 [2024-11-17 02:57:49.742345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.410 qpair failed and we were unable to recover it. 00:37:41.410 [2024-11-17 02:57:49.742460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.410 [2024-11-17 02:57:49.742496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.410 qpair failed and we were unable to recover it. 
00:37:41.410 [2024-11-17 02:57:49.742637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.410 [2024-11-17 02:57:49.742673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.410 qpair failed and we were unable to recover it. 00:37:41.410 [2024-11-17 02:57:49.742815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.410 [2024-11-17 02:57:49.742850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.410 qpair failed and we were unable to recover it. 00:37:41.410 [2024-11-17 02:57:49.743002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.410 [2024-11-17 02:57:49.743037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.410 qpair failed and we were unable to recover it. 00:37:41.410 [2024-11-17 02:57:49.743185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.410 [2024-11-17 02:57:49.743220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.410 qpair failed and we were unable to recover it. 00:37:41.410 [2024-11-17 02:57:49.743356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.410 [2024-11-17 02:57:49.743402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.410 qpair failed and we were unable to recover it. 
00:37:41.410 [2024-11-17 02:57:49.743541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.410 [2024-11-17 02:57:49.743577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.410 qpair failed and we were unable to recover it. 00:37:41.410 [2024-11-17 02:57:49.743679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.410 [2024-11-17 02:57:49.743715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.410 qpair failed and we were unable to recover it. 00:37:41.410 [2024-11-17 02:57:49.743855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.410 [2024-11-17 02:57:49.743891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.410 qpair failed and we were unable to recover it. 00:37:41.410 [2024-11-17 02:57:49.744026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.410 [2024-11-17 02:57:49.744061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.410 qpair failed and we were unable to recover it. 00:37:41.410 [2024-11-17 02:57:49.744209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.410 [2024-11-17 02:57:49.744245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.410 qpair failed and we were unable to recover it. 
00:37:41.410 [2024-11-17 02:57:49.744396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.410 [2024-11-17 02:57:49.744432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.410 qpair failed and we were unable to recover it. 00:37:41.410 [2024-11-17 02:57:49.744596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.410 [2024-11-17 02:57:49.744631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.410 qpair failed and we were unable to recover it. 00:37:41.410 [2024-11-17 02:57:49.744735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.410 [2024-11-17 02:57:49.744770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.410 qpair failed and we were unable to recover it. 00:37:41.410 [2024-11-17 02:57:49.744901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.410 [2024-11-17 02:57:49.744936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.410 qpair failed and we were unable to recover it. 00:37:41.410 [2024-11-17 02:57:49.745110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.410 [2024-11-17 02:57:49.745145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.410 qpair failed and we were unable to recover it. 
00:37:41.410 [2024-11-17 02:57:49.745282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.410 [2024-11-17 02:57:49.745318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.410 qpair failed and we were unable to recover it. 00:37:41.410 [2024-11-17 02:57:49.745435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.410 [2024-11-17 02:57:49.745469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.410 qpair failed and we were unable to recover it. 00:37:41.410 [2024-11-17 02:57:49.745601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.410 [2024-11-17 02:57:49.745635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.410 qpair failed and we were unable to recover it. 00:37:41.410 [2024-11-17 02:57:49.745740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.410 [2024-11-17 02:57:49.745774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.410 qpair failed and we were unable to recover it. 00:37:41.410 [2024-11-17 02:57:49.745915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.410 [2024-11-17 02:57:49.745952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.410 qpair failed and we were unable to recover it. 
00:37:41.410 [2024-11-17 02:57:49.746055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.410 [2024-11-17 02:57:49.746112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.410 qpair failed and we were unable to recover it. 00:37:41.410 [2024-11-17 02:57:49.746222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.410 [2024-11-17 02:57:49.746258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.410 qpair failed and we were unable to recover it. 00:37:41.410 [2024-11-17 02:57:49.746375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.410 [2024-11-17 02:57:49.746421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.410 qpair failed and we were unable to recover it. 00:37:41.410 [2024-11-17 02:57:49.746560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.410 [2024-11-17 02:57:49.746594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.410 qpair failed and we were unable to recover it. 00:37:41.410 [2024-11-17 02:57:49.746730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.410 [2024-11-17 02:57:49.746764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.410 qpair failed and we were unable to recover it. 
00:37:41.410 [2024-11-17 02:57:49.746868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.410 [2024-11-17 02:57:49.746904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.410 qpair failed and we were unable to recover it. 00:37:41.410 [2024-11-17 02:57:49.747052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.410 [2024-11-17 02:57:49.747110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.410 qpair failed and we were unable to recover it. 00:37:41.410 [2024-11-17 02:57:49.747273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.410 [2024-11-17 02:57:49.747322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.410 qpair failed and we were unable to recover it. 00:37:41.410 [2024-11-17 02:57:49.747476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.410 [2024-11-17 02:57:49.747511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.410 qpair failed and we were unable to recover it. 00:37:41.411 [2024-11-17 02:57:49.747614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.411 [2024-11-17 02:57:49.747648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.411 qpair failed and we were unable to recover it. 
00:37:41.411 [2024-11-17 02:57:49.747769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.411 [2024-11-17 02:57:49.747803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.411 qpair failed and we were unable to recover it. 00:37:41.411 [2024-11-17 02:57:49.747968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.411 [2024-11-17 02:57:49.748002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.411 qpair failed and we were unable to recover it. 00:37:41.411 [2024-11-17 02:57:49.748187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.411 [2024-11-17 02:57:49.748236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.411 qpair failed and we were unable to recover it. 00:37:41.411 [2024-11-17 02:57:49.748364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.411 [2024-11-17 02:57:49.748419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.411 qpair failed and we were unable to recover it. 00:37:41.411 [2024-11-17 02:57:49.748559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.411 [2024-11-17 02:57:49.748595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.411 qpair failed and we were unable to recover it. 
00:37:41.411 [2024-11-17 02:57:49.748724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.411 [2024-11-17 02:57:49.748758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.411 qpair failed and we were unable to recover it. 00:37:41.411 [2024-11-17 02:57:49.748897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.411 [2024-11-17 02:57:49.748931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.411 qpair failed and we were unable to recover it. 00:37:41.411 [2024-11-17 02:57:49.749037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.411 [2024-11-17 02:57:49.749071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.411 qpair failed and we were unable to recover it. 00:37:41.411 [2024-11-17 02:57:49.749249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.411 [2024-11-17 02:57:49.749285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.411 qpair failed and we were unable to recover it. 00:37:41.411 [2024-11-17 02:57:49.749433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.411 [2024-11-17 02:57:49.749489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.411 qpair failed and we were unable to recover it. 
00:37:41.411 [2024-11-17 02:57:49.749656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.411 [2024-11-17 02:57:49.749690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.411 qpair failed and we were unable to recover it. 00:37:41.411 [2024-11-17 02:57:49.749796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.411 [2024-11-17 02:57:49.749831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.411 qpair failed and we were unable to recover it. 00:37:41.411 [2024-11-17 02:57:49.749957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.411 [2024-11-17 02:57:49.750006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.411 qpair failed and we were unable to recover it. 00:37:41.411 [2024-11-17 02:57:49.750168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.411 [2024-11-17 02:57:49.750205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.411 qpair failed and we were unable to recover it. 00:37:41.411 [2024-11-17 02:57:49.750323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.411 [2024-11-17 02:57:49.750372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.411 qpair failed and we were unable to recover it. 
00:37:41.411 [2024-11-17 02:57:49.750493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.411 [2024-11-17 02:57:49.750529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.411 qpair failed and we were unable to recover it. 00:37:41.411 [2024-11-17 02:57:49.750661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.411 [2024-11-17 02:57:49.750695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.411 qpair failed and we were unable to recover it. 00:37:41.411 [2024-11-17 02:57:49.750817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.411 [2024-11-17 02:57:49.750865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.411 qpair failed and we were unable to recover it. 00:37:41.411 [2024-11-17 02:57:49.751018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.411 [2024-11-17 02:57:49.751067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.411 qpair failed and we were unable to recover it. 00:37:41.411 [2024-11-17 02:57:49.751228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.411 [2024-11-17 02:57:49.751277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.411 qpair failed and we were unable to recover it. 
00:37:41.411 [2024-11-17 02:57:49.751430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.411 [2024-11-17 02:57:49.751474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.411 qpair failed and we were unable to recover it.
00:37:41.411 [2024-11-17 02:57:49.751581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.411 [2024-11-17 02:57:49.751616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.411 qpair failed and we were unable to recover it.
00:37:41.411 [2024-11-17 02:57:49.751755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.411 [2024-11-17 02:57:49.751789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.411 qpair failed and we were unable to recover it.
00:37:41.411 [2024-11-17 02:57:49.751904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.411 [2024-11-17 02:57:49.751937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.411 qpair failed and we were unable to recover it.
00:37:41.411 [2024-11-17 02:57:49.752066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.411 [2024-11-17 02:57:49.752117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.411 qpair failed and we were unable to recover it.
00:37:41.411 [2024-11-17 02:57:49.752248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.411 [2024-11-17 02:57:49.752281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.411 qpair failed and we were unable to recover it.
00:37:41.411 [2024-11-17 02:57:49.752449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.411 [2024-11-17 02:57:49.752495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.411 qpair failed and we were unable to recover it.
00:37:41.411 [2024-11-17 02:57:49.752600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.411 [2024-11-17 02:57:49.752635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.411 qpair failed and we were unable to recover it.
00:37:41.411 [2024-11-17 02:57:49.752815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.411 [2024-11-17 02:57:49.752864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.411 qpair failed and we were unable to recover it.
00:37:41.411 [2024-11-17 02:57:49.753010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.411 [2024-11-17 02:57:49.753044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.411 qpair failed and we were unable to recover it.
00:37:41.411 [2024-11-17 02:57:49.753221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.411 [2024-11-17 02:57:49.753276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.411 qpair failed and we were unable to recover it.
00:37:41.411 [2024-11-17 02:57:49.753393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.411 [2024-11-17 02:57:49.753429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.411 qpair failed and we were unable to recover it.
00:37:41.411 [2024-11-17 02:57:49.753542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.411 [2024-11-17 02:57:49.753577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.411 qpair failed and we were unable to recover it.
00:37:41.411 [2024-11-17 02:57:49.753713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.411 [2024-11-17 02:57:49.753748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.411 qpair failed and we were unable to recover it.
00:37:41.411 [2024-11-17 02:57:49.753851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.412 [2024-11-17 02:57:49.753886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.412 qpair failed and we were unable to recover it.
00:37:41.412 [2024-11-17 02:57:49.754030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.412 [2024-11-17 02:57:49.754069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.412 qpair failed and we were unable to recover it.
00:37:41.412 [2024-11-17 02:57:49.754221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.412 [2024-11-17 02:57:49.754256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.412 qpair failed and we were unable to recover it.
00:37:41.412 [2024-11-17 02:57:49.754375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.412 [2024-11-17 02:57:49.754429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.412 qpair failed and we were unable to recover it.
00:37:41.412 [2024-11-17 02:57:49.754572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.412 [2024-11-17 02:57:49.754608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.412 qpair failed and we were unable to recover it.
00:37:41.412 [2024-11-17 02:57:49.754741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.412 [2024-11-17 02:57:49.754776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.412 qpair failed and we were unable to recover it.
00:37:41.412 [2024-11-17 02:57:49.754881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.412 [2024-11-17 02:57:49.754917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.412 qpair failed and we were unable to recover it.
00:37:41.412 [2024-11-17 02:57:49.755075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.412 [2024-11-17 02:57:49.755134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.412 qpair failed and we were unable to recover it.
00:37:41.412 [2024-11-17 02:57:49.755265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.412 [2024-11-17 02:57:49.755299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.412 qpair failed and we were unable to recover it.
00:37:41.412 [2024-11-17 02:57:49.755414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.412 [2024-11-17 02:57:49.755449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.412 qpair failed and we were unable to recover it.
00:37:41.412 [2024-11-17 02:57:49.755598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.412 [2024-11-17 02:57:49.755631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.412 qpair failed and we were unable to recover it.
00:37:41.412 [2024-11-17 02:57:49.755728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.412 [2024-11-17 02:57:49.755761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.412 qpair failed and we were unable to recover it.
00:37:41.412 [2024-11-17 02:57:49.755933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.412 [2024-11-17 02:57:49.755967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.412 qpair failed and we were unable to recover it.
00:37:41.412 [2024-11-17 02:57:49.756091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.412 [2024-11-17 02:57:49.756147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.412 qpair failed and we were unable to recover it.
00:37:41.412 [2024-11-17 02:57:49.756266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.412 [2024-11-17 02:57:49.756304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.412 qpair failed and we were unable to recover it.
00:37:41.412 [2024-11-17 02:57:49.756482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.412 [2024-11-17 02:57:49.756518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.412 qpair failed and we were unable to recover it.
00:37:41.412 [2024-11-17 02:57:49.756684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.412 [2024-11-17 02:57:49.756718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.412 qpair failed and we were unable to recover it.
00:37:41.412 [2024-11-17 02:57:49.756847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.412 [2024-11-17 02:57:49.756881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.412 qpair failed and we were unable to recover it.
00:37:41.412 [2024-11-17 02:57:49.756990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.412 [2024-11-17 02:57:49.757026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.412 qpair failed and we were unable to recover it.
00:37:41.412 [2024-11-17 02:57:49.757185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.412 [2024-11-17 02:57:49.757222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.412 qpair failed and we were unable to recover it.
00:37:41.412 [2024-11-17 02:57:49.757343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.412 [2024-11-17 02:57:49.757392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.412 qpair failed and we were unable to recover it.
00:37:41.412 [2024-11-17 02:57:49.757537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.412 [2024-11-17 02:57:49.757573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.412 qpair failed and we were unable to recover it.
00:37:41.412 [2024-11-17 02:57:49.757711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.412 [2024-11-17 02:57:49.757745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.412 qpair failed and we were unable to recover it.
00:37:41.412 [2024-11-17 02:57:49.757857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.412 [2024-11-17 02:57:49.757891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.412 qpair failed and we were unable to recover it.
00:37:41.412 [2024-11-17 02:57:49.758024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.412 [2024-11-17 02:57:49.758057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.412 qpair failed and we were unable to recover it.
00:37:41.412 [2024-11-17 02:57:49.758222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.412 [2024-11-17 02:57:49.758270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.412 qpair failed and we were unable to recover it.
00:37:41.412 [2024-11-17 02:57:49.758390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.412 [2024-11-17 02:57:49.758428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.412 qpair failed and we were unable to recover it.
00:37:41.412 [2024-11-17 02:57:49.758570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.412 [2024-11-17 02:57:49.758608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.412 qpair failed and we were unable to recover it.
00:37:41.412 [2024-11-17 02:57:49.758741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.413 [2024-11-17 02:57:49.758775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.413 qpair failed and we were unable to recover it.
00:37:41.413 [2024-11-17 02:57:49.758885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.413 [2024-11-17 02:57:49.758921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.413 qpair failed and we were unable to recover it.
00:37:41.413 [2024-11-17 02:57:49.759031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.413 [2024-11-17 02:57:49.759066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.413 qpair failed and we were unable to recover it.
00:37:41.413 [2024-11-17 02:57:49.759191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.413 [2024-11-17 02:57:49.759240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.413 qpair failed and we were unable to recover it.
00:37:41.413 [2024-11-17 02:57:49.759343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.413 [2024-11-17 02:57:49.759379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.413 qpair failed and we were unable to recover it.
00:37:41.413 [2024-11-17 02:57:49.759551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.413 [2024-11-17 02:57:49.759585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.413 qpair failed and we were unable to recover it.
00:37:41.413 [2024-11-17 02:57:49.759738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.413 [2024-11-17 02:57:49.759772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.413 qpair failed and we were unable to recover it.
00:37:41.413 [2024-11-17 02:57:49.759876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.413 [2024-11-17 02:57:49.759912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.413 qpair failed and we were unable to recover it.
00:37:41.413 [2024-11-17 02:57:49.760020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.413 [2024-11-17 02:57:49.760057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.413 qpair failed and we were unable to recover it.
00:37:41.413 [2024-11-17 02:57:49.760203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.413 [2024-11-17 02:57:49.760253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.413 qpair failed and we were unable to recover it.
00:37:41.413 [2024-11-17 02:57:49.760404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.413 [2024-11-17 02:57:49.760441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.413 qpair failed and we were unable to recover it.
00:37:41.413 [2024-11-17 02:57:49.760559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.413 [2024-11-17 02:57:49.760595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.413 qpair failed and we were unable to recover it.
00:37:41.413 [2024-11-17 02:57:49.760704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.413 [2024-11-17 02:57:49.760740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.413 qpair failed and we were unable to recover it.
00:37:41.413 [2024-11-17 02:57:49.760846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.413 [2024-11-17 02:57:49.760881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.413 qpair failed and we were unable to recover it.
00:37:41.413 [2024-11-17 02:57:49.760987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.413 [2024-11-17 02:57:49.761023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.413 qpair failed and we were unable to recover it.
00:37:41.413 [2024-11-17 02:57:49.761182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.413 [2024-11-17 02:57:49.761218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.413 qpair failed and we were unable to recover it.
00:37:41.413 [2024-11-17 02:57:49.761345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.413 [2024-11-17 02:57:49.761379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.413 qpair failed and we were unable to recover it.
00:37:41.413 [2024-11-17 02:57:49.761500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.413 [2024-11-17 02:57:49.761534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.413 qpair failed and we were unable to recover it.
00:37:41.413 [2024-11-17 02:57:49.761694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.413 [2024-11-17 02:57:49.761728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.413 qpair failed and we were unable to recover it.
00:37:41.413 [2024-11-17 02:57:49.761829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.413 [2024-11-17 02:57:49.761865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.413 qpair failed and we were unable to recover it.
00:37:41.413 [2024-11-17 02:57:49.761979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.413 [2024-11-17 02:57:49.762026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.413 qpair failed and we were unable to recover it.
00:37:41.413 [2024-11-17 02:57:49.762181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.413 [2024-11-17 02:57:49.762217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.413 qpair failed and we were unable to recover it.
00:37:41.413 [2024-11-17 02:57:49.762329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.413 [2024-11-17 02:57:49.762365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.413 qpair failed and we were unable to recover it.
00:37:41.413 [2024-11-17 02:57:49.762506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.413 [2024-11-17 02:57:49.762540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.413 qpair failed and we were unable to recover it.
00:37:41.413 [2024-11-17 02:57:49.762663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.413 [2024-11-17 02:57:49.762698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.413 qpair failed and we were unable to recover it.
00:37:41.413 [2024-11-17 02:57:49.762807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.413 [2024-11-17 02:57:49.762841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.413 qpair failed and we were unable to recover it.
00:37:41.413 [2024-11-17 02:57:49.762951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.413 [2024-11-17 02:57:49.762986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.413 qpair failed and we were unable to recover it.
00:37:41.413 [2024-11-17 02:57:49.763102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.413 [2024-11-17 02:57:49.763142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.413 qpair failed and we were unable to recover it.
00:37:41.413 [2024-11-17 02:57:49.763310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.413 [2024-11-17 02:57:49.763346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.413 qpair failed and we were unable to recover it.
00:37:41.413 [2024-11-17 02:57:49.763468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.413 [2024-11-17 02:57:49.763517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.413 qpair failed and we were unable to recover it.
00:37:41.413 [2024-11-17 02:57:49.763688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.413 [2024-11-17 02:57:49.763723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.413 qpair failed and we were unable to recover it.
00:37:41.413 [2024-11-17 02:57:49.763823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.413 [2024-11-17 02:57:49.763858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.413 qpair failed and we were unable to recover it.
00:37:41.413 [2024-11-17 02:57:49.763965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.413 [2024-11-17 02:57:49.764000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.413 qpair failed and we were unable to recover it.
00:37:41.413 [2024-11-17 02:57:49.764141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.413 [2024-11-17 02:57:49.764175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.413 qpair failed and we were unable to recover it.
00:37:41.413 [2024-11-17 02:57:49.764285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.413 [2024-11-17 02:57:49.764319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.413 qpair failed and we were unable to recover it.
00:37:41.413 [2024-11-17 02:57:49.764460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.413 [2024-11-17 02:57:49.764498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.413 qpair failed and we were unable to recover it.
00:37:41.413 [2024-11-17 02:57:49.764629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.413 [2024-11-17 02:57:49.764663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.413 qpair failed and we were unable to recover it.
00:37:41.413 [2024-11-17 02:57:49.764812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.413 [2024-11-17 02:57:49.764860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.413 qpair failed and we were unable to recover it.
00:37:41.413 [2024-11-17 02:57:49.765003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.413 [2024-11-17 02:57:49.765038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.413 qpair failed and we were unable to recover it.
00:37:41.413 [2024-11-17 02:57:49.765197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.413 [2024-11-17 02:57:49.765236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.413 qpair failed and we were unable to recover it.
00:37:41.413 [2024-11-17 02:57:49.765368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.413 [2024-11-17 02:57:49.765415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.413 qpair failed and we were unable to recover it.
00:37:41.413 [2024-11-17 02:57:49.765580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.413 [2024-11-17 02:57:49.765615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.413 qpair failed and we were unable to recover it.
00:37:41.413 [2024-11-17 02:57:49.765779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.413 [2024-11-17 02:57:49.765813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.414 qpair failed and we were unable to recover it.
00:37:41.414 [2024-11-17 02:57:49.765919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.414 [2024-11-17 02:57:49.765954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.414 qpair failed and we were unable to recover it.
00:37:41.414 [2024-11-17 02:57:49.766117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.414 [2024-11-17 02:57:49.766165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.414 qpair failed and we were unable to recover it.
00:37:41.414 [2024-11-17 02:57:49.766281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.414 [2024-11-17 02:57:49.766317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.414 qpair failed and we were unable to recover it.
00:37:41.414 [2024-11-17 02:57:49.766459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.414 [2024-11-17 02:57:49.766494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.414 qpair failed and we were unable to recover it.
00:37:41.414 [2024-11-17 02:57:49.766597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.414 [2024-11-17 02:57:49.766630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.414 qpair failed and we were unable to recover it.
00:37:41.414 [2024-11-17 02:57:49.766725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.414 [2024-11-17 02:57:49.766758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.414 qpair failed and we were unable to recover it.
00:37:41.414 [2024-11-17 02:57:49.766867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.414 [2024-11-17 02:57:49.766903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.414 qpair failed and we were unable to recover it.
00:37:41.414 [2024-11-17 02:57:49.767005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.414 [2024-11-17 02:57:49.767040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.414 qpair failed and we were unable to recover it.
00:37:41.414 [2024-11-17 02:57:49.767182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.414 [2024-11-17 02:57:49.767231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.414 qpair failed and we were unable to recover it.
00:37:41.414 [2024-11-17 02:57:49.767385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.414 [2024-11-17 02:57:49.767421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.414 qpair failed and we were unable to recover it.
00:37:41.414 [2024-11-17 02:57:49.767552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.414 [2024-11-17 02:57:49.767586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.414 qpair failed and we were unable to recover it. 00:37:41.414 [2024-11-17 02:57:49.767715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.414 [2024-11-17 02:57:49.767749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.414 qpair failed and we were unable to recover it. 00:37:41.414 [2024-11-17 02:57:49.767859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.414 [2024-11-17 02:57:49.767893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.414 qpair failed and we were unable to recover it. 00:37:41.414 [2024-11-17 02:57:49.768072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.414 [2024-11-17 02:57:49.768140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.414 qpair failed and we were unable to recover it. 00:37:41.414 [2024-11-17 02:57:49.768259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.414 [2024-11-17 02:57:49.768295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.414 qpair failed and we were unable to recover it. 
00:37:41.414 [2024-11-17 02:57:49.768422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.414 [2024-11-17 02:57:49.768457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.414 qpair failed and we were unable to recover it. 00:37:41.414 [2024-11-17 02:57:49.768559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.414 [2024-11-17 02:57:49.768593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.414 qpair failed and we were unable to recover it. 00:37:41.414 [2024-11-17 02:57:49.768689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.414 [2024-11-17 02:57:49.768722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.414 qpair failed and we were unable to recover it. 00:37:41.414 [2024-11-17 02:57:49.768891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.414 [2024-11-17 02:57:49.768939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.414 qpair failed and we were unable to recover it. 00:37:41.414 [2024-11-17 02:57:49.769114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.414 [2024-11-17 02:57:49.769151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.414 qpair failed and we were unable to recover it. 
00:37:41.414 [2024-11-17 02:57:49.769277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.414 [2024-11-17 02:57:49.769317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.414 qpair failed and we were unable to recover it. 00:37:41.414 [2024-11-17 02:57:49.769450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.414 [2024-11-17 02:57:49.769516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.414 qpair failed and we were unable to recover it. 00:37:41.414 [2024-11-17 02:57:49.769679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.414 [2024-11-17 02:57:49.769715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.414 qpair failed and we were unable to recover it. 00:37:41.414 [2024-11-17 02:57:49.769851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.414 [2024-11-17 02:57:49.769885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.414 qpair failed and we were unable to recover it. 00:37:41.414 [2024-11-17 02:57:49.769999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.414 [2024-11-17 02:57:49.770034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.414 qpair failed and we were unable to recover it. 
00:37:41.414 [2024-11-17 02:57:49.770203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.414 [2024-11-17 02:57:49.770251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.414 qpair failed and we were unable to recover it. 00:37:41.414 [2024-11-17 02:57:49.770358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.414 [2024-11-17 02:57:49.770403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.414 qpair failed and we were unable to recover it. 00:37:41.414 [2024-11-17 02:57:49.770517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.414 [2024-11-17 02:57:49.770551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.414 qpair failed and we were unable to recover it. 00:37:41.414 [2024-11-17 02:57:49.770668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.414 [2024-11-17 02:57:49.770717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.414 qpair failed and we were unable to recover it. 00:37:41.414 [2024-11-17 02:57:49.770891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.414 [2024-11-17 02:57:49.770925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.414 qpair failed and we were unable to recover it. 
00:37:41.414 [2024-11-17 02:57:49.771040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.414 [2024-11-17 02:57:49.771084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.414 qpair failed and we were unable to recover it. 00:37:41.414 [2024-11-17 02:57:49.771206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.414 [2024-11-17 02:57:49.771242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.414 qpair failed and we were unable to recover it. 00:37:41.414 [2024-11-17 02:57:49.771345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.414 [2024-11-17 02:57:49.771387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.414 qpair failed and we were unable to recover it. 00:37:41.414 [2024-11-17 02:57:49.771499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.414 [2024-11-17 02:57:49.771548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.414 qpair failed and we were unable to recover it. 00:37:41.414 [2024-11-17 02:57:49.771689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.414 [2024-11-17 02:57:49.771725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.414 qpair failed and we were unable to recover it. 
00:37:41.414 [2024-11-17 02:57:49.771920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.414 [2024-11-17 02:57:49.771956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.414 qpair failed and we were unable to recover it. 00:37:41.414 [2024-11-17 02:57:49.772069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.414 [2024-11-17 02:57:49.772118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.414 qpair failed and we were unable to recover it. 00:37:41.414 [2024-11-17 02:57:49.772264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.414 [2024-11-17 02:57:49.772313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.414 qpair failed and we were unable to recover it. 00:37:41.414 [2024-11-17 02:57:49.772466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.414 [2024-11-17 02:57:49.772514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.414 qpair failed and we were unable to recover it. 00:37:41.414 [2024-11-17 02:57:49.772624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.414 [2024-11-17 02:57:49.772659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.414 qpair failed and we were unable to recover it. 
00:37:41.414 [2024-11-17 02:57:49.772790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.414 [2024-11-17 02:57:49.772824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.415 qpair failed and we were unable to recover it. 00:37:41.415 [2024-11-17 02:57:49.772955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.415 [2024-11-17 02:57:49.772989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.415 qpair failed and we were unable to recover it. 00:37:41.415 [2024-11-17 02:57:49.773125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.415 [2024-11-17 02:57:49.773162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.415 qpair failed and we were unable to recover it. 00:37:41.415 [2024-11-17 02:57:49.773296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.415 [2024-11-17 02:57:49.773332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.415 qpair failed and we were unable to recover it. 00:37:41.415 [2024-11-17 02:57:49.773513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.415 [2024-11-17 02:57:49.773560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.415 qpair failed and we were unable to recover it. 
00:37:41.415 [2024-11-17 02:57:49.773713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.415 [2024-11-17 02:57:49.773748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.415 qpair failed and we were unable to recover it. 00:37:41.415 [2024-11-17 02:57:49.773893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.415 [2024-11-17 02:57:49.773929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.415 qpair failed and we were unable to recover it. 00:37:41.415 [2024-11-17 02:57:49.774066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.415 [2024-11-17 02:57:49.774117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.415 qpair failed and we were unable to recover it. 00:37:41.415 [2024-11-17 02:57:49.774255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.415 [2024-11-17 02:57:49.774289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.415 qpair failed and we were unable to recover it. 00:37:41.415 [2024-11-17 02:57:49.774419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.415 [2024-11-17 02:57:49.774452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.415 qpair failed and we were unable to recover it. 
00:37:41.415 [2024-11-17 02:57:49.774593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.415 [2024-11-17 02:57:49.774627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.415 qpair failed and we were unable to recover it. 00:37:41.415 [2024-11-17 02:57:49.774785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.415 [2024-11-17 02:57:49.774818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.415 qpair failed and we were unable to recover it. 00:37:41.415 [2024-11-17 02:57:49.774954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.415 [2024-11-17 02:57:49.774989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.415 qpair failed and we were unable to recover it. 00:37:41.415 [2024-11-17 02:57:49.775127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.415 [2024-11-17 02:57:49.775162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.415 qpair failed and we were unable to recover it. 00:37:41.415 [2024-11-17 02:57:49.775273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.415 [2024-11-17 02:57:49.775306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.415 qpair failed and we were unable to recover it. 
00:37:41.415 [2024-11-17 02:57:49.775470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.415 [2024-11-17 02:57:49.775505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.415 qpair failed and we were unable to recover it. 00:37:41.415 [2024-11-17 02:57:49.775609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.415 [2024-11-17 02:57:49.775642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.415 qpair failed and we were unable to recover it. 00:37:41.415 [2024-11-17 02:57:49.775756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.415 [2024-11-17 02:57:49.775789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.415 qpair failed and we were unable to recover it. 00:37:41.415 [2024-11-17 02:57:49.775928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.415 [2024-11-17 02:57:49.775963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.415 qpair failed and we were unable to recover it. 00:37:41.415 [2024-11-17 02:57:49.776081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.415 [2024-11-17 02:57:49.776141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.415 qpair failed and we were unable to recover it. 
00:37:41.415 [2024-11-17 02:57:49.776264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.415 [2024-11-17 02:57:49.776301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.415 qpair failed and we were unable to recover it. 00:37:41.415 [2024-11-17 02:57:49.776442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.415 [2024-11-17 02:57:49.776476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.415 qpair failed and we were unable to recover it. 00:37:41.415 [2024-11-17 02:57:49.776572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.415 [2024-11-17 02:57:49.776605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.415 qpair failed and we were unable to recover it. 00:37:41.415 [2024-11-17 02:57:49.776707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.415 [2024-11-17 02:57:49.776741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.415 qpair failed and we were unable to recover it. 00:37:41.415 [2024-11-17 02:57:49.776846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.415 [2024-11-17 02:57:49.776880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.415 qpair failed and we were unable to recover it. 
00:37:41.415 [2024-11-17 02:57:49.776977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.415 [2024-11-17 02:57:49.777010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.415 qpair failed and we were unable to recover it. 00:37:41.415 [2024-11-17 02:57:49.777177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.415 [2024-11-17 02:57:49.777226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.415 qpair failed and we were unable to recover it. 00:37:41.415 [2024-11-17 02:57:49.777341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.415 [2024-11-17 02:57:49.777376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.415 qpair failed and we were unable to recover it. 00:37:41.415 [2024-11-17 02:57:49.777486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.415 [2024-11-17 02:57:49.777522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.415 qpair failed and we were unable to recover it. 00:37:41.415 [2024-11-17 02:57:49.777661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.415 [2024-11-17 02:57:49.777695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.415 qpair failed and we were unable to recover it. 
00:37:41.415 [2024-11-17 02:57:49.777798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.415 [2024-11-17 02:57:49.777833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.415 qpair failed and we were unable to recover it. 00:37:41.415 [2024-11-17 02:57:49.777995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.415 [2024-11-17 02:57:49.778029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.415 qpair failed and we were unable to recover it. 00:37:41.415 [2024-11-17 02:57:49.778132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.415 [2024-11-17 02:57:49.778172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.415 qpair failed and we were unable to recover it. 00:37:41.415 [2024-11-17 02:57:49.778281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.415 [2024-11-17 02:57:49.778314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.415 qpair failed and we were unable to recover it. 00:37:41.415 [2024-11-17 02:57:49.778414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.415 [2024-11-17 02:57:49.778447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.415 qpair failed and we were unable to recover it. 
00:37:41.415 [2024-11-17 02:57:49.778580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.415 [2024-11-17 02:57:49.778613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.415 qpair failed and we were unable to recover it. 00:37:41.415 [2024-11-17 02:57:49.778726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.415 [2024-11-17 02:57:49.778766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.415 qpair failed and we were unable to recover it. 00:37:41.415 [2024-11-17 02:57:49.778881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.415 [2024-11-17 02:57:49.778916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.415 qpair failed and we were unable to recover it. 00:37:41.415 [2024-11-17 02:57:49.779056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.415 [2024-11-17 02:57:49.779091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.415 qpair failed and we were unable to recover it. 00:37:41.415 [2024-11-17 02:57:49.779210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.415 [2024-11-17 02:57:49.779243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.415 qpair failed and we were unable to recover it. 
00:37:41.415 [2024-11-17 02:57:49.779370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.415 [2024-11-17 02:57:49.779404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.415 qpair failed and we were unable to recover it. 00:37:41.415 [2024-11-17 02:57:49.779545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.415 [2024-11-17 02:57:49.779579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.415 qpair failed and we were unable to recover it. 00:37:41.415 [2024-11-17 02:57:49.779678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.416 [2024-11-17 02:57:49.779712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.416 qpair failed and we were unable to recover it. 00:37:41.416 [2024-11-17 02:57:49.779856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.416 [2024-11-17 02:57:49.779894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.416 qpair failed and we were unable to recover it. 00:37:41.416 [2024-11-17 02:57:49.780030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.416 [2024-11-17 02:57:49.780065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.416 qpair failed and we were unable to recover it. 
00:37:41.416 [2024-11-17 02:57:49.780222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.416 [2024-11-17 02:57:49.780256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.416 qpair failed and we were unable to recover it. 00:37:41.416 [2024-11-17 02:57:49.780359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.416 [2024-11-17 02:57:49.780393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.416 qpair failed and we were unable to recover it. 00:37:41.416 [2024-11-17 02:57:49.780527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.416 [2024-11-17 02:57:49.780561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.416 qpair failed and we were unable to recover it. 00:37:41.416 [2024-11-17 02:57:49.780663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.416 [2024-11-17 02:57:49.780697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.416 qpair failed and we were unable to recover it. 00:37:41.416 [2024-11-17 02:57:49.780843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.416 [2024-11-17 02:57:49.780878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.416 qpair failed and we were unable to recover it. 
00:37:41.416 [2024-11-17 02:57:49.780985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.416 [2024-11-17 02:57:49.781021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.416 qpair failed and we were unable to recover it. 00:37:41.416 [2024-11-17 02:57:49.781160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.416 [2024-11-17 02:57:49.781197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.416 qpair failed and we were unable to recover it. 00:37:41.416 [2024-11-17 02:57:49.781307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.416 [2024-11-17 02:57:49.781342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.416 qpair failed and we were unable to recover it. 00:37:41.416 [2024-11-17 02:57:49.781501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.416 [2024-11-17 02:57:49.781535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.416 qpair failed and we were unable to recover it. 00:37:41.416 [2024-11-17 02:57:49.781635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.416 [2024-11-17 02:57:49.781671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.416 qpair failed and we were unable to recover it. 
00:37:41.416 [2024-11-17 02:57:49.781770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.416 [2024-11-17 02:57:49.781804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.416 qpair failed and we were unable to recover it. 00:37:41.416 [2024-11-17 02:57:49.781924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.416 [2024-11-17 02:57:49.781972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.416 qpair failed and we were unable to recover it. 00:37:41.416 [2024-11-17 02:57:49.782083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.416 [2024-11-17 02:57:49.782125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.416 qpair failed and we were unable to recover it. 00:37:41.416 [2024-11-17 02:57:49.782268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.416 [2024-11-17 02:57:49.782303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.416 qpair failed and we were unable to recover it. 00:37:41.416 [2024-11-17 02:57:49.782425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.416 [2024-11-17 02:57:49.782460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.416 qpair failed and we were unable to recover it. 
00:37:41.416 [2024-11-17 02:57:49.782569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.416 [2024-11-17 02:57:49.782602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.416 qpair failed and we were unable to recover it. 00:37:41.416 [2024-11-17 02:57:49.782723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.416 [2024-11-17 02:57:49.782761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.416 qpair failed and we were unable to recover it. 00:37:41.416 [2024-11-17 02:57:49.782899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.416 [2024-11-17 02:57:49.782936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.416 qpair failed and we were unable to recover it. 00:37:41.416 [2024-11-17 02:57:49.783115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.416 [2024-11-17 02:57:49.783163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.416 qpair failed and we were unable to recover it. 00:37:41.416 [2024-11-17 02:57:49.783283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.416 [2024-11-17 02:57:49.783318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.416 qpair failed and we were unable to recover it. 
00:37:41.416 [2024-11-17 02:57:49.783458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.416 [2024-11-17 02:57:49.783494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.416 qpair failed and we were unable to recover it.
00:37:41.416 [2024-11-17 02:57:49.783630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.416 [2024-11-17 02:57:49.783665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.416 qpair failed and we were unable to recover it.
00:37:41.416 [2024-11-17 02:57:49.783801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.416 [2024-11-17 02:57:49.783835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.416 qpair failed and we were unable to recover it.
00:37:41.416 [2024-11-17 02:57:49.783957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.416 [2024-11-17 02:57:49.783992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.416 qpair failed and we were unable to recover it.
00:37:41.416 [2024-11-17 02:57:49.784136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.416 [2024-11-17 02:57:49.784185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.416 qpair failed and we were unable to recover it.
00:37:41.416 [2024-11-17 02:57:49.784303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.416 [2024-11-17 02:57:49.784338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.416 qpair failed and we were unable to recover it.
00:37:41.416 [2024-11-17 02:57:49.784444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.416 [2024-11-17 02:57:49.784478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.416 qpair failed and we were unable to recover it.
00:37:41.416 [2024-11-17 02:57:49.784612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.416 [2024-11-17 02:57:49.784651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.416 qpair failed and we were unable to recover it.
00:37:41.416 [2024-11-17 02:57:49.784811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.416 [2024-11-17 02:57:49.784858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.416 qpair failed and we were unable to recover it.
00:37:41.416 [2024-11-17 02:57:49.784978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.416 [2024-11-17 02:57:49.785014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.416 qpair failed and we were unable to recover it.
00:37:41.416 [2024-11-17 02:57:49.785179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.416 [2024-11-17 02:57:49.785215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.416 qpair failed and we were unable to recover it. 00:37:41.416 [2024-11-17 02:57:49.785346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.416 [2024-11-17 02:57:49.785381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.417 qpair failed and we were unable to recover it. 00:37:41.417 [2024-11-17 02:57:49.785557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.417 [2024-11-17 02:57:49.785593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.417 qpair failed and we were unable to recover it. 00:37:41.417 [2024-11-17 02:57:49.785728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.417 [2024-11-17 02:57:49.785776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.417 qpair failed and we were unable to recover it. 00:37:41.417 [2024-11-17 02:57:49.785889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.417 [2024-11-17 02:57:49.785925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.417 qpair failed and we were unable to recover it. 
00:37:41.417 [2024-11-17 02:57:49.786076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.417 [2024-11-17 02:57:49.786132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.417 qpair failed and we were unable to recover it.
00:37:41.417 [2024-11-17 02:57:49.786253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.417 [2024-11-17 02:57:49.786290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.417 qpair failed and we were unable to recover it.
00:37:41.417 [2024-11-17 02:57:49.786402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.417 [2024-11-17 02:57:49.786437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.417 qpair failed and we were unable to recover it.
00:37:41.417 [2024-11-17 02:57:49.786593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.417 [2024-11-17 02:57:49.786628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.417 qpair failed and we were unable to recover it.
00:37:41.417 [2024-11-17 02:57:49.786757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.417 [2024-11-17 02:57:49.786792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.417 qpair failed and we were unable to recover it.
00:37:41.417 [2024-11-17 02:57:49.786897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.417 [2024-11-17 02:57:49.786934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.417 qpair failed and we were unable to recover it.
00:37:41.417 [2024-11-17 02:57:49.787102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.417 [2024-11-17 02:57:49.787152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.417 qpair failed and we were unable to recover it.
00:37:41.417 [2024-11-17 02:57:49.787335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.417 [2024-11-17 02:57:49.787383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.417 qpair failed and we were unable to recover it.
00:37:41.417 [2024-11-17 02:57:49.787529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.417 [2024-11-17 02:57:49.787564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.417 qpair failed and we were unable to recover it.
00:37:41.417 [2024-11-17 02:57:49.787699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.417 [2024-11-17 02:57:49.787733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.417 qpair failed and we were unable to recover it.
00:37:41.417 [2024-11-17 02:57:49.787865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.417 [2024-11-17 02:57:49.787900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.417 qpair failed and we were unable to recover it.
00:37:41.417 [2024-11-17 02:57:49.788000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.417 [2024-11-17 02:57:49.788034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.417 qpair failed and we were unable to recover it.
00:37:41.417 [2024-11-17 02:57:49.788180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.417 [2024-11-17 02:57:49.788217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.417 qpair failed and we were unable to recover it.
00:37:41.417 [2024-11-17 02:57:49.788372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.417 [2024-11-17 02:57:49.788420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.417 qpair failed and we were unable to recover it.
00:37:41.417 [2024-11-17 02:57:49.788537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.417 [2024-11-17 02:57:49.788573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.417 qpair failed and we were unable to recover it.
00:37:41.417 [2024-11-17 02:57:49.788677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.417 [2024-11-17 02:57:49.788711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.417 qpair failed and we were unable to recover it.
00:37:41.417 [2024-11-17 02:57:49.788821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.417 [2024-11-17 02:57:49.788857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.417 qpair failed and we were unable to recover it.
00:37:41.417 [2024-11-17 02:57:49.789044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.417 [2024-11-17 02:57:49.789082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.417 qpair failed and we were unable to recover it.
00:37:41.417 [2024-11-17 02:57:49.789226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.417 [2024-11-17 02:57:49.789260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.417 qpair failed and we were unable to recover it.
00:37:41.417 [2024-11-17 02:57:49.789372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.417 [2024-11-17 02:57:49.789406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.417 qpair failed and we were unable to recover it.
00:37:41.417 [2024-11-17 02:57:49.789512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.417 [2024-11-17 02:57:49.789545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.417 qpair failed and we were unable to recover it.
00:37:41.417 [2024-11-17 02:57:49.789711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.417 [2024-11-17 02:57:49.789746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.417 qpair failed and we were unable to recover it.
00:37:41.417 [2024-11-17 02:57:49.789883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.417 [2024-11-17 02:57:49.789918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.417 qpair failed and we were unable to recover it.
00:37:41.417 [2024-11-17 02:57:49.790055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.417 [2024-11-17 02:57:49.790091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.417 qpair failed and we were unable to recover it.
00:37:41.417 [2024-11-17 02:57:49.790217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.417 [2024-11-17 02:57:49.790255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.417 qpair failed and we were unable to recover it.
00:37:41.417 [2024-11-17 02:57:49.790365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.417 [2024-11-17 02:57:49.790401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.417 qpair failed and we were unable to recover it.
00:37:41.417 [2024-11-17 02:57:49.790538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.417 [2024-11-17 02:57:49.790573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.417 qpair failed and we were unable to recover it.
00:37:41.417 [2024-11-17 02:57:49.790684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.417 [2024-11-17 02:57:49.790718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.417 qpair failed and we were unable to recover it.
00:37:41.417 [2024-11-17 02:57:49.790900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.417 [2024-11-17 02:57:49.790942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.417 qpair failed and we were unable to recover it.
00:37:41.417 [2024-11-17 02:57:49.791068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.417 [2024-11-17 02:57:49.791122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.417 qpair failed and we were unable to recover it.
00:37:41.417 [2024-11-17 02:57:49.791247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.417 [2024-11-17 02:57:49.791294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.417 qpair failed and we were unable to recover it.
00:37:41.417 [2024-11-17 02:57:49.791440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.417 [2024-11-17 02:57:49.791476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.417 qpair failed and we were unable to recover it.
00:37:41.417 [2024-11-17 02:57:49.791619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.417 [2024-11-17 02:57:49.791658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.417 qpair failed and we were unable to recover it.
00:37:41.417 [2024-11-17 02:57:49.791789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.417 [2024-11-17 02:57:49.791823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.417 qpair failed and we were unable to recover it.
00:37:41.417 [2024-11-17 02:57:49.791962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.417 [2024-11-17 02:57:49.792002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.417 qpair failed and we were unable to recover it.
00:37:41.417 [2024-11-17 02:57:49.792149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.417 [2024-11-17 02:57:49.792193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.417 qpair failed and we were unable to recover it.
00:37:41.417 [2024-11-17 02:57:49.792324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.417 [2024-11-17 02:57:49.792366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.417 qpair failed and we were unable to recover it.
00:37:41.417 [2024-11-17 02:57:49.792512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.417 [2024-11-17 02:57:49.792547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.417 qpair failed and we were unable to recover it.
00:37:41.417 [2024-11-17 02:57:49.792687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.417 [2024-11-17 02:57:49.792720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.417 qpair failed and we were unable to recover it.
00:37:41.417 [2024-11-17 02:57:49.792827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.417 [2024-11-17 02:57:49.792864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.417 qpair failed and we were unable to recover it.
00:37:41.417 [2024-11-17 02:57:49.793017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.417 [2024-11-17 02:57:49.793066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.417 qpair failed and we were unable to recover it.
00:37:41.418 [2024-11-17 02:57:49.793227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.418 [2024-11-17 02:57:49.793262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.418 qpair failed and we were unable to recover it.
00:37:41.418 [2024-11-17 02:57:49.793383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.418 [2024-11-17 02:57:49.793418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.418 qpair failed and we were unable to recover it.
00:37:41.418 [2024-11-17 02:57:49.793516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.418 [2024-11-17 02:57:49.793551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.418 qpair failed and we were unable to recover it.
00:37:41.418 [2024-11-17 02:57:49.793652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.418 [2024-11-17 02:57:49.793687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.418 qpair failed and we were unable to recover it.
00:37:41.418 [2024-11-17 02:57:49.793791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.418 [2024-11-17 02:57:49.793826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.418 qpair failed and we were unable to recover it.
00:37:41.418 [2024-11-17 02:57:49.793967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.418 [2024-11-17 02:57:49.794001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.418 qpair failed and we were unable to recover it.
00:37:41.418 [2024-11-17 02:57:49.794149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.418 [2024-11-17 02:57:49.794188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.418 qpair failed and we were unable to recover it.
00:37:41.418 [2024-11-17 02:57:49.794298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.418 [2024-11-17 02:57:49.794331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.418 qpair failed and we were unable to recover it.
00:37:41.418 [2024-11-17 02:57:49.794441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.418 [2024-11-17 02:57:49.794478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.418 qpair failed and we were unable to recover it.
00:37:41.679 [2024-11-17 02:57:49.794589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.679 [2024-11-17 02:57:49.794623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.679 qpair failed and we were unable to recover it.
00:37:41.679 [2024-11-17 02:57:49.794753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.679 [2024-11-17 02:57:49.794788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.679 qpair failed and we were unable to recover it.
00:37:41.679 [2024-11-17 02:57:49.794890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.679 [2024-11-17 02:57:49.794925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.679 qpair failed and we were unable to recover it.
00:37:41.679 [2024-11-17 02:57:49.795038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.679 [2024-11-17 02:57:49.795073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.679 qpair failed and we were unable to recover it.
00:37:41.679 [2024-11-17 02:57:49.795238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.679 [2024-11-17 02:57:49.795285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:41.679 qpair failed and we were unable to recover it.
00:37:41.679 [2024-11-17 02:57:49.795403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.679 [2024-11-17 02:57:49.795449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.679 qpair failed and we were unable to recover it.
00:37:41.679 [2024-11-17 02:57:49.795564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.679 [2024-11-17 02:57:49.795597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.679 qpair failed and we were unable to recover it.
00:37:41.679 [2024-11-17 02:57:49.795706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.679 [2024-11-17 02:57:49.795739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.679 qpair failed and we were unable to recover it.
00:37:41.679 [2024-11-17 02:57:49.795872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.679 [2024-11-17 02:57:49.795906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.679 qpair failed and we were unable to recover it.
00:37:41.679 [2024-11-17 02:57:49.796011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.679 [2024-11-17 02:57:49.796044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.679 qpair failed and we were unable to recover it.
00:37:41.679 [2024-11-17 02:57:49.796220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.679 [2024-11-17 02:57:49.796254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.679 qpair failed and we were unable to recover it.
00:37:41.679 [2024-11-17 02:57:49.796353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.679 [2024-11-17 02:57:49.796388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.679 qpair failed and we were unable to recover it.
00:37:41.679 [2024-11-17 02:57:49.796529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.679 [2024-11-17 02:57:49.796562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.679 qpair failed and we were unable to recover it.
00:37:41.679 [2024-11-17 02:57:49.796663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.679 [2024-11-17 02:57:49.796696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.679 qpair failed and we were unable to recover it.
00:37:41.679 [2024-11-17 02:57:49.796831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.679 [2024-11-17 02:57:49.796864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.679 qpair failed and we were unable to recover it.
00:37:41.679 [2024-11-17 02:57:49.796961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.679 [2024-11-17 02:57:49.796994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.679 qpair failed and we were unable to recover it.
00:37:41.679 [2024-11-17 02:57:49.797193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.680 [2024-11-17 02:57:49.797242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.680 qpair failed and we were unable to recover it.
00:37:41.680 [2024-11-17 02:57:49.797394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.680 [2024-11-17 02:57:49.797431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.680 qpair failed and we were unable to recover it.
00:37:41.680 [2024-11-17 02:57:49.797564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.680 [2024-11-17 02:57:49.797599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.680 qpair failed and we were unable to recover it.
00:37:41.680 [2024-11-17 02:57:49.797762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.680 [2024-11-17 02:57:49.797797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.680 qpair failed and we were unable to recover it.
00:37:41.680 [2024-11-17 02:57:49.797931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.680 [2024-11-17 02:57:49.797965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.680 qpair failed and we were unable to recover it.
00:37:41.680 [2024-11-17 02:57:49.798101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.680 [2024-11-17 02:57:49.798136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.680 qpair failed and we were unable to recover it.
00:37:41.680 [2024-11-17 02:57:49.798243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.680 [2024-11-17 02:57:49.798282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.680 qpair failed and we were unable to recover it.
00:37:41.680 [2024-11-17 02:57:49.798423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.680 [2024-11-17 02:57:49.798460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.680 qpair failed and we were unable to recover it.
00:37:41.680 [2024-11-17 02:57:49.798600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.680 [2024-11-17 02:57:49.798637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.680 qpair failed and we were unable to recover it.
00:37:41.680 [2024-11-17 02:57:49.798784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.680 [2024-11-17 02:57:49.798819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.680 qpair failed and we were unable to recover it.
00:37:41.680 [2024-11-17 02:57:49.798956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.680 [2024-11-17 02:57:49.798990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:41.680 qpair failed and we were unable to recover it.
00:37:41.680 [2024-11-17 02:57:49.799117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.680 [2024-11-17 02:57:49.799153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:41.680 qpair failed and we were unable to recover it.
00:37:41.680 [2024-11-17 02:57:49.799262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.680 [2024-11-17 02:57:49.799299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.680 qpair failed and we were unable to recover it.
00:37:41.680 [2024-11-17 02:57:49.799418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.680 [2024-11-17 02:57:49.799453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:41.680 qpair failed and we were unable to recover it.
00:37:41.680 [2024-11-17 02:57:49.799588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.680 [2024-11-17 02:57:49.799622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.680 qpair failed and we were unable to recover it. 00:37:41.680 [2024-11-17 02:57:49.799783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.680 [2024-11-17 02:57:49.799818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.680 qpair failed and we were unable to recover it. 00:37:41.680 [2024-11-17 02:57:49.799950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.680 [2024-11-17 02:57:49.799984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.680 qpair failed and we were unable to recover it. 00:37:41.680 [2024-11-17 02:57:49.800115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.680 [2024-11-17 02:57:49.800150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.680 qpair failed and we were unable to recover it. 00:37:41.680 [2024-11-17 02:57:49.800251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.680 [2024-11-17 02:57:49.800285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.680 qpair failed and we were unable to recover it. 
00:37:41.680 [2024-11-17 02:57:49.800420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.680 [2024-11-17 02:57:49.800455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.680 qpair failed and we were unable to recover it. 00:37:41.680 [2024-11-17 02:57:49.800631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.680 [2024-11-17 02:57:49.800671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.680 qpair failed and we were unable to recover it. 00:37:41.680 [2024-11-17 02:57:49.800794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.680 [2024-11-17 02:57:49.800829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.680 qpair failed and we were unable to recover it. 00:37:41.680 [2024-11-17 02:57:49.800938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.680 [2024-11-17 02:57:49.800971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.680 qpair failed and we were unable to recover it. 00:37:41.680 [2024-11-17 02:57:49.801077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.680 [2024-11-17 02:57:49.801121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.680 qpair failed and we were unable to recover it. 
00:37:41.680 [2024-11-17 02:57:49.801260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.680 [2024-11-17 02:57:49.801293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.680 qpair failed and we were unable to recover it. 00:37:41.680 [2024-11-17 02:57:49.801426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.680 [2024-11-17 02:57:49.801459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.680 qpair failed and we were unable to recover it. 00:37:41.680 [2024-11-17 02:57:49.801556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.680 [2024-11-17 02:57:49.801589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.680 qpair failed and we were unable to recover it. 00:37:41.680 [2024-11-17 02:57:49.801688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.680 [2024-11-17 02:57:49.801721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.680 qpair failed and we were unable to recover it. 00:37:41.680 [2024-11-17 02:57:49.801863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.680 [2024-11-17 02:57:49.801906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.680 qpair failed and we were unable to recover it. 
00:37:41.680 [2024-11-17 02:57:49.802022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.680 [2024-11-17 02:57:49.802058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.680 qpair failed and we were unable to recover it. 00:37:41.680 [2024-11-17 02:57:49.802213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.680 [2024-11-17 02:57:49.802248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.680 qpair failed and we were unable to recover it. 00:37:41.680 [2024-11-17 02:57:49.802385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.680 [2024-11-17 02:57:49.802420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.680 qpair failed and we were unable to recover it. 00:37:41.680 [2024-11-17 02:57:49.802525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.680 [2024-11-17 02:57:49.802561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.680 qpair failed and we were unable to recover it. 00:37:41.680 [2024-11-17 02:57:49.802668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.680 [2024-11-17 02:57:49.802702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.680 qpair failed and we were unable to recover it. 
00:37:41.680 [2024-11-17 02:57:49.802818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.680 [2024-11-17 02:57:49.802867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.680 qpair failed and we were unable to recover it. 00:37:41.680 [2024-11-17 02:57:49.803007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.680 [2024-11-17 02:57:49.803042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.680 qpair failed and we were unable to recover it. 00:37:41.680 A controller has encountered a failure and is being reset. 00:37:41.680 [2024-11-17 02:57:49.803235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.680 [2024-11-17 02:57:49.803283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.680 qpair failed and we were unable to recover it. 00:37:41.680 [2024-11-17 02:57:49.803430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.680 [2024-11-17 02:57:49.803465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.681 qpair failed and we were unable to recover it. 00:37:41.681 [2024-11-17 02:57:49.803605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.681 [2024-11-17 02:57:49.803640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.681 qpair failed and we were unable to recover it. 
00:37:41.681 [2024-11-17 02:57:49.803749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.681 [2024-11-17 02:57:49.803783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.681 qpair failed and we were unable to recover it. 00:37:41.681 [2024-11-17 02:57:49.803925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.681 [2024-11-17 02:57:49.803959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.681 qpair failed and we were unable to recover it. 00:37:41.681 [2024-11-17 02:57:49.804115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.681 [2024-11-17 02:57:49.804154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.681 qpair failed and we were unable to recover it. 00:37:41.681 [2024-11-17 02:57:49.804292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.681 [2024-11-17 02:57:49.804327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.681 qpair failed and we were unable to recover it. 00:37:41.681 [2024-11-17 02:57:49.804436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.681 [2024-11-17 02:57:49.804470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.681 qpair failed and we were unable to recover it. 
00:37:41.681 [2024-11-17 02:57:49.804599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.681 [2024-11-17 02:57:49.804632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.681 qpair failed and we were unable to recover it. 00:37:41.681 [2024-11-17 02:57:49.804735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.681 [2024-11-17 02:57:49.804769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.681 qpair failed and we were unable to recover it. 00:37:41.681 [2024-11-17 02:57:49.804908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.681 [2024-11-17 02:57:49.804941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.681 qpair failed and we were unable to recover it. 00:37:41.681 [2024-11-17 02:57:49.805071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.681 [2024-11-17 02:57:49.805116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.681 qpair failed and we were unable to recover it. 00:37:41.681 [2024-11-17 02:57:49.805235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.681 [2024-11-17 02:57:49.805270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.681 qpair failed and we were unable to recover it. 
00:37:41.681 [2024-11-17 02:57:49.805380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.681 [2024-11-17 02:57:49.805414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.681 qpair failed and we were unable to recover it. 00:37:41.681 [2024-11-17 02:57:49.805513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.681 [2024-11-17 02:57:49.805547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.681 qpair failed and we were unable to recover it. 00:37:41.681 [2024-11-17 02:57:49.805675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.681 [2024-11-17 02:57:49.805709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.681 qpair failed and we were unable to recover it. 00:37:41.681 [2024-11-17 02:57:49.805819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.681 [2024-11-17 02:57:49.805853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.681 qpair failed and we were unable to recover it. 00:37:41.681 [2024-11-17 02:57:49.805985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.681 [2024-11-17 02:57:49.806021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.681 qpair failed and we were unable to recover it. 
00:37:41.681 [2024-11-17 02:57:49.806181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.681 [2024-11-17 02:57:49.806229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.681 qpair failed and we were unable to recover it. 00:37:41.681 [2024-11-17 02:57:49.806371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.681 [2024-11-17 02:57:49.806406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.681 qpair failed and we were unable to recover it. 00:37:41.681 [2024-11-17 02:57:49.806501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.681 [2024-11-17 02:57:49.806534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.681 qpair failed and we were unable to recover it. 00:37:41.681 [2024-11-17 02:57:49.806669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.681 [2024-11-17 02:57:49.806703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.681 qpair failed and we were unable to recover it. 00:37:41.681 [2024-11-17 02:57:49.806843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.681 [2024-11-17 02:57:49.806876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.681 qpair failed and we were unable to recover it. 
00:37:41.681 [2024-11-17 02:57:49.806978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.681 [2024-11-17 02:57:49.807016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.681 qpair failed and we were unable to recover it. 00:37:41.681 [2024-11-17 02:57:49.807128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.681 [2024-11-17 02:57:49.807163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.681 qpair failed and we were unable to recover it. 00:37:41.681 [2024-11-17 02:57:49.807274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.681 [2024-11-17 02:57:49.807310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.681 qpair failed and we were unable to recover it. 00:37:41.681 [2024-11-17 02:57:49.807431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.681 [2024-11-17 02:57:49.807466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.681 qpair failed and we were unable to recover it. 00:37:41.681 [2024-11-17 02:57:49.807596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.681 [2024-11-17 02:57:49.807630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.681 qpair failed and we were unable to recover it. 
00:37:41.681 [2024-11-17 02:57:49.807768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.681 [2024-11-17 02:57:49.807802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.681 qpair failed and we were unable to recover it. 00:37:41.681 [2024-11-17 02:57:49.807942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.681 [2024-11-17 02:57:49.807976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.681 qpair failed and we were unable to recover it. 00:37:41.681 [2024-11-17 02:57:49.808117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.681 [2024-11-17 02:57:49.808152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.681 qpair failed and we were unable to recover it. 00:37:41.681 [2024-11-17 02:57:49.808294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.681 [2024-11-17 02:57:49.808342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.681 qpair failed and we were unable to recover it. 00:37:41.681 [2024-11-17 02:57:49.808461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.681 [2024-11-17 02:57:49.808498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.681 qpair failed and we were unable to recover it. 
00:37:41.681 [2024-11-17 02:57:49.808628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.681 [2024-11-17 02:57:49.808663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.681 qpair failed and we were unable to recover it. 00:37:41.681 [2024-11-17 02:57:49.808768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.681 [2024-11-17 02:57:49.808803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.681 qpair failed and we were unable to recover it. 00:37:41.681 [2024-11-17 02:57:49.808951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.681 [2024-11-17 02:57:49.808985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.681 qpair failed and we were unable to recover it. 00:37:41.681 [2024-11-17 02:57:49.809125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.681 [2024-11-17 02:57:49.809161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.681 qpair failed and we were unable to recover it. 00:37:41.681 [2024-11-17 02:57:49.809300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.681 [2024-11-17 02:57:49.809335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.681 qpair failed and we were unable to recover it. 
00:37:41.681 [2024-11-17 02:57:49.809458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.681 [2024-11-17 02:57:49.809492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.681 qpair failed and we were unable to recover it. 00:37:41.681 [2024-11-17 02:57:49.809612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.682 [2024-11-17 02:57:49.809646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.682 qpair failed and we were unable to recover it. 00:37:41.682 [2024-11-17 02:57:49.809786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.682 [2024-11-17 02:57:49.809820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.682 qpair failed and we were unable to recover it. 00:37:41.682 [2024-11-17 02:57:49.809948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.682 [2024-11-17 02:57:49.809982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.682 qpair failed and we were unable to recover it. 00:37:41.682 [2024-11-17 02:57:49.810092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.682 [2024-11-17 02:57:49.810133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.682 qpair failed and we were unable to recover it. 
00:37:41.682 [2024-11-17 02:57:49.810271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.682 [2024-11-17 02:57:49.810307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.682 qpair failed and we were unable to recover it. 00:37:41.682 [2024-11-17 02:57:49.810457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.682 [2024-11-17 02:57:49.810492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.682 qpair failed and we were unable to recover it. 00:37:41.682 [2024-11-17 02:57:49.810646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.682 [2024-11-17 02:57:49.810681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.682 qpair failed and we were unable to recover it. 00:37:41.682 [2024-11-17 02:57:49.810818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.682 [2024-11-17 02:57:49.810853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.682 qpair failed and we were unable to recover it. 00:37:41.682 [2024-11-17 02:57:49.810960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.682 [2024-11-17 02:57:49.810996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.682 qpair failed and we were unable to recover it. 
00:37:41.682 [2024-11-17 02:57:49.811113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.682 [2024-11-17 02:57:49.811158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.682 qpair failed and we were unable to recover it. 00:37:41.682 [2024-11-17 02:57:49.811294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.682 [2024-11-17 02:57:49.811327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.682 qpair failed and we were unable to recover it. 00:37:41.682 [2024-11-17 02:57:49.811434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.682 [2024-11-17 02:57:49.811474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.682 qpair failed and we were unable to recover it. 00:37:41.682 [2024-11-17 02:57:49.811580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.682 [2024-11-17 02:57:49.811614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.682 qpair failed and we were unable to recover it. 00:37:41.682 [2024-11-17 02:57:49.811853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.682 [2024-11-17 02:57:49.811887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.682 qpair failed and we were unable to recover it. 
00:37:41.682 [2024-11-17 02:57:49.812018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.682 [2024-11-17 02:57:49.812052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.682 qpair failed and we were unable to recover it. 00:37:41.682 [2024-11-17 02:57:49.812182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.682 [2024-11-17 02:57:49.812217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.682 qpair failed and we were unable to recover it. 00:37:41.682 [2024-11-17 02:57:49.812351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.682 [2024-11-17 02:57:49.812386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.682 qpair failed and we were unable to recover it. 00:37:41.682 [2024-11-17 02:57:49.812537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.682 [2024-11-17 02:57:49.812571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.682 qpair failed and we were unable to recover it. 00:37:41.682 [2024-11-17 02:57:49.812723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.682 [2024-11-17 02:57:49.812771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.682 qpair failed and we were unable to recover it. 
00:37:41.682 [2024-11-17 02:57:49.812884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.682 [2024-11-17 02:57:49.812920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.682 qpair failed and we were unable to recover it. 00:37:41.682 [2024-11-17 02:57:49.813045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.682 [2024-11-17 02:57:49.813081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.682 qpair failed and we were unable to recover it. 00:37:41.682 [2024-11-17 02:57:49.813202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.682 [2024-11-17 02:57:49.813237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.682 qpair failed and we were unable to recover it. 00:37:41.682 [2024-11-17 02:57:49.813369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.682 [2024-11-17 02:57:49.813403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.682 qpair failed and we were unable to recover it. 00:37:41.682 [2024-11-17 02:57:49.813513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.682 [2024-11-17 02:57:49.813547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.682 qpair failed and we were unable to recover it. 
00:37:41.682 [2024-11-17 02:57:49.813657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.682 [2024-11-17 02:57:49.813691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.682 qpair failed and we were unable to recover it. 00:37:41.682 [2024-11-17 02:57:49.813810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.682 [2024-11-17 02:57:49.813845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.682 qpair failed and we were unable to recover it. 00:37:41.682 [2024-11-17 02:57:49.813985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.682 [2024-11-17 02:57:49.814022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.682 qpair failed and we were unable to recover it. 00:37:41.682 [2024-11-17 02:57:49.814167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.682 [2024-11-17 02:57:49.814216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.682 qpair failed and we were unable to recover it. 00:37:41.682 [2024-11-17 02:57:49.814360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.682 [2024-11-17 02:57:49.814410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.682 qpair failed and we were unable to recover it. 
00:37:41.682 [2024-11-17 02:57:49.814542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.682 [2024-11-17 02:57:49.814577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.682 qpair failed and we were unable to recover it. 00:37:41.682 [2024-11-17 02:57:49.814687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.682 [2024-11-17 02:57:49.814722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.682 qpair failed and we were unable to recover it. 00:37:41.682 [2024-11-17 02:57:49.814908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.682 [2024-11-17 02:57:49.814943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.682 qpair failed and we were unable to recover it. 00:37:41.682 [2024-11-17 02:57:49.815080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.682 [2024-11-17 02:57:49.815126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.682 qpair failed and we were unable to recover it. 00:37:41.682 [2024-11-17 02:57:49.815258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.682 [2024-11-17 02:57:49.815294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.682 qpair failed and we were unable to recover it. 
00:37:41.682 [2024-11-17 02:57:49.815433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.682 [2024-11-17 02:57:49.815470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.682 qpair failed and we were unable to recover it. 00:37:41.682 [2024-11-17 02:57:49.815614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.682 [2024-11-17 02:57:49.815650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.682 qpair failed and we were unable to recover it. 00:37:41.682 [2024-11-17 02:57:49.815752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.682 [2024-11-17 02:57:49.815786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.682 qpair failed and we were unable to recover it. 00:37:41.682 [2024-11-17 02:57:49.815896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.682 [2024-11-17 02:57:49.815931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.682 qpair failed and we were unable to recover it. 00:37:41.683 [2024-11-17 02:57:49.816072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.683 [2024-11-17 02:57:49.816129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.683 qpair failed and we were unable to recover it. 
00:37:41.683 [2024-11-17 02:57:49.816309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.683 [2024-11-17 02:57:49.816344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.683 qpair failed and we were unable to recover it. 00:37:41.683 [2024-11-17 02:57:49.816485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.683 [2024-11-17 02:57:49.816531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.683 qpair failed and we were unable to recover it. 00:37:41.683 [2024-11-17 02:57:49.816700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.683 [2024-11-17 02:57:49.816735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.683 qpair failed and we were unable to recover it. 00:37:41.683 [2024-11-17 02:57:49.816833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.683 [2024-11-17 02:57:49.816866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.683 qpair failed and we were unable to recover it. 00:37:41.683 [2024-11-17 02:57:49.817005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.683 [2024-11-17 02:57:49.817041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.683 qpair failed and we were unable to recover it. 
00:37:41.683 [2024-11-17 02:57:49.817207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.683 [2024-11-17 02:57:49.817256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.683 qpair failed and we were unable to recover it. 00:37:41.683 [2024-11-17 02:57:49.817372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.683 [2024-11-17 02:57:49.817413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.683 qpair failed and we were unable to recover it. 00:37:41.683 [2024-11-17 02:57:49.817552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.683 [2024-11-17 02:57:49.817588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.683 qpair failed and we were unable to recover it. 00:37:41.683 [2024-11-17 02:57:49.817734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.683 [2024-11-17 02:57:49.817769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.683 qpair failed and we were unable to recover it. 00:37:41.683 [2024-11-17 02:57:49.817904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.683 [2024-11-17 02:57:49.817937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.683 qpair failed and we were unable to recover it. 
00:37:41.683 [2024-11-17 02:57:49.818041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.683 [2024-11-17 02:57:49.818074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.683 qpair failed and we were unable to recover it. 00:37:41.683 [2024-11-17 02:57:49.818255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.683 [2024-11-17 02:57:49.818288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.683 qpair failed and we were unable to recover it. 00:37:41.683 [2024-11-17 02:57:49.818399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.683 [2024-11-17 02:57:49.818440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.683 qpair failed and we were unable to recover it. 00:37:41.683 [2024-11-17 02:57:49.818573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.683 [2024-11-17 02:57:49.818607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.683 qpair failed and we were unable to recover it. 00:37:41.683 [2024-11-17 02:57:49.818746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.683 [2024-11-17 02:57:49.818781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.683 qpair failed and we were unable to recover it. 
00:37:41.683 [2024-11-17 02:57:49.818917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.683 [2024-11-17 02:57:49.818951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.683 qpair failed and we were unable to recover it. 00:37:41.683 [2024-11-17 02:57:49.819053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.683 [2024-11-17 02:57:49.819088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.683 qpair failed and we were unable to recover it. 00:37:41.683 [2024-11-17 02:57:49.819243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.683 [2024-11-17 02:57:49.819277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.683 qpair failed and we were unable to recover it. 00:37:41.683 [2024-11-17 02:57:49.819388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.683 [2024-11-17 02:57:49.819422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.683 qpair failed and we were unable to recover it. 00:37:41.683 [2024-11-17 02:57:49.819551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.683 [2024-11-17 02:57:49.819601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.683 qpair failed and we were unable to recover it. 
00:37:41.683 [2024-11-17 02:57:49.819753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.683 [2024-11-17 02:57:49.819791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.683 qpair failed and we were unable to recover it. 00:37:41.683 [2024-11-17 02:57:49.819929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.683 [2024-11-17 02:57:49.819965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.683 qpair failed and we were unable to recover it. 00:37:41.683 [2024-11-17 02:57:49.820072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.683 [2024-11-17 02:57:49.820128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.683 qpair failed and we were unable to recover it. 00:37:41.683 [2024-11-17 02:57:49.820241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.683 [2024-11-17 02:57:49.820277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.683 qpair failed and we were unable to recover it. 00:37:41.683 [2024-11-17 02:57:49.820387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.683 [2024-11-17 02:57:49.820423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.683 qpair failed and we were unable to recover it. 
00:37:41.683 [2024-11-17 02:57:49.820577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.683 [2024-11-17 02:57:49.820611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.683 qpair failed and we were unable to recover it. 00:37:41.683 [2024-11-17 02:57:49.820779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.683 [2024-11-17 02:57:49.820813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.683 qpair failed and we were unable to recover it. 00:37:41.683 [2024-11-17 02:57:49.820958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.683 [2024-11-17 02:57:49.820994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.683 qpair failed and we were unable to recover it. 00:37:41.683 [2024-11-17 02:57:49.821136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.683 [2024-11-17 02:57:49.821170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.683 qpair failed and we were unable to recover it. 00:37:41.683 [2024-11-17 02:57:49.821271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.683 [2024-11-17 02:57:49.821305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.683 qpair failed and we were unable to recover it. 
00:37:41.683 [2024-11-17 02:57:49.821451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.683 [2024-11-17 02:57:49.821485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.683 qpair failed and we were unable to recover it. 00:37:41.683 [2024-11-17 02:57:49.821588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.683 [2024-11-17 02:57:49.821621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.683 qpair failed and we were unable to recover it. 00:37:41.683 [2024-11-17 02:57:49.821783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.683 [2024-11-17 02:57:49.821819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.683 qpair failed and we were unable to recover it. 00:37:41.683 [2024-11-17 02:57:49.821969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.683 [2024-11-17 02:57:49.822017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.683 qpair failed and we were unable to recover it. 00:37:41.683 [2024-11-17 02:57:49.822192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.683 [2024-11-17 02:57:49.822229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:41.683 qpair failed and we were unable to recover it. 
00:37:41.683 [2024-11-17 02:57:49.822367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.683 [2024-11-17 02:57:49.822403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.683 qpair failed and we were unable to recover it. 00:37:41.683 [2024-11-17 02:57:49.822541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.683 [2024-11-17 02:57:49.822576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.683 qpair failed and we were unable to recover it. 00:37:41.684 [2024-11-17 02:57:49.822711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.684 [2024-11-17 02:57:49.822746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.684 qpair failed and we were unable to recover it. 00:37:41.684 [2024-11-17 02:57:49.822889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.684 [2024-11-17 02:57:49.822923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.684 qpair failed and we were unable to recover it. 00:37:41.684 [2024-11-17 02:57:49.823060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.684 [2024-11-17 02:57:49.823113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.684 qpair failed and we were unable to recover it. 
00:37:41.684 [2024-11-17 02:57:49.823255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.684 [2024-11-17 02:57:49.823289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.684 qpair failed and we were unable to recover it. 00:37:41.684 [2024-11-17 02:57:49.823429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.684 [2024-11-17 02:57:49.823475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.684 qpair failed and we were unable to recover it. 00:37:41.684 [2024-11-17 02:57:49.823621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.684 [2024-11-17 02:57:49.823656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.684 qpair failed and we were unable to recover it. 00:37:41.684 [2024-11-17 02:57:49.823789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.684 [2024-11-17 02:57:49.823823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.684 qpair failed and we were unable to recover it. 00:37:41.684 [2024-11-17 02:57:49.823958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.684 [2024-11-17 02:57:49.823993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.684 qpair failed and we were unable to recover it. 
00:37:41.684 [2024-11-17 02:57:49.824121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.684 [2024-11-17 02:57:49.824155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.684 qpair failed and we were unable to recover it. 00:37:41.684 [2024-11-17 02:57:49.824256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.684 [2024-11-17 02:57:49.824290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.684 qpair failed and we were unable to recover it. 00:37:41.684 [2024-11-17 02:57:49.824391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.684 [2024-11-17 02:57:49.824424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.684 qpair failed and we were unable to recover it. 00:37:41.684 [2024-11-17 02:57:49.824523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.684 [2024-11-17 02:57:49.824557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.684 qpair failed and we were unable to recover it. 00:37:41.684 [2024-11-17 02:57:49.824687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.684 [2024-11-17 02:57:49.824721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.684 qpair failed and we were unable to recover it. 
00:37:41.684 [2024-11-17 02:57:49.824821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.684 [2024-11-17 02:57:49.824856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.684 qpair failed and we were unable to recover it. 00:37:41.684 [2024-11-17 02:57:49.824958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.684 [2024-11-17 02:57:49.824992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:41.684 qpair failed and we were unable to recover it. 00:37:41.684 [2024-11-17 02:57:49.825103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.684 [2024-11-17 02:57:49.825143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:41.684 qpair failed and we were unable to recover it. 00:37:41.684 [2024-11-17 02:57:49.825296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.684 [2024-11-17 02:57:49.825345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:41.684 qpair failed and we were unable to recover it. 
00:37:41.684 [2024-11-17 02:57:49.825570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.684 [2024-11-17 02:57:49.825616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:41.684 [2024-11-17 02:57:49.825644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2780 is same with the state(6) to be set 00:37:41.684 [2024-11-17 02:57:49.825686] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2780 (9): Bad file descriptor 00:37:41.684 [2024-11-17 02:57:49.825717] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:37:41.684 [2024-11-17 02:57:49.825744] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:37:41.684 [2024-11-17 02:57:49.825771] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:37:41.684 Unable to reset the controller. 00:37:41.684 [2024-11-17 02:57:49.846917] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:41.684 [2024-11-17 02:57:49.846980] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:41.684 [2024-11-17 02:57:49.847005] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:41.684 [2024-11-17 02:57:49.847026] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:41.684 [2024-11-17 02:57:49.847044] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:37:41.684 [2024-11-17 02:57:49.849692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:37:41.684 [2024-11-17 02:57:49.849748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:37:41.684 [2024-11-17 02:57:49.849799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:37:41.684 [2024-11-17 02:57:49.849804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:37:42.250 02:57:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:42.250 02:57:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:37:42.250 02:57:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:42.250 02:57:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:42.250 02:57:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:42.250 02:57:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:42.250 02:57:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:42.250 02:57:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:42.250 02:57:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:42.250 Malloc0 00:37:42.250 02:57:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:42.250 02:57:50 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:37:42.250 02:57:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:42.250 02:57:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:42.250 [2024-11-17 02:57:50.626642] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:42.250 02:57:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:42.250 02:57:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:42.251 02:57:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:42.251 02:57:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:42.251 02:57:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:42.251 02:57:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:42.251 02:57:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:42.251 02:57:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:42.251 02:57:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:42.251 02:57:50 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:42.251 02:57:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:42.251 02:57:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:42.251 [2024-11-17 02:57:50.656680] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:42.251 02:57:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:42.251 02:57:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:42.251 02:57:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:42.251 02:57:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:42.251 02:57:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:42.251 02:57:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3142188 00:37:42.508 Controller properly reset. 
00:37:47.773 Initializing NVMe Controllers 00:37:47.773 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:47.773 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:47.773 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:37:47.773 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:37:47.773 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:37:47.773 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:37:47.773 Initialization complete. Launching workers. 00:37:47.773 Starting thread on core 1 00:37:47.773 Starting thread on core 2 00:37:47.773 Starting thread on core 3 00:37:47.773 Starting thread on core 0 00:37:47.773 02:57:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:37:47.773 00:37:47.773 real 0m11.648s 00:37:47.773 user 0m36.596s 00:37:47.773 sys 0m7.602s 00:37:47.773 02:57:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:47.773 02:57:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:47.773 ************************************ 00:37:47.773 END TEST nvmf_target_disconnect_tc2 00:37:47.773 ************************************ 00:37:47.773 02:57:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:37:47.773 02:57:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:37:47.773 02:57:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:37:47.773 02:57:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:47.773 02:57:55 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:37:47.773 02:57:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:47.773 02:57:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:37:47.773 02:57:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:47.773 02:57:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:47.773 rmmod nvme_tcp 00:37:47.773 rmmod nvme_fabrics 00:37:47.773 rmmod nvme_keyring 00:37:47.773 02:57:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:47.773 02:57:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:37:47.773 02:57:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:37:47.773 02:57:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 3142711 ']' 00:37:47.773 02:57:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 3142711 00:37:47.773 02:57:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 3142711 ']' 00:37:47.773 02:57:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 3142711 00:37:47.773 02:57:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:37:47.773 02:57:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:47.773 02:57:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3142711 00:37:47.773 02:57:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:37:47.773 02:57:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 
00:37:47.773 02:57:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3142711' 00:37:47.773 killing process with pid 3142711 00:37:47.773 02:57:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 3142711 00:37:47.773 02:57:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 3142711 00:37:49.148 02:57:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:49.148 02:57:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:49.148 02:57:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:49.148 02:57:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:37:49.148 02:57:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:37:49.148 02:57:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:49.148 02:57:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:37:49.148 02:57:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:49.148 02:57:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:49.148 02:57:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:49.148 02:57:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:49.148 02:57:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:51.051 02:57:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:51.051 00:37:51.051 real 0m17.647s 00:37:51.051 user 1m4.942s 00:37:51.051 
sys 0m10.324s 00:37:51.051 02:57:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:51.051 02:57:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:51.051 ************************************ 00:37:51.051 END TEST nvmf_target_disconnect 00:37:51.051 ************************************ 00:37:51.051 02:57:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:37:51.051 00:37:51.051 real 7m39.012s 00:37:51.051 user 19m51.684s 00:37:51.051 sys 1m33.395s 00:37:51.051 02:57:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:51.051 02:57:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:37:51.051 ************************************ 00:37:51.051 END TEST nvmf_host 00:37:51.051 ************************************ 00:37:51.051 02:57:59 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:37:51.051 02:57:59 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:37:51.051 02:57:59 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:37:51.051 02:57:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:51.051 02:57:59 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:51.051 02:57:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:51.051 ************************************ 00:37:51.051 START TEST nvmf_target_core_interrupt_mode 00:37:51.051 ************************************ 00:37:51.051 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:37:51.051 * Looking for test storage... 
00:37:51.051 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:37:51.051 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:51.051 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version 00:37:51.051 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:51.051 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:51.051 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:51.051 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:51.051 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:51.051 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:37:51.051 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:37:51.051 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:37:51.051 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:37:51.051 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:37:51.051 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:37:51.051 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:37:51.051 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:51.051 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:37:51.051 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:37:51.051 02:57:59 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:51.051 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:51.052 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:37:51.052 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:37:51.052 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:51.052 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:37:51.052 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:37:51.052 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:37:51.052 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:37:51.052 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:51.052 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:37:51.052 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:37:51.052 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:51.052 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:51.052 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:37:51.052 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:51.052 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:51.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:51.052 --rc 
genhtml_branch_coverage=1 00:37:51.052 --rc genhtml_function_coverage=1 00:37:51.052 --rc genhtml_legend=1 00:37:51.052 --rc geninfo_all_blocks=1 00:37:51.052 --rc geninfo_unexecuted_blocks=1 00:37:51.052 00:37:51.052 ' 00:37:51.052 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:51.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:51.052 --rc genhtml_branch_coverage=1 00:37:51.052 --rc genhtml_function_coverage=1 00:37:51.052 --rc genhtml_legend=1 00:37:51.052 --rc geninfo_all_blocks=1 00:37:51.052 --rc geninfo_unexecuted_blocks=1 00:37:51.052 00:37:51.052 ' 00:37:51.052 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:51.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:51.052 --rc genhtml_branch_coverage=1 00:37:51.052 --rc genhtml_function_coverage=1 00:37:51.052 --rc genhtml_legend=1 00:37:51.052 --rc geninfo_all_blocks=1 00:37:51.052 --rc geninfo_unexecuted_blocks=1 00:37:51.052 00:37:51.052 ' 00:37:51.052 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:51.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:51.052 --rc genhtml_branch_coverage=1 00:37:51.052 --rc genhtml_function_coverage=1 00:37:51.052 --rc genhtml_legend=1 00:37:51.052 --rc geninfo_all_blocks=1 00:37:51.052 --rc geninfo_unexecuted_blocks=1 00:37:51.052 00:37:51.052 ' 00:37:51.052 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:37:51.052 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:37:51.052 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:51.052 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:37:51.052 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:51.052 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:51.052 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:51.052 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:51.052 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:51.052 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:51.052 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:51.052 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:51.052 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:51.052 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:51.052 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:51.052 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:51.052 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:51.052 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:51.052 
02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:51.052 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:51.052 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:51.052 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:37:51.052 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:51.052 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:51.052 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:51.052 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:51.052 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:51.052 02:57:59 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:51.052 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:37:51.052 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:51.052 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:37:51.052 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:51.052 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:51.052 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:51.052 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:51.052 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:51.052 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:51.052 
02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:51.052 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:51.052 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:51.052 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:51.052 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:37:51.052 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:37:51.052 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:37:51.052 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:37:51.052 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:51.052 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:51.052 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:51.052 ************************************ 00:37:51.052 START TEST nvmf_abort 00:37:51.052 ************************************ 00:37:51.052 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:37:51.311 * Looking for test storage... 
00:37:51.311 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:51.311 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:51.311 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:37:51.311 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:51.311 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:51.311 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:51.311 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:51.311 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:51.311 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:37:51.311 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:37:51.311 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:37:51.311 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:37:51.311 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:37:51.311 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:37:51.311 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:37:51.311 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:51.311 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:37:51.311 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:37:51.311 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:51.311 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:51.311 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:37:51.311 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:37:51.311 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:51.311 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:37:51.311 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:37:51.311 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:37:51.311 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:37:51.311 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:51.311 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:37:51.311 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:37:51.311 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:51.311 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:51.311 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:37:51.311 02:57:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:51.311 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:51.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:51.311 --rc genhtml_branch_coverage=1 00:37:51.311 --rc genhtml_function_coverage=1 00:37:51.311 --rc genhtml_legend=1 00:37:51.311 --rc geninfo_all_blocks=1 00:37:51.311 --rc geninfo_unexecuted_blocks=1 00:37:51.311 00:37:51.311 ' 00:37:51.311 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:51.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:51.311 --rc genhtml_branch_coverage=1 00:37:51.311 --rc genhtml_function_coverage=1 00:37:51.311 --rc genhtml_legend=1 00:37:51.311 --rc geninfo_all_blocks=1 00:37:51.311 --rc geninfo_unexecuted_blocks=1 00:37:51.311 00:37:51.311 ' 00:37:51.311 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:51.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:51.311 --rc genhtml_branch_coverage=1 00:37:51.311 --rc genhtml_function_coverage=1 00:37:51.311 --rc genhtml_legend=1 00:37:51.311 --rc geninfo_all_blocks=1 00:37:51.311 --rc geninfo_unexecuted_blocks=1 00:37:51.311 00:37:51.311 ' 00:37:51.311 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:51.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:51.311 --rc genhtml_branch_coverage=1 00:37:51.311 --rc genhtml_function_coverage=1 00:37:51.311 --rc genhtml_legend=1 00:37:51.312 --rc geninfo_all_blocks=1 00:37:51.312 --rc geninfo_unexecuted_blocks=1 00:37:51.312 00:37:51.312 ' 00:37:51.312 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:51.312 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:37:51.312 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:51.312 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:51.312 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:51.312 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:51.312 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:51.312 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:51.312 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:51.312 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:51.312 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:51.312 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:51.312 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:51.312 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:51.312 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:51.312 02:57:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:51.312 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:51.312 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:51.312 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:51.312 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:37:51.312 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:51.312 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:51.312 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:51.312 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:51.312 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:51.312 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:51.312 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:37:51.312 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:51.312 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:37:51.312 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:51.312 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:51.312 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:51.312 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:51.312 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:51.312 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:51.312 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:51.312 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:51.312 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:51.312 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:51.312 02:57:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:51.312 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:37:51.312 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:37:51.312 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:51.312 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:51.312 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:51.312 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:51.312 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:51.312 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:51.312 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:51.312 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:51.312 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:51.312 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:51.312 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:37:51.312 02:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:53.214 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:37:53.214 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:37:53.214 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:53.214 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:53.214 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:53.214 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:53.214 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:53.214 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:37:53.214 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:53.214 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:37:53.214 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:37:53.214 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:37:53.214 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:37:53.214 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:37:53.214 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:37:53.214 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:53.214 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:53.214 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:53.214 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:53.214 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:53.214 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:53.214 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:53.214 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:53.214 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:53.214 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:53.214 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:53.214 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:53.214 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:53.214 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:53.214 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:53.214 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:53.214 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:53.214 02:58:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:53.214 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:53.214 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:53.214 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:53.214 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:53.214 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:53.214 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:53.214 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:53.214 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:53.214 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:53.214 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:53.214 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:53.214 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:53.214 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:53.214 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:53.215 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:53.215 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:53.215 
02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:53.215 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:53.215 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:53.215 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:53.215 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:53.215 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:53.215 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:53.215 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:53.215 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:53.215 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:53.215 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:53.215 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:53.215 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:53.215 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:53.215 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:53.215 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:37:53.215 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:53.215 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:53.215 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:53.215 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:53.215 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:53.215 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:53.215 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:53.215 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:53.215 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:37:53.215 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:53.215 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:53.215 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:53.215 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:53.215 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:53.215 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:53.215 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:53.215 02:58:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:53.215 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:53.215 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:53.215 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:53.215 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:53.215 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:53.215 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:53.215 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:53.215 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:53.215 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:53.215 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:53.473 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:53.473 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:53.473 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:53.473 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:37:53.473 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:53.473 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:53.473 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:53.473 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:53.473 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:53.473 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:37:53.473 00:37:53.473 --- 10.0.0.2 ping statistics --- 00:37:53.473 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:53.473 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:37:53.473 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:53.473 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:53.473 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:37:53.473 00:37:53.473 --- 10.0.0.1 ping statistics --- 00:37:53.473 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:53.473 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:37:53.473 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:53.473 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:37:53.473 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:53.473 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:53.473 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:53.473 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:53.473 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:53.473 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:53.473 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:53.473 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:37:53.473 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:53.473 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:53.473 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:53.473 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=3145610 00:37:53.473 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:37:53.473 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 3145610 00:37:53.474 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 3145610 ']' 00:37:53.474 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:53.474 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:53.474 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:53.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:53.474 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:53.474 02:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:53.474 [2024-11-17 02:58:01.865827] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:53.474 [2024-11-17 02:58:01.868529] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:37:53.474 [2024-11-17 02:58:01.868635] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:53.732 [2024-11-17 02:58:02.018592] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:53.732 [2024-11-17 02:58:02.156871] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:53.732 [2024-11-17 02:58:02.156955] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:53.732 [2024-11-17 02:58:02.156984] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:53.732 [2024-11-17 02:58:02.157005] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:53.732 [2024-11-17 02:58:02.157027] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:53.732 [2024-11-17 02:58:02.159765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:53.732 [2024-11-17 02:58:02.159853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:53.732 [2024-11-17 02:58:02.159862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:54.330 [2024-11-17 02:58:02.536610] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:54.330 [2024-11-17 02:58:02.537734] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:54.330 [2024-11-17 02:58:02.538558] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:37:54.330 [2024-11-17 02:58:02.538900] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:37:54.588 02:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:54.588 02:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:37:54.588 02:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:54.588 02:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:54.588 02:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:54.588 02:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:54.588 02:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:37:54.588 02:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:54.588 02:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:54.588 [2024-11-17 02:58:02.860990] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:54.588 02:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:54.588 02:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:37:54.588 02:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:54.588 02:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:37:54.588 Malloc0 00:37:54.588 02:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:54.588 02:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:37:54.588 02:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:54.588 02:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:54.588 Delay0 00:37:54.588 02:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:54.588 02:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:37:54.588 02:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:54.588 02:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:54.588 02:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:54.588 02:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:37:54.588 02:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:54.588 02:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:54.588 02:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:54.588 02:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:37:54.588 02:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:54.588 02:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:54.588 [2024-11-17 02:58:02.989160] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:54.588 02:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:54.588 02:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:54.588 02:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:54.588 02:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:54.588 02:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:54.588 02:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:37:54.846 [2024-11-17 02:58:03.186324] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:37:57.373 Initializing NVMe Controllers 00:37:57.373 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:37:57.373 controller IO queue size 128 less than required 00:37:57.373 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:37:57.373 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:37:57.373 Initialization complete. Launching workers. 
00:37:57.373 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 21656 00:37:57.373 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 21717, failed to submit 66 00:37:57.373 success 21656, unsuccessful 61, failed 0 00:37:57.373 02:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:57.373 02:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:57.373 02:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:57.373 02:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:57.373 02:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:37:57.373 02:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:37:57.373 02:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:57.373 02:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:37:57.373 02:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:57.373 02:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:37:57.373 02:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:57.373 02:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:57.373 rmmod nvme_tcp 00:37:57.373 rmmod nvme_fabrics 00:37:57.373 rmmod nvme_keyring 00:37:57.373 02:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:57.373 02:58:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:37:57.373 02:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:37:57.373 02:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 3145610 ']' 00:37:57.373 02:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 3145610 00:37:57.373 02:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 3145610 ']' 00:37:57.373 02:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 3145610 00:37:57.373 02:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:37:57.373 02:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:57.373 02:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3145610 00:37:57.373 02:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:57.373 02:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:57.373 02:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3145610' 00:37:57.373 killing process with pid 3145610 00:37:57.373 02:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 3145610 00:37:57.373 02:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 3145610 00:37:58.748 02:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:58.748 02:58:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:58.748 02:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:58.748 02:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:37:58.748 02:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:37:58.748 02:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:58.748 02:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:37:58.748 02:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:58.748 02:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:58.748 02:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:58.748 02:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:58.748 02:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:00.649 02:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:00.649 00:38:00.649 real 0m9.430s 00:38:00.649 user 0m11.710s 00:38:00.649 sys 0m3.242s 00:38:00.649 02:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:00.649 02:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:00.649 ************************************ 00:38:00.649 END TEST nvmf_abort 00:38:00.649 ************************************ 00:38:00.649 02:58:08 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:38:00.649 02:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:00.649 02:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:00.649 02:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:00.649 ************************************ 00:38:00.649 START TEST nvmf_ns_hotplug_stress 00:38:00.649 ************************************ 00:38:00.649 02:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:38:00.649 * Looking for test storage... 
00:38:00.649 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:00.649 02:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:00.649 02:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:38:00.649 02:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:00.649 02:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:00.649 02:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:00.649 02:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:00.649 02:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:00.649 02:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:38:00.649 02:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:38:00.649 02:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:38:00.649 02:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:38:00.649 02:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:38:00.649 02:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:38:00.649 02:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:38:00.649 02:58:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:00.649 02:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:38:00.649 02:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:38:00.649 02:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:00.649 02:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:00.649 02:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:38:00.649 02:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:38:00.649 02:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:00.649 02:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:38:00.649 02:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:38:00.649 02:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:38:00.649 02:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:38:00.649 02:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:00.649 02:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:38:00.649 02:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:38:00.649 02:58:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:00.649 02:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:00.649 02:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:38:00.649 02:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:00.649 02:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:00.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:00.649 --rc genhtml_branch_coverage=1 00:38:00.649 --rc genhtml_function_coverage=1 00:38:00.649 --rc genhtml_legend=1 00:38:00.649 --rc geninfo_all_blocks=1 00:38:00.649 --rc geninfo_unexecuted_blocks=1 00:38:00.649 00:38:00.649 ' 00:38:00.649 02:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:00.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:00.649 --rc genhtml_branch_coverage=1 00:38:00.649 --rc genhtml_function_coverage=1 00:38:00.649 --rc genhtml_legend=1 00:38:00.649 --rc geninfo_all_blocks=1 00:38:00.649 --rc geninfo_unexecuted_blocks=1 00:38:00.649 00:38:00.649 ' 00:38:00.649 02:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:00.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:00.649 --rc genhtml_branch_coverage=1 00:38:00.649 --rc genhtml_function_coverage=1 00:38:00.649 --rc genhtml_legend=1 00:38:00.649 --rc geninfo_all_blocks=1 00:38:00.649 --rc geninfo_unexecuted_blocks=1 00:38:00.649 00:38:00.649 ' 00:38:00.649 02:58:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:00.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:00.649 --rc genhtml_branch_coverage=1 00:38:00.649 --rc genhtml_function_coverage=1 00:38:00.649 --rc genhtml_legend=1 00:38:00.649 --rc geninfo_all_blocks=1 00:38:00.649 --rc geninfo_unexecuted_blocks=1 00:38:00.649 00:38:00.649 ' 00:38:00.649 02:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:00.649 02:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:38:00.649 02:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:00.649 02:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:00.649 02:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:00.649 02:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:00.649 02:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:00.649 02:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:00.649 02:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:00.649 02:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:00.649 02:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:00.649 02:58:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:00.920 02:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:00.920 02:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:00.920 02:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:00.920 02:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:00.920 02:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:00.920 02:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:00.920 02:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:00.920 02:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:38:00.920 02:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:00.920 02:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:00.920 02:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:00.920 02:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:00.921 02:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:00.921 02:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:00.921 
02:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:38:00.921 02:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:00.921 02:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:38:00.921 02:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:00.921 02:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:00.921 02:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:00.921 02:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:00.921 02:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:00.921 02:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:00.921 02:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:00.921 02:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:00.921 02:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:00.921 02:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:00.921 02:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:00.921 02:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:38:00.922 02:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:00.922 02:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:00.922 02:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:00.922 02:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:00.922 02:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:00.922 02:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:00.922 02:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:00.922 02:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:00.922 02:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:00.922 02:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:38:00.922 02:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:38:00.922 02:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:38:02.836 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:02.836 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:38:02.836 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:02.836 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:02.836 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:02.836 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:02.836 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:02.836 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:38:02.836 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:02.836 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:38:02.836 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:38:02.836 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:38:02.836 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:38:02.836 
02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:38:02.836 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:38:02.836 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:02.836 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:02.836 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:02.836 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:02.837 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:02.837 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:02.837 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:02.837 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:02.837 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:02.837 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:02.837 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:02.837 02:58:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:02.837 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:02.837 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:02.837 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:02.837 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:02.837 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:02.837 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:02.837 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:02.837 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:02.837 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:02.837 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:02.837 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:02.837 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:02.837 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:02.837 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:02.837 02:58:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:02.837 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:02.837 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:02.837 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:02.837 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:02.837 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:02.837 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:02.837 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:02.837 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:02.837 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:02.837 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:02.837 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:02.837 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:02.837 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:02.837 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:02.837 
02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:02.837 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:02.837 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:02.837 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:02.837 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:02.837 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:02.837 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:02.837 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:02.837 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:02.837 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:02.837 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:02.837 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:02.837 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:02.837 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:02.837 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:02.837 
02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:02.837 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:02.837 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:38:02.837 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:02.837 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:02.837 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:02.837 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:02.837 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:02.837 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:02.837 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:02.837 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:02.837 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:02.837 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:02.837 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:02.837 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:02.837 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:02.837 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:02.837 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:02.837 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:02.837 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:02.837 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:02.837 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:02.837 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:02.837 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:02.837 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:02.837 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:02.837 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:02.837 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:03.096 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:03.096 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:03.096 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.310 ms 00:38:03.096 00:38:03.096 --- 10.0.0.2 ping statistics --- 00:38:03.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:03.096 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:38:03.096 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:03.096 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:03.096 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:38:03.096 00:38:03.096 --- 10.0.0.1 ping statistics --- 00:38:03.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:03.096 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:38:03.096 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:03.096 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:38:03.096 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:03.096 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:03.096 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:03.096 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:03.096 02:58:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:03.096 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:03.096 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:03.096 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:38:03.096 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:03.096 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:03.096 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:38:03.096 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=3148127 00:38:03.096 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 3148127 00:38:03.096 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:38:03.096 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 3148127 ']' 00:38:03.096 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:03.096 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:03.096 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:03.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:03.096 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:03.096 02:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:38:03.096 [2024-11-17 02:58:11.421758] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:03.096 [2024-11-17 02:58:11.424341] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:38:03.096 [2024-11-17 02:58:11.424451] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:03.354 [2024-11-17 02:58:11.576604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:03.355 [2024-11-17 02:58:11.695171] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:03.355 [2024-11-17 02:58:11.695249] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:03.355 [2024-11-17 02:58:11.695272] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:03.355 [2024-11-17 02:58:11.695290] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:03.355 [2024-11-17 02:58:11.695309] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:38:03.355 [2024-11-17 02:58:11.697588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:03.355 [2024-11-17 02:58:11.697629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:03.355 [2024-11-17 02:58:11.697638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:38:03.613 [2024-11-17 02:58:12.045649] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:03.613 [2024-11-17 02:58:12.046851] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:03.613 [2024-11-17 02:58:12.047637] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:03.613 [2024-11-17 02:58:12.047977] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:38:04.179 02:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:04.179 02:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:38:04.179 02:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:04.179 02:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:04.179 02:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:38:04.179 02:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:04.179 02:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
00:38:04.179 02:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:38:04.437 [2024-11-17 02:58:12.706636] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:38:04.437 02:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:38:04.694 02:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:38:04.952 [2024-11-17 02:58:13.271162] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:38:04.952 02:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:38:05.210 02:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0
00:38:05.468 Malloc0
00:38:05.468 02:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:38:05.726 Delay0
00:38:05.726 02:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:38:06.292 02:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512
00:38:06.550 NULL1
00:38:06.550 02:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
00:38:06.808 02:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3148560
00:38:06.808 02:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3148560
00:38:06.808 02:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:38:06.808 02:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000
00:38:08.182 Read completed with error (sct=0, sc=11)
00:38:08.182 02:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:38:08.182 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:38:08.182 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:38:08.182 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
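The sh@44/@45/@46/@49/@50 xtrace lines that follow repeat one hotplug iteration: remove namespace 1, re-add Delay0, bump null_size, resize NULL1. As a rough dry-run sketch of that loop (the `RPC=echo` wrapper and the fixed three-iteration bound are assumptions for illustration; the real script loops while `kill -0 "$PERF_PID"` says spdk_nvme_perf is still running):

```shell
#!/bin/sh
# Dry-run sketch of the hotplug loop recorded in the xtrace output above.
# RPC defaults to `echo`, so no real target is touched; point it at
# scripts/rpc.py to drive an actual SPDK nvmf target.
RPC="${RPC:-echo}"
NQN="nqn.2016-06.io.spdk:cnode1"
null_size=1000

i=0
while [ "$i" -lt 3 ]; do   # real script: while kill -0 "$PERF_PID" 2>/dev/null
    $RPC nvmf_subsystem_remove_ns "$NQN" 1        # sh@45
    $RPC nvmf_subsystem_add_ns "$NQN" Delay0      # sh@46
    null_size=$((null_size + 1))                  # sh@49
    $RPC bdev_null_resize NULL1 "$null_size"      # sh@50
    i=$((i + 1))
done
echo "final null_size=$null_size"
```

This matches the counter visible in the log, which starts at null_size=1000 and reaches 1029 before the perf process exits.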
00:38:08.182 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:38:08.182 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:38:08.182 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:38:08.182 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:38:08.440 02:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001
00:38:08.440 02:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001
00:38:08.698 true
00:38:08.698 02:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3148560
00:38:08.698 02:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:38:09.264 02:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:38:09.829 02:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002
00:38:09.829 02:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002
00:38:09.829 true
00:38:09.829 02:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3148560
00:38:09.829 02:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:38:10.087 02:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:38:10.344 02:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003
00:38:10.344 02:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003
00:38:10.602 true
00:38:10.863 02:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3148560
00:38:10.863 02:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:38:11.121 02:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:38:11.379 02:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004
00:38:11.379 02:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004
00:38:11.637 true
00:38:11.637 02:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3148560
00:38:11.637 02:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:38:12.570 02:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:38:12.570 02:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005
00:38:12.570 02:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005
00:38:12.828 true
00:38:12.828 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3148560
00:38:12.828 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:38:13.086 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:38:13.652 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006
00:38:13.652 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006
00:38:13.652 true
00:38:13.652 02:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3148560
00:38:13.652 02:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:38:13.909 02:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:38:14.167 02:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007
00:38:14.167 02:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007
00:38:14.424 true
00:38:14.682 02:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3148560
00:38:14.682 02:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:38:15.616 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:38:15.616 02:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:38:15.616 02:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008
00:38:15.616 02:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008
00:38:15.874 true
00:38:15.874 02:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3148560
00:38:15.874 02:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:38:16.132 02:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:38:16.390 02:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009
00:38:16.390 02:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009
00:38:16.648 true
00:38:16.648 02:58:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3148560
00:38:16.648 02:58:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:38:17.581 02:58:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:38:17.840 02:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010
00:38:17.840 02:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010
00:38:18.097 true
00:38:18.097 02:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3148560
00:38:18.097 02:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:38:18.354 02:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:38:18.612 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011
00:38:18.612 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011
00:38:18.870 true
00:38:18.870 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3148560
00:38:18.870 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:38:19.128 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:38:19.694 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012
00:38:19.694 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012
00:38:19.694 true
00:38:19.694 02:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3148560
00:38:19.694 02:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:38:20.628 02:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:38:20.885 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:38:20.885 02:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013
00:38:20.886 02:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013
00:38:21.142 true
00:38:21.142 02:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3148560
00:38:21.142 02:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:38:21.400 02:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:38:21.965 02:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014
00:38:21.965 02:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014
00:38:22.222 true
00:38:22.222 02:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3148560
00:38:22.222 02:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:38:22.480 02:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:38:22.738 02:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015
00:38:22.738 02:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015
00:38:22.996 true
00:38:22.996 02:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3148560
00:38:22.996 02:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:38:23.929 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:38:24.186 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016
00:38:24.186 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016
00:38:24.444 true
00:38:24.444 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3148560
00:38:24.444 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:38:24.701 02:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:38:24.959 02:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017
00:38:24.959 02:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017
00:38:25.216 true
00:38:25.216 02:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3148560
00:38:25.216 02:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:38:25.473 02:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:38:25.731 02:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018
00:38:25.731 02:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018
00:38:25.989 true
00:38:25.989 02:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3148560
00:38:25.989 02:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:38:26.922 02:58:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:38:27.181 02:58:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019
00:38:27.181 02:58:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019
00:38:27.438 true
00:38:27.696 02:58:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3148560
00:38:27.696 02:58:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:38:27.954 02:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:38:28.211 02:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020
00:38:28.211 02:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020
00:38:28.469 true
00:38:28.469 02:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3148560
00:38:28.469 02:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:38:28.726 02:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:38:28.984 02:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021
00:38:28.984 02:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021
00:38:29.242 true
00:38:29.242 02:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3148560
00:38:29.242 02:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:38:30.174 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:38:30.174 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:38:30.445 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022
00:38:30.445 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022
00:38:30.784 true
00:38:30.784 02:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3148560
00:38:30.784 02:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:38:31.072 02:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:38:31.329 02:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023
00:38:31.329 02:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023
00:38:31.587 true
00:38:31.587 02:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3148560
00:38:31.587 02:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:38:31.845 02:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:38:32.102 02:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024
00:38:32.102 02:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024
00:38:32.361 true
00:38:32.361 02:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3148560
00:38:32.361 02:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:38:33.294 02:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:38:33.294 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:38:33.294 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:38:33.551 02:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025
00:38:33.552 02:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025
00:38:33.809 true
00:38:33.810 02:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3148560
00:38:33.810 02:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:38:34.067 02:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:38:34.325 02:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026
00:38:34.325 02:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026
00:38:34.584 true
00:38:34.584 02:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3148560
00:38:34.584 02:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:38:35.517 02:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:38:35.517 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:38:35.775 02:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:38:35.775 02:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:38:36.033 true
00:38:36.033 02:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3148560
00:38:36.033 02:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:38:36.290 02:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:38:36.549 02:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:38:36.549 02:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:38:36.807 true
00:38:36.807 02:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3148560
00:38:36.807 02:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:38:37.741 Initializing NVMe Controllers
00:38:37.741 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:38:37.741 Controller IO queue size 128, less than required.
00:38:37.741 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:38:37.741 Controller IO queue size 128, less than required.
00:38:37.741 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:38:37.741 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:38:37.741 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:38:37.741 Initialization complete. Launching workers.
00:38:37.741 ======================================================== 00:38:37.741 Latency(us) 00:38:37.741 Device Information : IOPS MiB/s Average min max 00:38:37.741 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 496.07 0.24 115242.35 4183.62 1019038.97 00:38:37.741 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 6982.13 3.41 18331.92 3542.68 483999.81 00:38:37.741 ======================================================== 00:38:37.741 Total : 7478.20 3.65 24760.47 3542.68 1019038.97 00:38:37.741 00:38:37.741 02:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:37.998 02:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:38:37.998 02:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:38:37.998 true 00:38:38.257 02:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3148560 00:38:38.257 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3148560) - No such process 00:38:38.257 02:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3148560 00:38:38.257 02:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:38.514 02:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:38.773 02:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:38:38.773 02:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:38:38.773 02:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:38:38.773 02:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:38.773 02:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:38:39.031 null0 00:38:39.031 02:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:39.031 02:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:39.031 02:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:38:39.290 null1 00:38:39.290 02:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:39.290 02:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:39.290 02:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:38:39.549 null2 00:38:39.549 02:58:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:39.549 02:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:39.549 02:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:38:39.806 null3 00:38:39.807 02:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:39.807 02:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:39.807 02:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:38:40.065 null4 00:38:40.065 02:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:40.065 02:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:40.065 02:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:38:40.323 null5 00:38:40.323 02:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:40.323 02:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:40.323 02:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:38:40.581 null6 00:38:40.581 02:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:40.581 02:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:40.581 02:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:38:40.855 null7 00:38:40.856 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:40.856 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:40.856 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:38:40.856 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:40.856 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:38:40.856 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:38:40.856 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:40.856 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:40.856 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:38:40.856 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:40.856 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:40.856 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:40.856 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:38:40.856 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:40.856 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:38:40.856 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:40.856 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:38:40.856 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:40.856 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:40.856 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:40.856 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:38:40.856 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:40.856 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:38:40.856 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:40.856 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:38:40.856 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:40.856 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:40.857 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:38:40.857 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:40.857 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:40.857 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:40.857 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:38:40.857 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:38:40.857 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:40.857 02:58:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:40.857 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:40.857 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:38:40.857 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:38:40.857 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:40.857 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:40.857 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:38:40.857 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:40.857 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:40.857 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:40.857 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:38:40.857 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:38:40.857 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:40.857 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:38:40.857 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:40.858 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:40.858 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:40.858 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:40.858 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:38:40.858 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:38:40.858 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:40.858 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:40.858 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:38:40.858 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:40.858 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:40.858 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:40.858 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:38:40.858 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:38:40.858 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:40.858 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:38:40.858 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:40.858 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:40.858 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3152569 3152570 3152572 3152574 3152576 3152578 3152580 3152582 00:38:40.858 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:40.859 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:41.119 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:41.119 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:41.119 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 7 00:38:41.119 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:41.119 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:41.119 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:41.119 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:41.119 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:41.378 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:41.378 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:41.378 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:41.378 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:41.378 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:41.378 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:41.378 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:41.378 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:41.378 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:41.378 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:41.378 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:41.378 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:41.378 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:41.378 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:41.378 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:41.378 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:41.378 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:41.378 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:41.378 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:41.378 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:41.378 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:41.378 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:41.378 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:41.378 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:41.636 02:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:41.636 02:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:41.636 02:58:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:41.636 02:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:41.636 02:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:41.894 02:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:41.895 02:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:41.895 02:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:42.153 02:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:42.153 02:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:42.153 02:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:42.153 02:58:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:42.153 02:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:42.153 02:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:42.153 02:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:42.153 02:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:42.153 02:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:42.153 02:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:42.153 02:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:42.153 02:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:42.153 02:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:42.153 02:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:42.153 02:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:42.153 02:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:42.153 02:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:42.153 02:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:42.153 02:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:42.153 02:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:42.153 02:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:42.153 02:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:42.153 02:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:42.153 02:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:42.412 02:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:42.412 02:58:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:42.412 02:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:42.412 02:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:42.412 02:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:42.412 02:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:42.412 02:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:42.412 02:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:42.669 02:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:42.669 02:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:42.669 02:58:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:42.669 02:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:42.669 02:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:42.669 02:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:42.669 02:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:42.669 02:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:42.669 02:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:42.669 02:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:42.669 02:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:42.669 02:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:42.669 02:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:42.669 02:58:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:42.669 02:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:42.669 02:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:42.670 02:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:42.670 02:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:42.670 02:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:42.670 02:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:42.670 02:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:42.670 02:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:42.670 02:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:42.670 02:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:42.928 02:58:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:42.928 02:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:42.928 02:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:42.928 02:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:42.928 02:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:42.928 02:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:42.928 02:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:42.928 02:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:43.187 02:58:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:43.187 02:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:43.187 02:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:43.187 02:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:43.187 02:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:43.187 02:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:43.187 02:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:43.187 02:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:43.187 02:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:43.187 02:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:43.187 02:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:43.187 02:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:43.187 02:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:43.187 02:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:43.187 02:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:43.187 02:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:43.187 02:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:43.187 02:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:43.187 02:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:43.187 02:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:43.187 02:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:43.187 02:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:43.187 02:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:43.187 02:58:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:43.445 02:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:43.445 02:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:43.703 02:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:43.703 02:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:43.703 02:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:43.703 02:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:43.703 02:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:43.703 02:58:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:43.961 02:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:43.962 02:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:43.962 02:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:43.962 02:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:43.962 02:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:43.962 02:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:43.962 02:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:43.962 02:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:43.962 02:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:43.962 02:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:43.962 02:58:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:43.962 02:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:43.962 02:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:43.962 02:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:43.962 02:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:43.962 02:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:43.962 02:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:43.962 02:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:43.962 02:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:43.962 02:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:43.962 02:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:43.962 02:58:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:43.962 02:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:43.962 02:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:44.220 02:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:44.220 02:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:44.220 02:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:44.220 02:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:44.220 02:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:44.220 02:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:44.220 02:58:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:44.220 02:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:44.478 02:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:44.478 02:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:44.478 02:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:44.478 02:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:44.478 02:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:44.478 02:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:44.479 02:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:44.479 02:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:44.479 02:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 
5 nqn.2016-06.io.spdk:cnode1 null4 00:38:44.479 02:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:44.479 02:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:44.479 02:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:44.479 02:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:44.479 02:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:44.479 02:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:44.479 02:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:44.479 02:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:44.479 02:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:44.479 02:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:44.479 02:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:44.479 02:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:38:44.479 02:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:44.479 02:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:44.479 02:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:44.737 02:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:44.737 02:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:44.737 02:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:44.737 02:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:44.737 02:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:44.737 02:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:44.737 02:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:44.737 02:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:44.995 02:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:44.995 02:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:44.996 02:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:44.996 02:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:44.996 02:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:44.996 02:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:44.996 02:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:44.996 02:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:44.996 02:58:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:44.996 02:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:44.996 02:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:44.996 02:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:44.996 02:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:44.996 02:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:44.996 02:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:44.996 02:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:44.996 02:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:44.996 02:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:44.996 02:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:44.996 02:58:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:44.996 02:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:44.996 02:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:44.996 02:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:44.996 02:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:45.254 02:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:45.254 02:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:45.512 02:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:45.512 02:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:45.512 02:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:45.512 02:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:45.512 02:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:45.512 02:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:45.770 02:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:45.770 02:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.770 02:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:45.770 02:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:45.770 02:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.770 02:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:45.770 02:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:45.770 02:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.770 02:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:45.770 02:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:45.770 02:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.770 02:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:45.770 02:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:45.770 02:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.770 02:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:45.770 02:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:45.770 02:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.770 02:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 
null1 00:38:45.770 02:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:45.770 02:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.770 02:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:45.770 02:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:45.770 02:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.770 02:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:46.029 02:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:46.029 02:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:46.029 02:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:46.029 02:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 3 00:38:46.029 02:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:46.029 02:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:46.029 02:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:46.029 02:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:46.287 02:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:46.287 02:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:46.287 02:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:46.287 02:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:46.287 02:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:46.287 02:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:46.287 02:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:46.287 02:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:46.287 02:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:46.287 02:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:46.287 02:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:46.287 02:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:46.287 02:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:46.287 02:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:46.287 02:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:46.287 02:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:46.287 02:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:46.287 02:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:46.287 02:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:46.287 02:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:46.287 02:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:46.287 02:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:46.287 02:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:46.287 02:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:46.545 02:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:46.545 02:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:46.545 02:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:46.545 02:58:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:46.545 02:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:46.545 02:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:46.545 02:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:46.545 02:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:46.804 02:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:46.804 02:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:46.804 02:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:46.804 02:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:46.804 02:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:46.804 02:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:38:46.804 02:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:46.804 02:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:46.804 02:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:46.804 02:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:46.804 02:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:46.804 02:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:46.804 02:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:46.804 02:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:46.804 02:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:46.804 02:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:46.804 02:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:38:46.804 02:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:38:46.804 02:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:46.805 02:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:38:46.805 02:58:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:46.805 02:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:38:46.805 02:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:46.805 02:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:46.805 rmmod nvme_tcp 00:38:46.805 rmmod nvme_fabrics 00:38:46.805 rmmod nvme_keyring 00:38:47.063 02:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:47.063 02:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:38:47.063 02:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:38:47.063 02:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 3148127 ']' 00:38:47.063 02:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 3148127 00:38:47.063 02:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 3148127 ']' 00:38:47.063 02:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 3148127 00:38:47.063 02:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:38:47.063 02:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:47.063 02:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3148127 00:38:47.063 02:58:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:47.063 02:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:47.063 02:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3148127' 00:38:47.063 killing process with pid 3148127 00:38:47.063 02:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 3148127 00:38:47.063 02:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 3148127 00:38:47.997 02:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:47.997 02:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:47.997 02:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:47.997 02:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:38:47.997 02:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:38:47.997 02:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:47.997 02:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:38:47.997 02:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:47.997 02:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:47.997 02:58:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:47.997 02:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:47.997 02:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:50.530 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:50.530 00:38:50.530 real 0m49.522s 00:38:50.530 user 3m21.799s 00:38:50.530 sys 0m22.230s 00:38:50.530 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:50.530 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:38:50.530 ************************************ 00:38:50.530 END TEST nvmf_ns_hotplug_stress 00:38:50.530 ************************************ 00:38:50.530 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:38:50.530 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:50.530 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:50.530 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:50.530 ************************************ 00:38:50.530 START TEST nvmf_delete_subsystem 00:38:50.530 ************************************ 00:38:50.530 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:38:50.530 * Looking for test storage... 00:38:50.530 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:50.530 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:50.530 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:38:50.530 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:50.530 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:50.530 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:50.530 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:50.530 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:50.530 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:38:50.530 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:38:50.530 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:38:50.530 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:38:50.530 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:38:50.530 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:38:50.530 
02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:38:50.530 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:50.530 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:38:50.530 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:38:50.530 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:50.530 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:50.530 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:38:50.530 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:38:50.530 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:50.530 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:38:50.530 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:38:50.530 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:38:50.530 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:38:50.530 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:50.530 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:38:50.530 02:58:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:38:50.530 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:50.531 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:50.531 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:38:50.531 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:50.531 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:50.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:50.531 --rc genhtml_branch_coverage=1 00:38:50.531 --rc genhtml_function_coverage=1 00:38:50.531 --rc genhtml_legend=1 00:38:50.531 --rc geninfo_all_blocks=1 00:38:50.531 --rc geninfo_unexecuted_blocks=1 00:38:50.531 00:38:50.531 ' 00:38:50.531 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:50.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:50.531 --rc genhtml_branch_coverage=1 00:38:50.531 --rc genhtml_function_coverage=1 00:38:50.531 --rc genhtml_legend=1 00:38:50.531 --rc geninfo_all_blocks=1 00:38:50.531 --rc geninfo_unexecuted_blocks=1 00:38:50.531 00:38:50.531 ' 00:38:50.531 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:50.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:50.531 --rc genhtml_branch_coverage=1 00:38:50.531 --rc genhtml_function_coverage=1 00:38:50.531 --rc genhtml_legend=1 00:38:50.531 --rc geninfo_all_blocks=1 00:38:50.531 --rc 
geninfo_unexecuted_blocks=1 00:38:50.531 00:38:50.531 ' 00:38:50.531 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:50.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:50.531 --rc genhtml_branch_coverage=1 00:38:50.531 --rc genhtml_function_coverage=1 00:38:50.531 --rc genhtml_legend=1 00:38:50.531 --rc geninfo_all_blocks=1 00:38:50.531 --rc geninfo_unexecuted_blocks=1 00:38:50.531 00:38:50.531 ' 00:38:50.531 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:50.531 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:38:50.531 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:50.531 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:50.531 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:50.531 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:50.531 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:50.531 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:50.531 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:50.531 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:50.531 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:50.531 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:50.531 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:50.531 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:50.531 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:50.531 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:50.531 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:50.531 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:50.531 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:50.531 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:38:50.531 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:50.531 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:50.531 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:50.531 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:50.531 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:50.531 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:50.531 
02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:38:50.531 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:50.531 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:38:50.531 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:50.531 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:50.531 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:50.531 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:50.531 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:50.531 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:50.531 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:50.531 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:50.531 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:50.531 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:50.531 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:38:50.531 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:50.531 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:50.531 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:50.531 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:50.531 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:50.531 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:50.531 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:50.531 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:50.531 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:50.531 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:50.531 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:38:50.531 02:58:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:52.433 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:52.433 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:38:52.433 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:52.433 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:52.433 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:52.433 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:52.433 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:52.433 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:38:52.433 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:52.433 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:38:52.433 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:38:52.433 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:38:52.433 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:38:52.433 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:38:52.433 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@322 -- # local -ga mlx 00:38:52.433 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:52.433 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:52.433 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:52.433 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:52.433 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:52.433 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:52.433 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:52.433 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:52.433 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:52.433 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:52.433 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:52.433 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:52.433 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:52.433 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:52.433 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:52.433 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:52.433 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:52.433 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:52.433 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:52.433 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:52.433 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:52.433 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:52.433 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:52.433 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:52.433 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:52.433 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:52.433 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:52.433 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 
0000:0a:00.1 (0x8086 - 0x159b)' 00:38:52.433 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:52.433 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:52.433 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:52.433 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:52.433 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:52.433 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:52.433 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:52.433 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:52.433 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:52.433 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:52.433 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:52.433 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:52.433 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:52.433 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:52.433 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:52.433 02:59:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:52.433 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:52.433 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:52.433 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:52.433 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:52.433 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:52.433 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:52.433 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:52.433 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:52.434 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:52.434 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:52.434 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:52.434 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:52.434 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:52.434 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:52.434 02:59:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:38:52.434 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:52.434 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:52.434 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:52.434 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:52.434 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:52.434 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:52.434 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:52.434 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:52.434 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:52.434 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:52.434 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:52.434 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:52.434 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:52.434 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:52.434 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:52.434 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:52.434 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:52.434 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:52.434 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:52.434 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:52.434 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:52.434 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:52.434 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:52.434 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:52.434 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:52.434 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 
00:38:52.434 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:52.434 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:38:52.434 00:38:52.434 --- 10.0.0.2 ping statistics --- 00:38:52.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:52.434 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:38:52.434 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:52.434 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:52.434 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.061 ms 00:38:52.434 00:38:52.434 --- 10.0.0.1 ping statistics --- 00:38:52.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:52.434 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:38:52.434 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:52.434 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:38:52.434 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:52.434 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:52.434 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:52.434 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:52.434 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:52.434 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:52.434 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:52.434 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:38:52.434 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:52.434 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:52.434 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:52.434 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=3155564 00:38:52.434 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:38:52.434 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 3155564 00:38:52.434 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 3155564 ']' 00:38:52.434 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:52.434 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:52.434 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:52.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:38:52.434 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:52.434 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:52.434 [2024-11-17 02:59:00.844101] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:52.434 [2024-11-17 02:59:00.846626] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:38:52.434 [2024-11-17 02:59:00.846728] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:52.693 [2024-11-17 02:59:00.988610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:38:52.693 [2024-11-17 02:59:01.108599] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:52.693 [2024-11-17 02:59:01.108684] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:52.693 [2024-11-17 02:59:01.108724] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:52.693 [2024-11-17 02:59:01.108742] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:52.693 [2024-11-17 02:59:01.108768] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:52.693 [2024-11-17 02:59:01.111138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:52.693 [2024-11-17 02:59:01.111160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:53.259 [2024-11-17 02:59:01.469147] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:38:53.259 [2024-11-17 02:59:01.469871] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:53.260 [2024-11-17 02:59:01.470224] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:53.518 02:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:53.518 02:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:38:53.518 02:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:53.518 02:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:53.518 02:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:53.518 02:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:53.518 02:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:53.518 02:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:53.518 02:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:53.518 [2024-11-17 02:59:01.840312] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:53.518 02:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:53.518 02:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # 
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:38:53.518 02:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:53.518 02:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:53.518 02:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:53.518 02:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:53.518 02:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:53.518 02:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:53.518 [2024-11-17 02:59:01.860791] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:53.518 02:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:53.518 02:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:38:53.518 02:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:53.518 02:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:53.518 NULL1 00:38:53.518 02:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:53.518 02:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 
1000000 -t 1000000 -w 1000000 -n 1000000 00:38:53.518 02:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:53.518 02:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:53.518 Delay0 00:38:53.518 02:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:53.518 02:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:53.518 02:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:53.519 02:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:53.519 02:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:53.519 02:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3155703 00:38:53.519 02:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:38:53.519 02:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:38:53.776 [2024-11-17 02:59:01.997489] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:38:55.675 02:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:55.675 02:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:55.675 02:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:55.934 Write completed with error (sct=0, sc=8) 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.934 Write completed with error (sct=0, sc=8) 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.934 starting I/O failed: -6 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.934 Write completed with error (sct=0, sc=8) 00:38:55.934 starting I/O failed: -6 00:38:55.934 Write completed with error (sct=0, sc=8) 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.934 Write completed with error (sct=0, sc=8) 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.934 starting I/O failed: -6 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.934 starting I/O failed: -6 00:38:55.934 Write completed with error (sct=0, sc=8) 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.934 Write completed with error (sct=0, sc=8) 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.934 starting I/O failed: -6 00:38:55.934 Write completed with error (sct=0, sc=8) 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.934 starting I/O failed: -6 00:38:55.934 Read completed with error (sct=0, 
sc=8) 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.934 starting I/O failed: -6 00:38:55.934 Write completed with error (sct=0, sc=8) 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.934 Write completed with error (sct=0, sc=8) 00:38:55.934 starting I/O failed: -6 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.934 Write completed with error (sct=0, sc=8) 00:38:55.934 starting I/O failed: -6 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.934 Write completed with error (sct=0, sc=8) 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.934 starting I/O failed: -6 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.934 starting I/O failed: -6 00:38:55.934 Write completed with error (sct=0, sc=8) 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.934 starting I/O failed: -6 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.934 Write completed with error (sct=0, sc=8) 00:38:55.934 starting I/O failed: -6 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.934 starting I/O failed: -6 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.934 starting I/O failed: -6 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.934 Write completed with error (sct=0, sc=8) 
00:38:55.934 starting I/O failed: -6 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.934 Write completed with error (sct=0, sc=8) 00:38:55.934 starting I/O failed: -6 00:38:55.934 Write completed with error (sct=0, sc=8) 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.934 starting I/O failed: -6 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.934 starting I/O failed: -6 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.934 starting I/O failed: -6 00:38:55.934 Write completed with error (sct=0, sc=8) 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.934 starting I/O failed: -6 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.934 starting I/O failed: -6 00:38:55.934 Write completed with error (sct=0, sc=8) 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.934 starting I/O failed: -6 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.934 starting I/O failed: -6 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.934 starting I/O failed: -6 00:38:55.934 Write completed with error (sct=0, sc=8) 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.934 starting I/O failed: -6 00:38:55.934 starting I/O failed: -6 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.934 Write completed with error (sct=0, sc=8) 00:38:55.934 Write completed with error (sct=0, sc=8) 00:38:55.934 starting I/O failed: -6 00:38:55.934 Write completed with error (sct=0, sc=8) 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.934 Write 
completed with error (sct=0, sc=8) 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.934 starting I/O failed: -6 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.934 starting I/O failed: -6 00:38:55.934 Write completed with error (sct=0, sc=8) 00:38:55.934 Write completed with error (sct=0, sc=8) 00:38:55.934 starting I/O failed: -6 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.934 Write completed with error (sct=0, sc=8) 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.934 starting I/O failed: -6 00:38:55.934 Write completed with error (sct=0, sc=8) 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.934 starting I/O failed: -6 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.934 starting I/O failed: -6 00:38:55.934 Write completed with error (sct=0, sc=8) 00:38:55.934 Write completed with error (sct=0, sc=8) 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.934 starting I/O failed: -6 00:38:55.934 Write completed with error (sct=0, sc=8) 00:38:55.934 Write completed with error (sct=0, sc=8) 00:38:55.934 starting I/O failed: -6 00:38:55.934 Write completed with error (sct=0, sc=8) 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.934 starting I/O failed: -6 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.934 Write completed with error (sct=0, sc=8) 00:38:55.934 Write completed with error (sct=0, sc=8) 00:38:55.934 starting I/O failed: -6 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.934 starting I/O failed: -6 00:38:55.934 starting I/O failed: -6 00:38:55.934 Read 
completed with error (sct=0, sc=8) 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.934 Write completed with error (sct=0, sc=8) 00:38:55.934 starting I/O failed: -6 00:38:55.934 Write completed with error (sct=0, sc=8) 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.934 starting I/O failed: -6 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.934 Write completed with error (sct=0, sc=8) 00:38:55.934 starting I/O failed: -6 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.934 starting I/O failed: -6 00:38:55.934 Write completed with error (sct=0, sc=8) 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.934 Write completed with error (sct=0, sc=8) 00:38:55.934 Write completed with error (sct=0, sc=8) 00:38:55.934 starting I/O failed: -6 00:38:55.934 starting I/O failed: -6 00:38:55.934 Write completed with error (sct=0, sc=8) 00:38:55.934 Write completed with error (sct=0, sc=8) 00:38:55.934 Write completed with error (sct=0, sc=8) 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.934 starting I/O failed: -6 00:38:55.934 starting I/O failed: -6 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.934 Read completed with error (sct=0, sc=8) 00:38:55.935 starting I/O failed: -6 00:38:55.935 Read completed with error (sct=0, sc=8) 00:38:55.935 Read completed with error (sct=0, sc=8) 00:38:55.935 starting I/O failed: -6 00:38:55.935 starting I/O failed: -6 00:38:55.935 Read completed with error (sct=0, sc=8) 00:38:55.935 Read completed with error (sct=0, sc=8) 00:38:55.935 Read completed with error (sct=0, sc=8) 00:38:55.935 starting I/O failed: -6 00:38:55.935 Read completed with error (sct=0, sc=8) 00:38:55.935 starting I/O 
failed: -6 00:38:55.935 Write completed with error (sct=0, sc=8) 00:38:55.935 starting I/O failed: -6 00:38:55.935 Write completed with error (sct=0, sc=8) 00:38:55.935 Read completed with error (sct=0, sc=8) 00:38:55.935 Read completed with error (sct=0, sc=8) 00:38:55.935 starting I/O failed: -6 00:38:55.935 Read completed with error (sct=0, sc=8) 00:38:55.935 Read completed with error (sct=0, sc=8) 00:38:55.935 Write completed with error (sct=0, sc=8) 00:38:55.935 Read completed with error (sct=0, sc=8) 00:38:55.935 starting I/O failed: -6 00:38:55.935 Read completed with error (sct=0, sc=8) 00:38:55.935 [2024-11-17 02:59:04.259578] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020380 is same with the state(6) to be set 00:38:55.935 Read completed with error (sct=0, sc=8) 00:38:55.935 Read completed with error (sct=0, sc=8) 00:38:55.935 Write completed with error (sct=0, sc=8) 00:38:55.935 Read completed with error (sct=0, sc=8) 00:38:55.935 Read completed with error (sct=0, sc=8) 00:38:55.935 Read completed with error (sct=0, sc=8) 00:38:55.935 Read completed with error (sct=0, sc=8) 00:38:55.935 Read completed with error (sct=0, sc=8) 00:38:55.935 Write completed with error (sct=0, sc=8) 00:38:55.935 Read completed with error (sct=0, sc=8) 00:38:55.935 Read completed with error (sct=0, sc=8) 00:38:55.935 Read completed with error (sct=0, sc=8) 00:38:55.935 Read completed with error (sct=0, sc=8) 00:38:55.935 Write completed with error (sct=0, sc=8) 00:38:55.935 Read completed with error (sct=0, sc=8) 00:38:55.935 Write completed with error (sct=0, sc=8) 00:38:55.935 Read completed with error (sct=0, sc=8) 00:38:55.935 Read completed with error (sct=0, sc=8) 00:38:55.935 Read completed with error (sct=0, sc=8) 00:38:55.935 Read completed with error (sct=0, sc=8) 00:38:55.935 Read completed with error (sct=0, sc=8) 00:38:55.935 Write completed with error (sct=0, sc=8) 00:38:55.935 Read completed with error (sct=0, sc=8) 
00:38:55.935 Read completed with error (sct=0, sc=8) 00:38:55.935 Read completed with error (sct=0, sc=8) 00:38:55.935 Write completed with error (sct=0, sc=8) 00:38:55.935 Write completed with error (sct=0, sc=8) 00:38:55.935 Read completed with error (sct=0, sc=8) 00:38:55.935 Read completed with error (sct=0, sc=8) 00:38:55.935 Read completed with error (sct=0, sc=8) 00:38:55.935 Read completed with error (sct=0, sc=8) 00:38:55.935 Read completed with error (sct=0, sc=8) 00:38:55.935 Read completed with error (sct=0, sc=8) 00:38:55.935 Read completed with error (sct=0, sc=8) 00:38:55.935 Read completed with error (sct=0, sc=8) 00:38:55.935 Read completed with error (sct=0, sc=8) 00:38:55.935 Write completed with error (sct=0, sc=8) 00:38:55.935 Read completed with error (sct=0, sc=8) 00:38:55.935 Write completed with error (sct=0, sc=8) 00:38:55.935 Read completed with error (sct=0, sc=8) 00:38:55.935 Read completed with error (sct=0, sc=8) 00:38:55.935 Write completed with error (sct=0, sc=8) 00:38:55.935 Write completed with error (sct=0, sc=8) 00:38:55.935 Read completed with error (sct=0, sc=8) 00:38:55.935 Read completed with error (sct=0, sc=8) 00:38:55.935 Read completed with error (sct=0, sc=8) 00:38:55.935 Read completed with error (sct=0, sc=8) 00:38:55.935 Write completed with error (sct=0, sc=8) 00:38:55.935 Read completed with error (sct=0, sc=8) 00:38:55.935 Write completed with error (sct=0, sc=8) 00:38:55.935 Write completed with error (sct=0, sc=8) 00:38:55.935 Read completed with error (sct=0, sc=8) 00:38:55.935 Read completed with error (sct=0, sc=8) 00:38:55.935 Write completed with error (sct=0, sc=8) 00:38:55.935 Read completed with error (sct=0, sc=8) 00:38:55.935 Read completed with error (sct=0, sc=8) 00:38:55.935 Write completed with error (sct=0, sc=8) 00:38:55.935 Read completed with error (sct=0, sc=8) 00:38:55.935 Read completed with error (sct=0, sc=8) 00:38:55.935 Read completed with error (sct=0, sc=8) 00:38:56.870 [2024-11-17 
02:59:05.223364] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000015c00 is same with the state(6) to be set 00:38:56.870 Read completed with error (sct=0, sc=8) 00:38:56.870 Read completed with error (sct=0, sc=8) 00:38:56.870 Read completed with error (sct=0, sc=8) 00:38:56.870 Read completed with error (sct=0, sc=8) 00:38:56.870 Read completed with error (sct=0, sc=8) 00:38:56.870 Write completed with error (sct=0, sc=8) 00:38:56.870 Read completed with error (sct=0, sc=8) 00:38:56.870 Read completed with error (sct=0, sc=8) 00:38:56.870 Read completed with error (sct=0, sc=8) 00:38:56.870 Read completed with error (sct=0, sc=8) 00:38:56.870 Read completed with error (sct=0, sc=8) 00:38:56.870 Read completed with error (sct=0, sc=8) 00:38:56.870 Read completed with error (sct=0, sc=8) 00:38:56.870 Read completed with error (sct=0, sc=8) 00:38:56.870 Write completed with error (sct=0, sc=8) 00:38:56.870 Read completed with error (sct=0, sc=8) 00:38:56.870 Read completed with error (sct=0, sc=8) 00:38:56.870 Read completed with error (sct=0, sc=8) 00:38:56.870 Read completed with error (sct=0, sc=8) 00:38:56.870 Write completed with error (sct=0, sc=8) 00:38:56.870 Read completed with error (sct=0, sc=8) 00:38:56.870 Read completed with error (sct=0, sc=8) 00:38:56.870 Read completed with error (sct=0, sc=8) 00:38:56.870 Write completed with error (sct=0, sc=8) 00:38:56.870 Read completed with error (sct=0, sc=8) 00:38:56.870 Read completed with error (sct=0, sc=8) 00:38:56.870 Read completed with error (sct=0, sc=8) 00:38:56.870 Read completed with error (sct=0, sc=8) 00:38:56.870 Write completed with error (sct=0, sc=8) 00:38:56.870 Read completed with error (sct=0, sc=8) 00:38:56.870 Read completed with error (sct=0, sc=8) 00:38:56.870 Read completed with error (sct=0, sc=8) 00:38:56.870 Write completed with error (sct=0, sc=8) 00:38:56.870 Read completed with error (sct=0, sc=8) 00:38:56.870 Write completed with error 
(sct=0, sc=8) 00:38:56.870 Write completed with error (sct=0, sc=8) 00:38:56.870 Read completed with error (sct=0, sc=8) 00:38:56.870 Read completed with error (sct=0, sc=8) 00:38:56.870 Write completed with error (sct=0, sc=8) 00:38:56.870 Read completed with error (sct=0, sc=8) 00:38:56.870 Read completed with error (sct=0, sc=8) 00:38:56.870 Read completed with error (sct=0, sc=8) 00:38:56.870 [2024-11-17 02:59:05.259870] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016380 is same with the state(6) to be set 00:38:56.870 Write completed with error (sct=0, sc=8) 00:38:56.870 Read completed with error (sct=0, sc=8) 00:38:56.870 Read completed with error (sct=0, sc=8) 00:38:56.870 Read completed with error (sct=0, sc=8) 00:38:56.870 Write completed with error (sct=0, sc=8) 00:38:56.870 Read completed with error (sct=0, sc=8) 00:38:56.870 Read completed with error (sct=0, sc=8) 00:38:56.870 Read completed with error (sct=0, sc=8) 00:38:56.870 Read completed with error (sct=0, sc=8) 00:38:56.870 Read completed with error (sct=0, sc=8) 00:38:56.870 Read completed with error (sct=0, sc=8) 00:38:56.870 Write completed with error (sct=0, sc=8) 00:38:56.870 Read completed with error (sct=0, sc=8) 00:38:56.870 Read completed with error (sct=0, sc=8) 00:38:56.870 Read completed with error (sct=0, sc=8) 00:38:56.870 Read completed with error (sct=0, sc=8) 00:38:56.870 Read completed with error (sct=0, sc=8) 00:38:56.870 Write completed with error (sct=0, sc=8) 00:38:56.870 Read completed with error (sct=0, sc=8) 00:38:56.870 Write completed with error (sct=0, sc=8) 00:38:56.870 Read completed with error (sct=0, sc=8) 00:38:56.870 Read completed with error (sct=0, sc=8) 00:38:56.870 Read completed with error (sct=0, sc=8) 00:38:56.870 Write completed with error (sct=0, sc=8) 00:38:56.870 Read completed with error (sct=0, sc=8) 00:38:56.870 Read completed with error (sct=0, sc=8) 00:38:56.870 Read completed with error (sct=0, sc=8) 
00:38:56.870 Write completed with error (sct=0, sc=8) 00:38:56.870 [2024-11-17 02:59:05.260569] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020600 is same with the state(6) to be set 00:38:56.870 Read completed with error (sct=0, sc=8) 00:38:56.870 Read completed with error (sct=0, sc=8) 00:38:56.870 Read completed with error (sct=0, sc=8) 00:38:56.870 Read completed with error (sct=0, sc=8) 00:38:56.870 Read completed with error (sct=0, sc=8) 00:38:56.870 Read completed with error (sct=0, sc=8) 00:38:56.870 Read completed with error (sct=0, sc=8) 00:38:56.870 Read completed with error (sct=0, sc=8) 00:38:56.870 Read completed with error (sct=0, sc=8) 00:38:56.870 Read completed with error (sct=0, sc=8) 00:38:56.870 Write completed with error (sct=0, sc=8) 00:38:56.870 Write completed with error (sct=0, sc=8) 00:38:56.870 Write completed with error (sct=0, sc=8) 00:38:56.870 Write completed with error (sct=0, sc=8) 00:38:56.870 Read completed with error (sct=0, sc=8) 00:38:56.870 Read completed with error (sct=0, sc=8) 00:38:56.870 Read completed with error (sct=0, sc=8) 00:38:56.870 Read completed with error (sct=0, sc=8) 00:38:56.871 Read completed with error (sct=0, sc=8) 00:38:56.871 Write completed with error (sct=0, sc=8) 00:38:56.871 Write completed with error (sct=0, sc=8) 00:38:56.871 Write completed with error (sct=0, sc=8) 00:38:56.871 Read completed with error (sct=0, sc=8) 00:38:56.871 Write completed with error (sct=0, sc=8) 00:38:56.871 Write completed with error (sct=0, sc=8) 00:38:56.871 Read completed with error (sct=0, sc=8) 00:38:56.871 Read completed with error (sct=0, sc=8) 00:38:56.871 Read completed with error (sct=0, sc=8) 00:38:56.871 [2024-11-17 02:59:05.261250] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020100 is same with the state(6) to be set 00:38:56.871 Read completed with error (sct=0, sc=8) 00:38:56.871 Read completed with error (sct=0, 
sc=8) 00:38:56.871 Read completed with error (sct=0, sc=8) 00:38:56.871 Write completed with error (sct=0, sc=8) 00:38:56.871 Read completed with error (sct=0, sc=8) 00:38:56.871 Write completed with error (sct=0, sc=8) 00:38:56.871 Write completed with error (sct=0, sc=8) 00:38:56.871 Read completed with error (sct=0, sc=8) 00:38:56.871 Read completed with error (sct=0, sc=8) 00:38:56.871 Read completed with error (sct=0, sc=8) 00:38:56.871 Read completed with error (sct=0, sc=8) 00:38:56.871 Read completed with error (sct=0, sc=8) 00:38:56.871 Read completed with error (sct=0, sc=8) 00:38:56.871 Read completed with error (sct=0, sc=8) 00:38:56.871 Read completed with error (sct=0, sc=8) 00:38:56.871 Read completed with error (sct=0, sc=8) 00:38:56.871 Read completed with error (sct=0, sc=8) 00:38:56.871 Read completed with error (sct=0, sc=8) 00:38:56.871 Read completed with error (sct=0, sc=8) 00:38:56.871 Write completed with error (sct=0, sc=8) 00:38:56.871 Read completed with error (sct=0, sc=8) 00:38:56.871 Read completed with error (sct=0, sc=8) 00:38:56.871 Read completed with error (sct=0, sc=8) 00:38:56.871 Read completed with error (sct=0, sc=8) 00:38:56.871 Write completed with error (sct=0, sc=8) 00:38:56.871 Write completed with error (sct=0, sc=8) 00:38:56.871 Read completed with error (sct=0, sc=8) 00:38:56.871 Read completed with error (sct=0, sc=8) 00:38:56.871 Write completed with error (sct=0, sc=8) 00:38:56.871 Read completed with error (sct=0, sc=8) 00:38:56.871 Read completed with error (sct=0, sc=8) 00:38:56.871 Read completed with error (sct=0, sc=8) 00:38:56.871 Read completed with error (sct=0, sc=8) 00:38:56.871 Read completed with error (sct=0, sc=8) 00:38:56.871 Write completed with error (sct=0, sc=8) 00:38:56.871 Read completed with error (sct=0, sc=8) 00:38:56.871 Read completed with error (sct=0, sc=8) 00:38:56.871 Read completed with error (sct=0, sc=8) 00:38:56.871 Read completed with error (sct=0, sc=8) 00:38:56.871 Write 
completed with error (sct=0, sc=8) 00:38:56.871 Read completed with error (sct=0, sc=8) 00:38:56.871 Write completed with error (sct=0, sc=8) 00:38:56.871 02:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:56.871 02:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:38:56.871 02:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3155703 00:38:56.871 02:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:38:56.871 [2024-11-17 02:59:05.266132] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016880 is same with the state(6) to be set 00:38:56.871 Initializing NVMe Controllers 00:38:56.871 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:38:56.871 Controller IO queue size 128, less than required. 00:38:56.871 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:38:56.871 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:38:56.871 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:38:56.871 Initialization complete. Launching workers. 
00:38:56.871 ======================================================== 00:38:56.871 Latency(us) 00:38:56.871 Device Information : IOPS MiB/s Average min max 00:38:56.871 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 187.60 0.09 904184.35 998.41 1017309.92 00:38:56.871 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 176.21 0.09 884186.54 936.77 1018608.96 00:38:56.871 ======================================================== 00:38:56.871 Total : 363.81 0.18 894498.34 936.77 1018608.96 00:38:56.871 00:38:56.871 [2024-11-17 02:59:05.267815] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000015c00 (9): Bad file descriptor 00:38:56.871 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:38:57.438 02:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:38:57.439 02:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3155703 00:38:57.439 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3155703) - No such process 00:38:57.439 02:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3155703 00:38:57.439 02:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:38:57.439 02:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3155703 00:38:57.439 02:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:38:57.439 02:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:57.439 02:59:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:38:57.439 02:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:57.439 02:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 3155703 00:38:57.439 02:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:38:57.439 02:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:57.439 02:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:57.439 02:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:57.439 02:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:38:57.439 02:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:57.439 02:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:57.439 02:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:57.439 02:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:57.439 02:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:57.439 02:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@10 -- # set +x 00:38:57.439 [2024-11-17 02:59:05.784544] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:57.439 02:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:57.439 02:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:57.439 02:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:57.439 02:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:57.439 02:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:57.439 02:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3156116 00:38:57.439 02:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:38:57.439 02:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:38:57.439 02:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3156116 00:38:57.439 02:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:57.439 [2024-11-17 02:59:05.893601] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. 
This behavior is deprecated and will be removed in a future release. 00:38:58.004 02:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:58.004 02:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3156116 00:38:58.004 02:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:58.569 02:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:58.569 02:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3156116 00:38:58.569 02:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:59.206 02:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:59.206 02:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3156116 00:38:59.206 02:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:59.495 02:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:59.495 02:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3156116 00:38:59.495 02:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:00.059 02:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:00.059 02:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 
3156116 00:39:00.059 02:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:00.625 02:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:00.625 02:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3156116 00:39:00.625 02:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:00.625 Initializing NVMe Controllers 00:39:00.625 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:39:00.625 Controller IO queue size 128, less than required. 00:39:00.625 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:39:00.625 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:39:00.625 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:39:00.625 Initialization complete. Launching workers. 
00:39:00.625 ======================================================== 00:39:00.625 Latency(us) 00:39:00.625 Device Information : IOPS MiB/s Average min max 00:39:00.625 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004636.46 1000339.02 1041293.62 00:39:00.625 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1007815.76 1000236.22 1043764.74 00:39:00.625 ======================================================== 00:39:00.625 Total : 256.00 0.12 1006226.11 1000236.22 1043764.74 00:39:00.625 00:39:00.883 02:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:00.883 02:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3156116 00:39:00.883 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3156116) - No such process 00:39:00.883 02:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3156116 00:39:00.883 02:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:39:00.883 02:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:39:00.883 02:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:00.883 02:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:39:00.883 02:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:00.883 02:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:39:00.883 02:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:39:00.883 02:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:00.883 rmmod nvme_tcp 00:39:00.883 rmmod nvme_fabrics 00:39:01.141 rmmod nvme_keyring 00:39:01.141 02:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:01.141 02:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:39:01.141 02:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:39:01.141 02:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 3155564 ']' 00:39:01.141 02:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 3155564 00:39:01.141 02:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 3155564 ']' 00:39:01.141 02:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 3155564 00:39:01.141 02:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:39:01.141 02:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:01.141 02:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3155564 00:39:01.141 02:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:01.141 02:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:01.141 02:59:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3155564' 00:39:01.141 killing process with pid 3155564 00:39:01.141 02:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 3155564 00:39:01.141 02:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 3155564 00:39:02.517 02:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:02.517 02:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:02.517 02:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:02.517 02:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:39:02.517 02:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:39:02.517 02:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:02.517 02:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:39:02.517 02:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:02.517 02:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:02.517 02:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:02.517 02:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:02.517 02:59:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:04.421 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:04.421 00:39:04.421 real 0m14.114s 00:39:04.421 user 0m26.647s 00:39:04.421 sys 0m3.949s 00:39:04.421 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:04.421 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:04.421 ************************************ 00:39:04.421 END TEST nvmf_delete_subsystem 00:39:04.421 ************************************ 00:39:04.421 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:39:04.421 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:04.421 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:04.421 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:04.421 ************************************ 00:39:04.421 START TEST nvmf_host_management 00:39:04.421 ************************************ 00:39:04.421 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:39:04.421 * Looking for test storage... 
00:39:04.421 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:04.421 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:04.421 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:39:04.421 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:04.421 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:04.421 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:04.421 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:04.421 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:04.421 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:39:04.421 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:39:04.421 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:39:04.421 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:39:04.421 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:39:04.421 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:39:04.421 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:39:04.421 02:59:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:04.421 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:39:04.421 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:39:04.421 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:04.421 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:04.421 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:39:04.421 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:39:04.421 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:04.421 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:39:04.421 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:39:04.421 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:39:04.421 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:39:04.421 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:04.421 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:39:04.421 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:39:04.421 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:04.421 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:04.421 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:39:04.421 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:04.421 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:04.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:04.421 --rc genhtml_branch_coverage=1 00:39:04.421 --rc genhtml_function_coverage=1 00:39:04.421 --rc genhtml_legend=1 00:39:04.421 --rc geninfo_all_blocks=1 00:39:04.421 --rc geninfo_unexecuted_blocks=1 00:39:04.421 00:39:04.421 ' 00:39:04.421 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:04.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:04.421 --rc genhtml_branch_coverage=1 00:39:04.421 --rc genhtml_function_coverage=1 00:39:04.421 --rc genhtml_legend=1 00:39:04.421 --rc geninfo_all_blocks=1 00:39:04.421 --rc geninfo_unexecuted_blocks=1 00:39:04.421 00:39:04.421 ' 00:39:04.421 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:04.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:04.421 --rc genhtml_branch_coverage=1 00:39:04.421 --rc genhtml_function_coverage=1 00:39:04.421 --rc genhtml_legend=1 00:39:04.421 --rc geninfo_all_blocks=1 00:39:04.421 --rc geninfo_unexecuted_blocks=1 00:39:04.421 00:39:04.421 ' 00:39:04.421 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:04.421 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:04.421 --rc genhtml_branch_coverage=1 00:39:04.421 --rc genhtml_function_coverage=1 00:39:04.421 --rc genhtml_legend=1 00:39:04.421 --rc geninfo_all_blocks=1 00:39:04.421 --rc geninfo_unexecuted_blocks=1 00:39:04.421 00:39:04.421 ' 00:39:04.421 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:04.421 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:39:04.421 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:04.421 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:04.422 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:04.422 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:04.422 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:04.422 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:04.422 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:04.422 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:04.422 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:04.422 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:04.422 02:59:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:04.422 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:04.422 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:04.422 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:04.422 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:04.422 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:04.422 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:04.422 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:39:04.422 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:04.422 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:04.422 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:04.422 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:04.422 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:04.422 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:04.422 
02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:39:04.422 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:04.422 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:39:04.422 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:04.422 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:04.422 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:04.422 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:04.422 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:04.422 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:04.422 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:04.422 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:39:04.422 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:04.422 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:04.422 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:04.422 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:04.422 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:39:04.422 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:04.422 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:04.422 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:04.422 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:04.422 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:04.422 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:04.422 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:04.422 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:04.422 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:04.422 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:04.422 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:39:04.422 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:06.955 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:06.955 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:39:06.955 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:06.955 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:06.955 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:06.955 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:06.955 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:06.955 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:39:06.955 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:06.955 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:39:06.955 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:39:06.955 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:39:06.955 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:39:06.955 
02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:39:06.955 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:39:06.956 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:06.956 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:06.956 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:06.956 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:06.956 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:06.956 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:06.956 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:06.956 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:06.956 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:06.956 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:06.956 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:06.956 02:59:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:06.956 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:06.956 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:06.956 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:06.956 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:06.956 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:06.956 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:06.956 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:06.956 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:06.956 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:06.956 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:06.956 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:06.956 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:06.956 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:06.956 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:06.956 02:59:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:06.956 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:06.956 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:06.956 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:06.956 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:06.956 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:06.956 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:06.956 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:06.956 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:06.956 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:06.956 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:06.956 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:06.956 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:06.956 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:06.956 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:06.956 02:59:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:06.956 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:06.956 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:06.956 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:06.956 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:06.956 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:06.956 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:06.956 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:06.956 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:06.956 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:06.956 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:06.956 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:06.956 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:06.956 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:06.956 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:06.956 02:59:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:06.956 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:06.956 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:39:06.956 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:06.956 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:06.956 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:06.956 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:06.956 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:06.956 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:06.956 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:06.956 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:06.956 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:06.956 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:06.956 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:06.956 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:39:06.956 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:06.956 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:06.956 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:06.956 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:06.956 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:06.956 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:06.956 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:06.956 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:06.956 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:06.956 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:06.956 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:06.956 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:06.956 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:06.956 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:06.956 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:06.956 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:39:06.956 00:39:06.956 --- 10.0.0.2 ping statistics --- 00:39:06.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:06.956 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:39:06.956 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:06.956 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:06.956 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:39:06.956 00:39:06.956 --- 10.0.0.1 ping statistics --- 00:39:06.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:06.956 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:39:06.956 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:06.956 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:39:06.956 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:06.957 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:06.957 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:06.957 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:06.957 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:39:06.957 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:06.957 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:06.957 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:39:06.957 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:39:06.957 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:39:06.957 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:06.957 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:06.957 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:06.957 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=3158585 00:39:06.957 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 3158585 00:39:06.957 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:39:06.957 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3158585 ']' 00:39:06.957 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:06.957 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:39:06.957 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:06.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:06.957 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:06.957 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:06.957 [2024-11-17 02:59:15.059742] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:06.957 [2024-11-17 02:59:15.062701] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:39:06.957 [2024-11-17 02:59:15.062813] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:06.957 [2024-11-17 02:59:15.229176] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:06.957 [2024-11-17 02:59:15.372508] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:06.957 [2024-11-17 02:59:15.372593] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:06.957 [2024-11-17 02:59:15.372621] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:06.957 [2024-11-17 02:59:15.372642] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:06.957 [2024-11-17 02:59:15.372664] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:39:06.957 [2024-11-17 02:59:15.375560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:06.957 [2024-11-17 02:59:15.375675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:06.957 [2024-11-17 02:59:15.375713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:06.957 [2024-11-17 02:59:15.375727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:39:07.524 [2024-11-17 02:59:15.747185] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:07.524 [2024-11-17 02:59:15.756453] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:07.524 [2024-11-17 02:59:15.756745] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:39:07.524 [2024-11-17 02:59:15.757556] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:07.524 [2024-11-17 02:59:15.757899] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:39:07.524 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:07.524 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:39:07.524 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:07.524 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:07.524 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:07.783 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:07.783 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:07.783 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:07.783 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:07.783 [2024-11-17 02:59:16.004847] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:07.783 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:07.783 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:39:07.783 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:07.783 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:07.783 02:59:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:39:07.783 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:39:07.783 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:39:07.783 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:07.783 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:07.783 Malloc0 00:39:07.783 [2024-11-17 02:59:16.133031] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:07.783 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:07.783 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:39:07.783 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:07.783 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:07.783 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3158763 00:39:07.783 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3158763 /var/tmp/bdevperf.sock 00:39:07.783 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3158763 ']' 00:39:07.783 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:39:07.783 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:39:07.783 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:39:07.783 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:07.783 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:39:07.783 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:39:07.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:39:07.783 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:39:07.783 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:07.783 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:07.783 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:07.783 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:07.783 { 00:39:07.783 "params": { 00:39:07.783 "name": "Nvme$subsystem", 00:39:07.783 "trtype": "$TEST_TRANSPORT", 00:39:07.783 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:07.783 "adrfam": "ipv4", 00:39:07.783 "trsvcid": "$NVMF_PORT", 00:39:07.783 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:07.783 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:07.783 "hdgst": ${hdgst:-false}, 00:39:07.783 "ddgst": ${ddgst:-false} 00:39:07.783 }, 00:39:07.783 "method": "bdev_nvme_attach_controller" 00:39:07.783 } 00:39:07.783 EOF 00:39:07.783 )") 00:39:07.783 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:39:07.783 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
00:39:07.783 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:39:07.783 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:07.783 "params": { 00:39:07.783 "name": "Nvme0", 00:39:07.783 "trtype": "tcp", 00:39:07.783 "traddr": "10.0.0.2", 00:39:07.783 "adrfam": "ipv4", 00:39:07.783 "trsvcid": "4420", 00:39:07.783 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:07.783 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:07.783 "hdgst": false, 00:39:07.783 "ddgst": false 00:39:07.783 }, 00:39:07.783 "method": "bdev_nvme_attach_controller" 00:39:07.783 }' 00:39:08.041 [2024-11-17 02:59:16.246738] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:39:08.041 [2024-11-17 02:59:16.246863] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3158763 ] 00:39:08.041 [2024-11-17 02:59:16.385541] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:08.299 [2024-11-17 02:59:16.514969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:08.866 Running I/O for 10 seconds... 
00:39:08.866 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:08.866 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:39:08.866 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:39:08.866 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:08.866 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:08.866 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:08.866 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:39:08.867 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:39:08.867 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:39:08.867 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:39:08.867 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:39:08.867 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:39:08.867 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:39:08.867 02:59:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:39:08.867 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:39:08.867 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:39:08.867 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:08.867 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:08.867 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:08.867 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=131 00:39:08.867 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 131 -ge 100 ']' 00:39:08.867 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:39:08.867 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:39:08.867 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:39:08.867 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:39:08.867 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:08.867 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:08.867 
[2024-11-17 02:59:17.263630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:39:08.867 [2024-11-17 02:59:17.263698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:08.867 [2024-11-17 02:59:17.263726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:39:08.867 [2024-11-17 02:59:17.263749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:08.867 [2024-11-17 02:59:17.263771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:39:08.867 [2024-11-17 02:59:17.263792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:08.867 [2024-11-17 02:59:17.263814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:39:08.867 [2024-11-17 02:59:17.263835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:08.867 [2024-11-17 02:59:17.263855] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:39:08.867 [2024-11-17 02:59:17.264354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.867 [2024-11-17 02:59:17.264415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:08.867 [2024-11-17 02:59:17.264475] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.867 [2024-11-17 02:59:17.264499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:08.867 [2024-11-17 02:59:17.264539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.867 [2024-11-17 02:59:17.264578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:08.867 [2024-11-17 02:59:17.264604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.867 [2024-11-17 02:59:17.264626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:08.867 [2024-11-17 02:59:17.264650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.867 [2024-11-17 02:59:17.264672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:08.867 [2024-11-17 02:59:17.264702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.867 [2024-11-17 02:59:17.264724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:08.867 [2024-11-17 02:59:17.264747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.867 [2024-11-17 02:59:17.264768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:08.867 [2024-11-17 02:59:17.264792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.867 [2024-11-17 02:59:17.264814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:08.867 [2024-11-17 02:59:17.264837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.867 [2024-11-17 02:59:17.264859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:08.867 [2024-11-17 02:59:17.264883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.867 [2024-11-17 02:59:17.264904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:08.867 [2024-11-17 02:59:17.264928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.867 [2024-11-17 02:59:17.264950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:08.867 [2024-11-17 02:59:17.264973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.867 [2024-11-17 02:59:17.264995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:08.867 [2024-11-17 02:59:17.265018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.867 
[2024-11-17 02:59:17.265039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:08.867 [2024-11-17 02:59:17.265067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.867 [2024-11-17 02:59:17.265093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:08.867 [2024-11-17 02:59:17.265140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.867 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:08.867 [2024-11-17 02:59:17.265163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:08.867 [2024-11-17 02:59:17.265190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.867 [2024-11-17 02:59:17.265212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:08.867 [2024-11-17 02:59:17.265235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.867 [2024-11-17 02:59:17.265257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:08.867 [2024-11-17 02:59:17.265280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.868 [2024-11-17 02:59:17.265302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:08.868 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:39:08.868 [2024-11-17 02:59:17.265325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.868 [2024-11-17 02:59:17.265347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:08.868 [2024-11-17 02:59:17.265369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.868 [2024-11-17 02:59:17.265399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:08.868 [2024-11-17 02:59:17.265423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.868 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:08.868 [2024-11-17 02:59:17.265444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:08.868 [2024-11-17 02:59:17.265470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.868 [2024-11-17 02:59:17.265492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:08.868 [2024-11-17 02:59:17.265515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:39:08.868 [2024-11-17 02:59:17.265537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:08.868 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:08.868 [2024-11-17 02:59:17.265560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.868 [2024-11-17 02:59:17.265582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:08.868 [2024-11-17 02:59:17.265610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.868 [2024-11-17 02:59:17.265633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:08.868 [2024-11-17 02:59:17.265656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.868 [2024-11-17 02:59:17.265678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:08.868 [2024-11-17 02:59:17.265701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.868 [2024-11-17 02:59:17.265722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:08.868 [2024-11-17 02:59:17.265746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.868 [2024-11-17 02:59:17.265767] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:08.868 [2024-11-17 02:59:17.265791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.868 [2024-11-17 02:59:17.265813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:08.868 [2024-11-17 02:59:17.265836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.868 [2024-11-17 02:59:17.265858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:08.868 [2024-11-17 02:59:17.265884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.868 [2024-11-17 02:59:17.265906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:08.868 [2024-11-17 02:59:17.265929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.868 [2024-11-17 02:59:17.265951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:08.868 [2024-11-17 02:59:17.265974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.868 [2024-11-17 02:59:17.265996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:08.868 [2024-11-17 02:59:17.266020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.868 [2024-11-17 02:59:17.266041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:08.868 [2024-11-17 02:59:17.266065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.868 [2024-11-17 02:59:17.266087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:08.868 [2024-11-17 02:59:17.266120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.868 [2024-11-17 02:59:17.266153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:08.868 [2024-11-17 02:59:17.266176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.868 [2024-11-17 02:59:17.266202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:08.868 [2024-11-17 02:59:17.266227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.868 [2024-11-17 02:59:17.266249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:08.868 [2024-11-17 02:59:17.266273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.868 [2024-11-17 02:59:17.266294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:08.868 
[2024-11-17 02:59:17.266318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.868 [2024-11-17 02:59:17.266340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:08.868 [2024-11-17 02:59:17.266364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.868 [2024-11-17 02:59:17.266395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:08.868 [2024-11-17 02:59:17.266419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.868 [2024-11-17 02:59:17.266440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:08.868 [2024-11-17 02:59:17.266464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.868 [2024-11-17 02:59:17.266485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:08.868 [2024-11-17 02:59:17.266511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.868 [2024-11-17 02:59:17.266534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:08.868 [2024-11-17 02:59:17.266557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.868 [2024-11-17 02:59:17.266579] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:08.868 [2024-11-17 02:59:17.266603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.868 [2024-11-17 02:59:17.266625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:08.868 [2024-11-17 02:59:17.266650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.868 [2024-11-17 02:59:17.266672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:08.868 [2024-11-17 02:59:17.266696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.868 [2024-11-17 02:59:17.266718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:08.868 [2024-11-17 02:59:17.266741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.868 [2024-11-17 02:59:17.266763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:08.868 [2024-11-17 02:59:17.266791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.868 [2024-11-17 02:59:17.266813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:08.868 [2024-11-17 02:59:17.266837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.868 [2024-11-17 02:59:17.266860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:08.868 [2024-11-17 02:59:17.266884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.868 [2024-11-17 02:59:17.266906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:08.868 [2024-11-17 02:59:17.266929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.868 [2024-11-17 02:59:17.266950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:08.868 [2024-11-17 02:59:17.266974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.868 [2024-11-17 02:59:17.266997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:08.868 [2024-11-17 02:59:17.267036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.868 [2024-11-17 02:59:17.267059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:08.869 [2024-11-17 02:59:17.267105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.869 [2024-11-17 02:59:17.267141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:39:08.869 [2024-11-17 02:59:17.267167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.869 [2024-11-17 02:59:17.267189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:08.869 [2024-11-17 02:59:17.267214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.869 [2024-11-17 02:59:17.267235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:08.869 [2024-11-17 02:59:17.267259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.869 [2024-11-17 02:59:17.267281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:08.869 [2024-11-17 02:59:17.267304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.869 [2024-11-17 02:59:17.267326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:08.869 [2024-11-17 02:59:17.267350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.869 [2024-11-17 02:59:17.267371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:08.869 [2024-11-17 02:59:17.267401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.869 [2024-11-17 
02:59:17.267446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:08.869 [2024-11-17 02:59:17.267473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.869 [2024-11-17 02:59:17.267495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:08.869 [2024-11-17 02:59:17.267526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.869 [2024-11-17 02:59:17.267547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:08.869 [2024-11-17 02:59:17.267620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:39:08.869 [2024-11-17 02:59:17.269177] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:39:08.869 task offset: 24576 on job bdev=Nvme0n1 fails 00:39:08.869 00:39:08.869 Latency(us) 00:39:08.869 [2024-11-17T01:59:17.329Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:08.869 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:39:08.869 Job: Nvme0n1 ended in about 0.17 seconds with error 00:39:08.869 Verification LBA range: start 0x0 length 0x400 00:39:08.869 Nvme0n1 : 0.17 1136.69 71.04 378.90 0.00 39591.59 4733.16 41554.68 00:39:08.869 [2024-11-17T01:59:17.329Z] =================================================================================================================== 00:39:08.869 [2024-11-17T01:59:17.329Z] Total : 1136.69 71.04 378.90 0.00 39591.59 4733.16 41554.68 00:39:08.869 02:59:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:08.869 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:39:08.869 [2024-11-17 02:59:17.274032] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:39:08.869 [2024-11-17 02:59:17.274091] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:39:08.869 [2024-11-17 02:59:17.280292] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:39:10.244 02:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3158763 00:39:10.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3158763) - No such process 00:39:10.244 02:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:39:10.244 02:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:39:10.244 02:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:39:10.244 02:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:39:10.244 02:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:39:10.244 02:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:39:10.244 02:59:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:10.244 02:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:10.244 { 00:39:10.244 "params": { 00:39:10.244 "name": "Nvme$subsystem", 00:39:10.244 "trtype": "$TEST_TRANSPORT", 00:39:10.244 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:10.244 "adrfam": "ipv4", 00:39:10.244 "trsvcid": "$NVMF_PORT", 00:39:10.244 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:10.244 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:10.244 "hdgst": ${hdgst:-false}, 00:39:10.244 "ddgst": ${ddgst:-false} 00:39:10.244 }, 00:39:10.244 "method": "bdev_nvme_attach_controller" 00:39:10.244 } 00:39:10.244 EOF 00:39:10.244 )") 00:39:10.244 02:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:39:10.244 02:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:39:10.244 02:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:39:10.244 02:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:10.244 "params": { 00:39:10.244 "name": "Nvme0", 00:39:10.244 "trtype": "tcp", 00:39:10.244 "traddr": "10.0.0.2", 00:39:10.244 "adrfam": "ipv4", 00:39:10.244 "trsvcid": "4420", 00:39:10.244 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:10.244 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:10.244 "hdgst": false, 00:39:10.244 "ddgst": false 00:39:10.244 }, 00:39:10.244 "method": "bdev_nvme_attach_controller" 00:39:10.244 }' 00:39:10.244 [2024-11-17 02:59:18.360020] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:39:10.244 [2024-11-17 02:59:18.360166] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3159040 ] 00:39:10.244 [2024-11-17 02:59:18.497228] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:10.244 [2024-11-17 02:59:18.624787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:10.820 Running I/O for 1 seconds... 00:39:11.753 1344.00 IOPS, 84.00 MiB/s 00:39:11.753 Latency(us) 00:39:11.753 [2024-11-17T01:59:20.213Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:11.753 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:39:11.753 Verification LBA range: start 0x0 length 0x400 00:39:11.753 Nvme0n1 : 1.02 1386.79 86.67 0.00 0.00 45359.15 7621.59 40001.23 00:39:11.753 [2024-11-17T01:59:20.213Z] =================================================================================================================== 00:39:11.753 [2024-11-17T01:59:20.213Z] Total : 1386.79 86.67 0.00 0.00 45359.15 7621.59 40001.23 00:39:12.687 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:39:12.687 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:39:12.687 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:39:12.687 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:39:12.687 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
target/host_management.sh@40 -- # nvmftestfini 00:39:12.687 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:12.687 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:39:12.687 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:12.687 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:39:12.687 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:12.687 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:12.687 rmmod nvme_tcp 00:39:12.687 rmmod nvme_fabrics 00:39:12.687 rmmod nvme_keyring 00:39:12.687 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:12.687 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:39:12.687 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:39:12.687 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 3158585 ']' 00:39:12.687 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 3158585 00:39:12.687 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 3158585 ']' 00:39:12.687 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 3158585 00:39:12.687 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:39:12.687 02:59:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:12.687 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3158585 00:39:12.687 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:12.687 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:12.687 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3158585' 00:39:12.687 killing process with pid 3158585 00:39:12.687 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 3158585 00:39:12.687 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 3158585 00:39:14.062 [2024-11-17 02:59:22.246282] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:39:14.063 02:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:14.063 02:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:14.063 02:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:14.063 02:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:39:14.063 02:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:39:14.063 02:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:14.063 02:59:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:39:14.063 02:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:14.063 02:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:14.063 02:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:14.063 02:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:14.063 02:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:15.969 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:15.969 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:39:15.969 00:39:15.969 real 0m11.691s 00:39:15.969 user 0m25.243s 00:39:15.969 sys 0m4.526s 00:39:15.969 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:15.969 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:15.969 ************************************ 00:39:15.969 END TEST nvmf_host_management 00:39:15.969 ************************************ 00:39:15.969 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:39:15.969 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:15.969 
02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:15.969 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:16.228 ************************************ 00:39:16.228 START TEST nvmf_lvol 00:39:16.228 ************************************ 00:39:16.228 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:39:16.228 * Looking for test storage... 00:39:16.228 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:16.228 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:16.228 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:39:16.228 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:16.228 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:16.228 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:16.228 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:16.228 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:16.228 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:39:16.228 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:39:16.228 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:39:16.228 02:59:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:39:16.228 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:39:16.228 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:39:16.228 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:39:16.228 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:16.228 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:39:16.228 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:39:16.228 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:16.228 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:16.228 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:39:16.228 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:39:16.228 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:16.228 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:39:16.228 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:39:16.228 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:39:16.228 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:39:16.228 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:16.228 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:39:16.228 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:39:16.228 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:16.228 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:16.228 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:39:16.228 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:16.228 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:16.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:16.228 --rc genhtml_branch_coverage=1 00:39:16.228 --rc 
genhtml_function_coverage=1 00:39:16.228 --rc genhtml_legend=1 00:39:16.228 --rc geninfo_all_blocks=1 00:39:16.228 --rc geninfo_unexecuted_blocks=1 00:39:16.228 00:39:16.228 ' 00:39:16.228 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:16.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:16.228 --rc genhtml_branch_coverage=1 00:39:16.228 --rc genhtml_function_coverage=1 00:39:16.228 --rc genhtml_legend=1 00:39:16.228 --rc geninfo_all_blocks=1 00:39:16.228 --rc geninfo_unexecuted_blocks=1 00:39:16.228 00:39:16.228 ' 00:39:16.228 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:16.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:16.228 --rc genhtml_branch_coverage=1 00:39:16.228 --rc genhtml_function_coverage=1 00:39:16.228 --rc genhtml_legend=1 00:39:16.228 --rc geninfo_all_blocks=1 00:39:16.228 --rc geninfo_unexecuted_blocks=1 00:39:16.228 00:39:16.228 ' 00:39:16.228 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:16.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:16.228 --rc genhtml_branch_coverage=1 00:39:16.228 --rc genhtml_function_coverage=1 00:39:16.228 --rc genhtml_legend=1 00:39:16.228 --rc geninfo_all_blocks=1 00:39:16.228 --rc geninfo_unexecuted_blocks=1 00:39:16.228 00:39:16.228 ' 00:39:16.228 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:16.228 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:39:16.228 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:16.228 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:39:16.228 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:16.228 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:16.228 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:16.228 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:16.228 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:16.228 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:16.228 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:16.228 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:16.228 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:16.228 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:16.228 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:16.228 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:16.228 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:16.228 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:16.229 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:16.229 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:39:16.229 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:16.229 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:16.229 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:16.229 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:16.229 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:16.229 02:59:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:16.229 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:39:16.229 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:16.229 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:39:16.229 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:16.229 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:16.229 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:16.229 02:59:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:16.229 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:16.229 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:16.229 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:16.229 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:16.229 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:16.229 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:16.229 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:16.229 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:16.229 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:39:16.229 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:39:16.229 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:16.229 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:39:16.229 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:16.229 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:16.229 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # 
prepare_net_devs 00:39:16.229 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:16.229 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:16.229 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:16.229 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:16.229 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:16.229 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:16.229 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:16.229 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:39:16.229 02:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:39:18.143 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:18.143 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:39:18.143 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:18.143 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:18.143 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:18.143 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:18.143 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:39:18.143 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:39:18.143 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:18.143 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:39:18.143 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:39:18.143 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:39:18.143 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:39:18.143 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:39:18.143 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:39:18.143 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:18.143 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:18.143 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:18.143 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:18.143 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:18.143 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:18.143 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:18.143 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:18.143 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:18.143 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:18.143 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:18.143 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:18.143 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:18.143 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:18.143 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:18.143 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:18.143 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:18.143 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:18.143 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:18.143 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:18.143 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:18.143 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:18.143 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:18.143 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- 
# [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:18.143 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:18.143 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:18.143 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:18.143 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:18.143 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:18.143 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:18.143 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:18.143 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:18.143 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:18.143 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:18.143 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:18.143 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:18.143 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:18.143 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:18.143 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:18.143 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:18.143 02:59:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:18.143 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:18.143 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:18.143 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:18.143 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:18.143 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:18.143 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:18.143 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:18.143 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:18.143 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:18.143 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:18.143 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:18.143 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:18.143 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:18.143 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:18.143 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:18.143 02:59:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:18.143 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:18.143 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:39:18.143 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:18.143 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:18.143 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:18.143 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:18.143 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:18.143 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:18.143 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:18.143 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:18.143 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:18.143 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:18.143 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:18.143 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:18.143 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:18.143 02:59:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:18.143 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:18.143 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:18.143 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:18.143 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:18.402 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:18.402 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:18.402 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:18.402 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:18.402 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:18.402 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:18.402 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:18.402 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:18.402 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:39:18.402 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:39:18.402 00:39:18.402 --- 10.0.0.2 ping statistics --- 00:39:18.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:18.402 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:39:18.402 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:18.402 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:18.402 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:39:18.402 00:39:18.402 --- 10.0.0.1 ping statistics --- 00:39:18.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:18.402 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:39:18.403 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:18.403 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:39:18.403 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:18.403 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:18.403 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:18.403 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:18.403 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:18.403 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:18.403 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:18.403 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:39:18.403 
02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:18.403 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:18.403 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:39:18.403 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=3161378 00:39:18.403 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:39:18.403 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 3161378 00:39:18.403 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 3161378 ']' 00:39:18.403 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:18.403 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:18.403 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:18.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:18.403 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:18.403 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:39:18.403 [2024-11-17 02:59:26.826873] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:39:18.403 [2024-11-17 02:59:26.829415] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:39:18.403 [2024-11-17 02:59:26.829598] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:18.661 [2024-11-17 02:59:26.983601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:39:18.919 [2024-11-17 02:59:27.126762] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:18.919 [2024-11-17 02:59:27.126832] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:18.919 [2024-11-17 02:59:27.126861] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:18.919 [2024-11-17 02:59:27.126883] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:18.919 [2024-11-17 02:59:27.126905] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:18.919 [2024-11-17 02:59:27.129572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:18.919 [2024-11-17 02:59:27.129647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:18.919 [2024-11-17 02:59:27.129655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:19.178 [2024-11-17 02:59:27.495008] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:19.178 [2024-11-17 02:59:27.496094] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:19.178 [2024-11-17 02:59:27.496892] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:39:19.178 [2024-11-17 02:59:27.497247] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:39:19.435 02:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:19.435 02:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:39:19.435 02:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:19.435 02:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:19.435 02:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:39:19.435 02:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:19.435 02:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:39:19.693 [2024-11-17 02:59:28.046730] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:19.693 02:59:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:20.259 02:59:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:39:20.259 02:59:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:20.518 02:59:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:39:20.518 02:59:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:39:20.776 02:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:39:21.034 02:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=be162c0f-bb43-4c84-a162-8a6776c8375e 00:39:21.034 02:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u be162c0f-bb43-4c84-a162-8a6776c8375e lvol 20 00:39:21.291 02:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=3b94f4c0-d421-49ef-b394-70bc7927ae32 00:39:21.291 02:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:39:21.549 02:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3b94f4c0-d421-49ef-b394-70bc7927ae32 00:39:21.806 02:59:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:22.064 [2024-11-17 02:59:30.454938] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:22.064 02:59:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:22.321 
02:59:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3161930 00:39:22.321 02:59:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:39:22.321 02:59:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:39:23.696 02:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 3b94f4c0-d421-49ef-b394-70bc7927ae32 MY_SNAPSHOT 00:39:23.696 02:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=a069e337-bd23-40d5-b617-506c6413ee69 00:39:23.696 02:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 3b94f4c0-d421-49ef-b394-70bc7927ae32 30 00:39:23.954 02:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone a069e337-bd23-40d5-b617-506c6413ee69 MY_CLONE 00:39:24.520 02:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=48f99122-7b82-446a-93cd-552f6ef71dea 00:39:24.520 02:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 48f99122-7b82-446a-93cd-552f6ef71dea 00:39:25.087 02:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3161930 00:39:33.256 Initializing NVMe Controllers 00:39:33.256 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:39:33.256 
Controller IO queue size 128, less than required. 00:39:33.256 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:39:33.256 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:39:33.256 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:39:33.256 Initialization complete. Launching workers. 00:39:33.256 ======================================================== 00:39:33.256 Latency(us) 00:39:33.256 Device Information : IOPS MiB/s Average min max 00:39:33.256 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 8080.80 31.57 15853.51 516.45 201101.77 00:39:33.256 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 8131.80 31.76 15748.36 4510.37 149700.23 00:39:33.256 ======================================================== 00:39:33.256 Total : 16212.60 63.33 15800.77 516.45 201101.77 00:39:33.256 00:39:33.256 02:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:33.256 02:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3b94f4c0-d421-49ef-b394-70bc7927ae32 00:39:33.514 02:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u be162c0f-bb43-4c84-a162-8a6776c8375e 00:39:33.772 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:39:33.772 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:39:33.772 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- 
# nvmftestfini 00:39:33.772 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:33.772 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:39:33.772 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:33.772 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:39:33.772 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:33.772 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:33.772 rmmod nvme_tcp 00:39:33.772 rmmod nvme_fabrics 00:39:33.772 rmmod nvme_keyring 00:39:34.031 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:34.031 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:39:34.031 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:39:34.031 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 3161378 ']' 00:39:34.031 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 3161378 00:39:34.031 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 3161378 ']' 00:39:34.031 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 3161378 00:39:34.031 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:39:34.031 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:34.031 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 3161378 00:39:34.031 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:34.031 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:34.031 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3161378' 00:39:34.031 killing process with pid 3161378 00:39:34.031 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 3161378 00:39:34.031 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 3161378 00:39:35.406 02:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:35.406 02:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:35.406 02:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:35.406 02:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:39:35.406 02:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:39:35.406 02:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:35.406 02:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:39:35.406 02:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:35.406 02:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:35.406 02:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:35.406 02:59:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:35.406 02:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:37.310 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:37.310 00:39:37.310 real 0m21.294s 00:39:37.310 user 0m59.217s 00:39:37.310 sys 0m7.603s 00:39:37.310 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:37.310 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:39:37.310 ************************************ 00:39:37.310 END TEST nvmf_lvol 00:39:37.310 ************************************ 00:39:37.310 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:39:37.310 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:37.310 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:37.310 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:37.569 ************************************ 00:39:37.569 START TEST nvmf_lvs_grow 00:39:37.569 ************************************ 00:39:37.569 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:39:37.569 * Looking for test storage... 
00:39:37.569 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:37.569 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:37.569 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:39:37.569 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:37.569 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:37.569 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:37.569 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:37.569 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:37.569 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:39:37.569 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:39:37.569 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:39:37.569 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:39:37.569 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:39:37.569 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:39:37.569 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:39:37.569 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:37.569 02:59:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:39:37.569 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:39:37.569 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:37.569 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:37.569 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:39:37.569 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:39:37.569 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:37.569 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:39:37.569 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:39:37.569 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:39:37.569 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:39:37.569 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:37.569 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:39:37.569 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:39:37.569 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:37.569 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:37.569 02:59:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:39:37.569 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:37.569 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:37.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:37.569 --rc genhtml_branch_coverage=1 00:39:37.569 --rc genhtml_function_coverage=1 00:39:37.569 --rc genhtml_legend=1 00:39:37.569 --rc geninfo_all_blocks=1 00:39:37.569 --rc geninfo_unexecuted_blocks=1 00:39:37.569 00:39:37.569 ' 00:39:37.569 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:37.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:37.569 --rc genhtml_branch_coverage=1 00:39:37.569 --rc genhtml_function_coverage=1 00:39:37.569 --rc genhtml_legend=1 00:39:37.569 --rc geninfo_all_blocks=1 00:39:37.569 --rc geninfo_unexecuted_blocks=1 00:39:37.569 00:39:37.569 ' 00:39:37.569 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:37.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:37.569 --rc genhtml_branch_coverage=1 00:39:37.569 --rc genhtml_function_coverage=1 00:39:37.569 --rc genhtml_legend=1 00:39:37.569 --rc geninfo_all_blocks=1 00:39:37.569 --rc geninfo_unexecuted_blocks=1 00:39:37.569 00:39:37.569 ' 00:39:37.569 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:37.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:37.569 --rc genhtml_branch_coverage=1 00:39:37.569 --rc genhtml_function_coverage=1 00:39:37.569 --rc genhtml_legend=1 00:39:37.569 --rc geninfo_all_blocks=1 00:39:37.569 --rc 
geninfo_unexecuted_blocks=1 00:39:37.569 00:39:37.569 ' 00:39:37.569 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:37.569 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:39:37.569 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:37.569 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:37.569 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:37.569 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:37.569 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:37.569 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:37.569 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:37.569 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:37.569 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:37.569 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:37.569 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:37.569 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:37.569 02:59:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:37.569 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:37.569 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:37.569 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:37.569 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:37.569 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:39:37.569 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:37.570 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:37.570 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:37.570 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:37.570 02:59:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:37.570 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:37.570 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:39:37.570 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:37.570 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:39:37.570 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:37.570 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:37.570 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:37.570 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:37.570 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:37.570 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:37.570 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:37.570 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:37.570 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:37.570 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:37.570 02:59:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:37.570 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:39:37.570 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:39:37.570 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:37.570 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:37.570 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:37.570 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:37.570 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:37.570 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:37.570 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:37.570 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:37.570 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:37.570 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:37.570 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:39:37.570 02:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:39.471 
02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:39.471 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:39:39.471 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:39.471 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:39.471 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:39.471 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:39.471 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:39.471 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:39:39.471 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:39.471 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:39:39.471 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:39:39.471 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:39:39.471 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:39:39.471 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:39:39.471 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:39:39.471 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:39.471 02:59:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:39.471 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:39.471 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:39.471 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:39.471 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:39.471 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:39.471 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:39.472 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:39.472 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:39.472 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:39.472 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:39.472 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:39.472 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:39.472 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:39.472 02:59:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:39.472 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:39.472 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:39.472 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:39.472 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:39.472 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:39.472 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:39.472 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:39.472 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:39.472 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:39.472 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:39.472 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:39.472 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:39.472 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:39.472 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:39.472 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:39.472 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:39:39.472 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:39.472 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:39.472 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:39.472 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:39.472 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:39.472 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:39.472 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:39.472 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:39.472 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:39.472 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:39.472 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:39.472 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:39.472 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:39.472 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:39.472 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:39.472 02:59:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:39.472 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:39.472 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:39.472 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:39.472 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:39.472 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:39.472 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:39.472 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:39.472 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:39.472 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:39.472 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:39.472 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:39:39.472 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:39.472 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:39.472 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:39.472 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:39.472 
02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:39.472 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:39.472 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:39.472 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:39.472 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:39.472 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:39.472 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:39.472 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:39.472 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:39.472 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:39.472 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:39.472 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:39.472 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:39.472 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:39.472 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:39:39.472 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:39.472 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:39.472 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:39.731 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:39.731 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:39.731 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:39.731 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:39.731 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:39.731 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.382 ms 00:39:39.731 00:39:39.731 --- 10.0.0.2 ping statistics --- 00:39:39.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:39.731 rtt min/avg/max/mdev = 0.382/0.382/0.382/0.000 ms 00:39:39.731 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:39.731 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:39.731 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:39:39.731 00:39:39.731 --- 10.0.0.1 ping statistics --- 00:39:39.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:39.731 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:39:39.731 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:39.731 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:39:39.731 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:39.731 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:39.731 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:39.731 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:39.731 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:39.731 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:39.731 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:39.731 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:39:39.731 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:39.731 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:39.731 02:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:39.731 02:59:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=3165330 00:39:39.731 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:39:39.731 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 3165330 00:39:39.731 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 3165330 ']' 00:39:39.731 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:39.731 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:39.731 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:39.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:39.731 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:39.731 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:39.731 [2024-11-17 02:59:48.088285] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:39.731 [2024-11-17 02:59:48.090834] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:39:39.731 [2024-11-17 02:59:48.090945] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:39.989 [2024-11-17 02:59:48.233991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:39.989 [2024-11-17 02:59:48.354646] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:39.989 [2024-11-17 02:59:48.354718] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:39.989 [2024-11-17 02:59:48.354757] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:39.989 [2024-11-17 02:59:48.354775] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:39.989 [2024-11-17 02:59:48.354793] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:39.989 [2024-11-17 02:59:48.356303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:40.247 [2024-11-17 02:59:48.697177] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:40.247 [2024-11-17 02:59:48.697599] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:39:40.813 02:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:40.813 02:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:39:40.813 02:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:40.813 02:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:40.813 02:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:40.813 02:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:40.813 02:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:39:41.071 [2024-11-17 02:59:49.305352] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:41.071 02:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:39:41.071 02:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:41.071 02:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:41.071 02:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:41.071 ************************************ 00:39:41.071 START TEST lvs_grow_clean 00:39:41.071 ************************************ 00:39:41.071 02:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:39:41.071 02:59:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:39:41.071 02:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:39:41.071 02:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:39:41.071 02:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:39:41.071 02:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:39:41.071 02:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:39:41.071 02:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:41.071 02:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:41.071 02:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:39:41.328 02:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:39:41.328 02:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:39:41.586 02:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=4eeec00e-c9da-40d7-a357-bcc86693718d 00:39:41.586 02:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4eeec00e-c9da-40d7-a357-bcc86693718d 00:39:41.586 02:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:39:41.844 02:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:39:41.844 02:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:39:41.844 02:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4eeec00e-c9da-40d7-a357-bcc86693718d lvol 150 00:39:42.102 02:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=df5c5fc5-4cce-43b2-8b9f-896ed1f815e1 00:39:42.102 02:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:42.102 02:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:39:42.360 [2024-11-17 02:59:50.785184] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:39:42.360 [2024-11-17 02:59:50.785308] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:39:42.360 true 00:39:42.360 02:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4eeec00e-c9da-40d7-a357-bcc86693718d 00:39:42.360 02:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:39:42.618 02:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:39:42.618 02:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:39:43.185 02:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 df5c5fc5-4cce-43b2-8b9f-896ed1f815e1 00:39:43.185 02:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:43.444 [2024-11-17 02:59:51.869645] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:43.444 02:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:43.702 02:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3165898 00:39:43.702 02:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:39:43.702 02:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:39:43.960 02:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3165898 /var/tmp/bdevperf.sock 00:39:43.960 02:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 3165898 ']' 00:39:43.960 02:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:39:43.960 02:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:43.960 02:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:39:43.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:39:43.960 02:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:43.960 02:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:39:43.960 [2024-11-17 02:59:52.242717] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:39:43.960 [2024-11-17 02:59:52.242856] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3165898 ] 00:39:43.960 [2024-11-17 02:59:52.378680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:44.217 [2024-11-17 02:59:52.508655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:44.784 02:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:44.784 02:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:39:44.784 02:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:39:45.350 Nvme0n1 00:39:45.350 02:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:39:45.608 [ 00:39:45.608 { 00:39:45.608 "name": "Nvme0n1", 00:39:45.608 "aliases": [ 00:39:45.608 "df5c5fc5-4cce-43b2-8b9f-896ed1f815e1" 00:39:45.608 ], 00:39:45.608 "product_name": "NVMe disk", 00:39:45.608 
"block_size": 4096, 00:39:45.608 "num_blocks": 38912, 00:39:45.608 "uuid": "df5c5fc5-4cce-43b2-8b9f-896ed1f815e1", 00:39:45.608 "numa_id": 0, 00:39:45.608 "assigned_rate_limits": { 00:39:45.608 "rw_ios_per_sec": 0, 00:39:45.608 "rw_mbytes_per_sec": 0, 00:39:45.608 "r_mbytes_per_sec": 0, 00:39:45.608 "w_mbytes_per_sec": 0 00:39:45.608 }, 00:39:45.608 "claimed": false, 00:39:45.608 "zoned": false, 00:39:45.608 "supported_io_types": { 00:39:45.608 "read": true, 00:39:45.608 "write": true, 00:39:45.608 "unmap": true, 00:39:45.608 "flush": true, 00:39:45.608 "reset": true, 00:39:45.608 "nvme_admin": true, 00:39:45.608 "nvme_io": true, 00:39:45.608 "nvme_io_md": false, 00:39:45.608 "write_zeroes": true, 00:39:45.608 "zcopy": false, 00:39:45.608 "get_zone_info": false, 00:39:45.608 "zone_management": false, 00:39:45.608 "zone_append": false, 00:39:45.608 "compare": true, 00:39:45.608 "compare_and_write": true, 00:39:45.608 "abort": true, 00:39:45.608 "seek_hole": false, 00:39:45.608 "seek_data": false, 00:39:45.608 "copy": true, 00:39:45.608 "nvme_iov_md": false 00:39:45.608 }, 00:39:45.608 "memory_domains": [ 00:39:45.608 { 00:39:45.608 "dma_device_id": "system", 00:39:45.608 "dma_device_type": 1 00:39:45.608 } 00:39:45.608 ], 00:39:45.608 "driver_specific": { 00:39:45.608 "nvme": [ 00:39:45.608 { 00:39:45.608 "trid": { 00:39:45.608 "trtype": "TCP", 00:39:45.608 "adrfam": "IPv4", 00:39:45.608 "traddr": "10.0.0.2", 00:39:45.608 "trsvcid": "4420", 00:39:45.608 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:39:45.608 }, 00:39:45.608 "ctrlr_data": { 00:39:45.608 "cntlid": 1, 00:39:45.608 "vendor_id": "0x8086", 00:39:45.608 "model_number": "SPDK bdev Controller", 00:39:45.608 "serial_number": "SPDK0", 00:39:45.608 "firmware_revision": "25.01", 00:39:45.608 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:45.608 "oacs": { 00:39:45.608 "security": 0, 00:39:45.608 "format": 0, 00:39:45.608 "firmware": 0, 00:39:45.608 "ns_manage": 0 00:39:45.608 }, 00:39:45.608 "multi_ctrlr": true, 
00:39:45.608 "ana_reporting": false 00:39:45.608 }, 00:39:45.608 "vs": { 00:39:45.608 "nvme_version": "1.3" 00:39:45.608 }, 00:39:45.608 "ns_data": { 00:39:45.608 "id": 1, 00:39:45.608 "can_share": true 00:39:45.608 } 00:39:45.608 } 00:39:45.608 ], 00:39:45.608 "mp_policy": "active_passive" 00:39:45.608 } 00:39:45.608 } 00:39:45.608 ] 00:39:45.608 02:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3166038 00:39:45.608 02:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:39:45.608 02:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:39:45.608 Running I/O for 10 seconds... 00:39:46.543 Latency(us) 00:39:46.543 [2024-11-17T01:59:55.003Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:46.543 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:46.543 Nvme0n1 : 1.00 10287.00 40.18 0.00 0.00 0.00 0.00 0.00 00:39:46.543 [2024-11-17T01:59:55.003Z] =================================================================================================================== 00:39:46.543 [2024-11-17T01:59:55.003Z] Total : 10287.00 40.18 0.00 0.00 0.00 0.00 0.00 00:39:46.543 00:39:47.478 02:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 4eeec00e-c9da-40d7-a357-bcc86693718d 00:39:47.737 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:47.737 Nvme0n1 : 2.00 10414.00 40.68 0.00 0.00 0.00 0.00 0.00 00:39:47.737 [2024-11-17T01:59:56.197Z] 
=================================================================================================================== 00:39:47.737 [2024-11-17T01:59:56.197Z] Total : 10414.00 40.68 0.00 0.00 0.00 0.00 0.00 00:39:47.737 00:39:47.737 true 00:39:47.737 02:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4eeec00e-c9da-40d7-a357-bcc86693718d 00:39:47.737 02:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:39:47.996 02:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:39:47.996 02:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:39:47.996 02:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3166038 00:39:48.562 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:48.562 Nvme0n1 : 3.00 10456.33 40.85 0.00 0.00 0.00 0.00 0.00 00:39:48.562 [2024-11-17T01:59:57.022Z] =================================================================================================================== 00:39:48.562 [2024-11-17T01:59:57.022Z] Total : 10456.33 40.85 0.00 0.00 0.00 0.00 0.00 00:39:48.562 00:39:49.937 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:49.937 Nvme0n1 : 4.00 10493.50 40.99 0.00 0.00 0.00 0.00 0.00 00:39:49.937 [2024-11-17T01:59:58.397Z] =================================================================================================================== 00:39:49.937 [2024-11-17T01:59:58.397Z] Total : 10493.50 40.99 0.00 0.00 0.00 0.00 0.00 00:39:49.937 00:39:50.872 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:39:50.872 Nvme0n1 : 5.00 10528.40 41.13 0.00 0.00 0.00 0.00 0.00 00:39:50.872 [2024-11-17T01:59:59.332Z] =================================================================================================================== 00:39:50.872 [2024-11-17T01:59:59.332Z] Total : 10528.40 41.13 0.00 0.00 0.00 0.00 0.00 00:39:50.872 00:39:51.806 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:51.806 Nvme0n1 : 6.00 10562.17 41.26 0.00 0.00 0.00 0.00 0.00 00:39:51.806 [2024-11-17T02:00:00.266Z] =================================================================================================================== 00:39:51.806 [2024-11-17T02:00:00.266Z] Total : 10562.17 41.26 0.00 0.00 0.00 0.00 0.00 00:39:51.806 00:39:52.739 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:52.739 Nvme0n1 : 7.00 10577.29 41.32 0.00 0.00 0.00 0.00 0.00 00:39:52.739 [2024-11-17T02:00:01.199Z] =================================================================================================================== 00:39:52.739 [2024-11-17T02:00:01.199Z] Total : 10577.29 41.32 0.00 0.00 0.00 0.00 0.00 00:39:52.739 00:39:53.673 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:53.673 Nvme0n1 : 8.00 10668.00 41.67 0.00 0.00 0.00 0.00 0.00 00:39:53.673 [2024-11-17T02:00:02.133Z] =================================================================================================================== 00:39:53.673 [2024-11-17T02:00:02.133Z] Total : 10668.00 41.67 0.00 0.00 0.00 0.00 0.00 00:39:53.673 00:39:54.608 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:54.608 Nvme0n1 : 9.00 10710.33 41.84 0.00 0.00 0.00 0.00 0.00 00:39:54.608 [2024-11-17T02:00:03.068Z] =================================================================================================================== 00:39:54.608 [2024-11-17T02:00:03.068Z] Total : 10710.33 41.84 0.00 0.00 0.00 0.00 0.00 00:39:54.608 
00:39:55.983 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:55.983 Nvme0n1 : 10.00 10718.80 41.87 0.00 0.00 0.00 0.00 0.00 00:39:55.983 [2024-11-17T02:00:04.443Z] =================================================================================================================== 00:39:55.983 [2024-11-17T02:00:04.443Z] Total : 10718.80 41.87 0.00 0.00 0.00 0.00 0.00 00:39:55.983 00:39:55.983 00:39:55.983 Latency(us) 00:39:55.983 [2024-11-17T02:00:04.443Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:55.983 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:55.983 Nvme0n1 : 10.01 10724.12 41.89 0.00 0.00 11929.16 9854.67 26796.94 00:39:55.983 [2024-11-17T02:00:04.443Z] =================================================================================================================== 00:39:55.983 [2024-11-17T02:00:04.443Z] Total : 10724.12 41.89 0.00 0.00 11929.16 9854.67 26796.94 00:39:55.983 { 00:39:55.983 "results": [ 00:39:55.983 { 00:39:55.983 "job": "Nvme0n1", 00:39:55.983 "core_mask": "0x2", 00:39:55.983 "workload": "randwrite", 00:39:55.983 "status": "finished", 00:39:55.983 "queue_depth": 128, 00:39:55.983 "io_size": 4096, 00:39:55.983 "runtime": 10.006975, 00:39:55.983 "iops": 10724.11992635137, 00:39:55.983 "mibps": 41.89109346231004, 00:39:55.984 "io_failed": 0, 00:39:55.984 "io_timeout": 0, 00:39:55.984 "avg_latency_us": 11929.157099711065, 00:39:55.984 "min_latency_us": 9854.672592592593, 00:39:55.984 "max_latency_us": 26796.942222222224 00:39:55.984 } 00:39:55.984 ], 00:39:55.984 "core_count": 1 00:39:55.984 } 00:39:55.984 03:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3165898 00:39:55.984 03:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 3165898 ']' 00:39:55.984 03:00:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 3165898 00:39:55.984 03:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:39:55.984 03:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:55.984 03:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3165898 00:39:55.984 03:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:55.984 03:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:55.984 03:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3165898' 00:39:55.984 killing process with pid 3165898 00:39:55.984 03:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 3165898 00:39:55.984 Received shutdown signal, test time was about 10.000000 seconds 00:39:55.984 00:39:55.984 Latency(us) 00:39:55.984 [2024-11-17T02:00:04.444Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:55.984 [2024-11-17T02:00:04.444Z] =================================================================================================================== 00:39:55.984 [2024-11-17T02:00:04.444Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:55.984 03:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 3165898 00:39:56.550 03:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:56.808 03:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:57.376 03:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4eeec00e-c9da-40d7-a357-bcc86693718d 00:39:57.376 03:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:39:57.376 03:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:39:57.376 03:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:39:57.376 03:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:39:57.634 [2024-11-17 03:00:06.081281] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:39:57.901 03:00:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4eeec00e-c9da-40d7-a357-bcc86693718d 00:39:57.901 03:00:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:39:57.901 03:00:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4eeec00e-c9da-40d7-a357-bcc86693718d 00:39:57.901 03:00:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:57.901 03:00:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:57.901 03:00:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:57.901 03:00:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:57.901 03:00:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:57.901 03:00:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:57.901 03:00:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:57.901 03:00:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:39:57.901 03:00:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4eeec00e-c9da-40d7-a357-bcc86693718d 00:39:58.200 request: 00:39:58.200 { 00:39:58.200 "uuid": "4eeec00e-c9da-40d7-a357-bcc86693718d", 00:39:58.200 "method": 
"bdev_lvol_get_lvstores", 00:39:58.200 "req_id": 1 00:39:58.200 } 00:39:58.200 Got JSON-RPC error response 00:39:58.200 response: 00:39:58.200 { 00:39:58.200 "code": -19, 00:39:58.200 "message": "No such device" 00:39:58.200 } 00:39:58.200 03:00:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:39:58.200 03:00:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:58.200 03:00:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:58.200 03:00:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:58.200 03:00:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:39:58.200 aio_bdev 00:39:58.200 03:00:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev df5c5fc5-4cce-43b2-8b9f-896ed1f815e1 00:39:58.200 03:00:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=df5c5fc5-4cce-43b2-8b9f-896ed1f815e1 00:39:58.200 03:00:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:39:58.200 03:00:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:39:58.200 03:00:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:39:58.200 03:00:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:39:58.200 03:00:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:39:58.770 03:00:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b df5c5fc5-4cce-43b2-8b9f-896ed1f815e1 -t 2000 00:39:58.770 [ 00:39:58.770 { 00:39:58.770 "name": "df5c5fc5-4cce-43b2-8b9f-896ed1f815e1", 00:39:58.770 "aliases": [ 00:39:58.770 "lvs/lvol" 00:39:58.770 ], 00:39:58.770 "product_name": "Logical Volume", 00:39:58.770 "block_size": 4096, 00:39:58.770 "num_blocks": 38912, 00:39:58.770 "uuid": "df5c5fc5-4cce-43b2-8b9f-896ed1f815e1", 00:39:58.770 "assigned_rate_limits": { 00:39:58.770 "rw_ios_per_sec": 0, 00:39:58.770 "rw_mbytes_per_sec": 0, 00:39:58.770 "r_mbytes_per_sec": 0, 00:39:58.770 "w_mbytes_per_sec": 0 00:39:58.770 }, 00:39:58.770 "claimed": false, 00:39:58.770 "zoned": false, 00:39:58.770 "supported_io_types": { 00:39:58.770 "read": true, 00:39:58.770 "write": true, 00:39:58.770 "unmap": true, 00:39:58.770 "flush": false, 00:39:58.770 "reset": true, 00:39:58.770 "nvme_admin": false, 00:39:58.770 "nvme_io": false, 00:39:58.770 "nvme_io_md": false, 00:39:58.770 "write_zeroes": true, 00:39:58.770 "zcopy": false, 00:39:58.770 "get_zone_info": false, 00:39:58.770 "zone_management": false, 00:39:58.770 "zone_append": false, 00:39:58.770 "compare": false, 00:39:58.770 "compare_and_write": false, 00:39:58.770 "abort": false, 00:39:58.770 "seek_hole": true, 00:39:58.770 "seek_data": true, 00:39:58.770 "copy": false, 00:39:58.770 "nvme_iov_md": false 00:39:58.770 }, 00:39:58.770 "driver_specific": { 00:39:58.770 "lvol": { 00:39:58.770 "lvol_store_uuid": "4eeec00e-c9da-40d7-a357-bcc86693718d", 00:39:58.770 "base_bdev": "aio_bdev", 00:39:58.770 
"thin_provision": false, 00:39:58.770 "num_allocated_clusters": 38, 00:39:58.770 "snapshot": false, 00:39:58.770 "clone": false, 00:39:58.770 "esnap_clone": false 00:39:58.770 } 00:39:58.770 } 00:39:58.770 } 00:39:58.770 ] 00:39:58.770 03:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:39:58.770 03:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4eeec00e-c9da-40d7-a357-bcc86693718d 00:39:58.770 03:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:39:59.028 03:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:39:59.028 03:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4eeec00e-c9da-40d7-a357-bcc86693718d 00:39:59.028 03:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:39:59.594 03:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:39:59.594 03:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete df5c5fc5-4cce-43b2-8b9f-896ed1f815e1 00:39:59.852 03:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4eeec00e-c9da-40d7-a357-bcc86693718d 
00:40:00.110 03:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:40:00.369 03:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:00.369 00:40:00.369 real 0m19.282s 00:40:00.369 user 0m19.046s 00:40:00.369 sys 0m1.869s 00:40:00.369 03:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:00.369 03:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:40:00.369 ************************************ 00:40:00.369 END TEST lvs_grow_clean 00:40:00.369 ************************************ 00:40:00.369 03:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:40:00.369 03:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:40:00.369 03:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:00.369 03:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:40:00.369 ************************************ 00:40:00.369 START TEST lvs_grow_dirty 00:40:00.369 ************************************ 00:40:00.369 03:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:40:00.369 03:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:40:00.369 03:00:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:40:00.369 03:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:40:00.369 03:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:40:00.369 03:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:40:00.369 03:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:40:00.369 03:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:00.369 03:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:00.369 03:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:40:00.627 03:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:40:00.627 03:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:40:00.884 03:00:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=a5dcfbf8-6863-4246-88e4-5a1cd28af055 00:40:00.884 03:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5dcfbf8-6863-4246-88e4-5a1cd28af055 00:40:00.884 03:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:40:01.141 03:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:40:01.141 03:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:40:01.141 03:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a5dcfbf8-6863-4246-88e4-5a1cd28af055 lvol 150 00:40:01.399 03:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=8a24061a-2847-4d87-966b-e696eb2ec773 00:40:01.399 03:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:01.399 03:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:40:01.658 [2024-11-17 03:00:10.093202] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:40:01.658 [2024-11-17 
03:00:10.093360] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:40:01.658 true 00:40:01.658 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5dcfbf8-6863-4246-88e4-5a1cd28af055 00:40:01.658 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:40:02.225 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:40:02.225 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:40:02.483 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8a24061a-2847-4d87-966b-e696eb2ec773 00:40:02.741 03:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:03.000 [2024-11-17 03:00:11.281648] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:03.000 03:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:03.258 03:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3168805 00:40:03.259 03:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:40:03.259 03:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:03.259 03:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3168805 /var/tmp/bdevperf.sock 00:40:03.259 03:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3168805 ']' 00:40:03.259 03:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:40:03.259 03:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:03.259 03:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:40:03.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:40:03.259 03:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:03.259 03:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:40:03.259 [2024-11-17 03:00:11.658743] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
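[editor's note] The `lvs_grow_dirty` setup above creates a 200 MiB AIO file, builds an lvstore on it, and observes `data_clusters=49`; after `truncate -s 400M` and `bdev_aio_rescan` the count stays 49 until the explicit grow later in the log, where it becomes 99. A minimal sketch of that arithmetic, where the one-cluster metadata reservation is inferred from the log (the count is always one short of file size divided by cluster size), not taken from SPDK documentation:

```python
CLUSTER_MIB = 4  # from --cluster-sz 4194304 in the bdev_lvol_create_lvstore call above

def data_clusters(aio_mib, md_clusters=1):
    """Usable data clusters: total clusters minus the (inferred) metadata reservation."""
    return aio_mib // CLUSTER_MIB - md_clusters

print(data_clusters(200))  # 49, matches data_clusters right after lvstore creation
print(data_clusters(400))  # 99, matches data_clusters after bdev_lvol_grow_lvstore
```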
00:40:03.259 [2024-11-17 03:00:11.658873] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3168805 ] 00:40:03.517 [2024-11-17 03:00:11.805733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:03.517 [2024-11-17 03:00:11.941838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:04.451 03:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:04.451 03:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:40:04.451 03:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:40:04.710 Nvme0n1 00:40:04.710 03:00:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:40:05.276 [ 00:40:05.276 { 00:40:05.276 "name": "Nvme0n1", 00:40:05.276 "aliases": [ 00:40:05.276 "8a24061a-2847-4d87-966b-e696eb2ec773" 00:40:05.276 ], 00:40:05.276 "product_name": "NVMe disk", 00:40:05.276 "block_size": 4096, 00:40:05.276 "num_blocks": 38912, 00:40:05.276 "uuid": "8a24061a-2847-4d87-966b-e696eb2ec773", 00:40:05.276 "numa_id": 0, 00:40:05.276 "assigned_rate_limits": { 00:40:05.276 "rw_ios_per_sec": 0, 00:40:05.276 "rw_mbytes_per_sec": 0, 00:40:05.276 "r_mbytes_per_sec": 0, 00:40:05.276 "w_mbytes_per_sec": 0 00:40:05.276 }, 00:40:05.276 "claimed": false, 00:40:05.276 "zoned": false, 
00:40:05.276 "supported_io_types": { 00:40:05.276 "read": true, 00:40:05.276 "write": true, 00:40:05.276 "unmap": true, 00:40:05.276 "flush": true, 00:40:05.276 "reset": true, 00:40:05.276 "nvme_admin": true, 00:40:05.276 "nvme_io": true, 00:40:05.276 "nvme_io_md": false, 00:40:05.276 "write_zeroes": true, 00:40:05.276 "zcopy": false, 00:40:05.276 "get_zone_info": false, 00:40:05.276 "zone_management": false, 00:40:05.276 "zone_append": false, 00:40:05.276 "compare": true, 00:40:05.276 "compare_and_write": true, 00:40:05.276 "abort": true, 00:40:05.276 "seek_hole": false, 00:40:05.276 "seek_data": false, 00:40:05.276 "copy": true, 00:40:05.276 "nvme_iov_md": false 00:40:05.276 }, 00:40:05.276 "memory_domains": [ 00:40:05.276 { 00:40:05.276 "dma_device_id": "system", 00:40:05.276 "dma_device_type": 1 00:40:05.276 } 00:40:05.276 ], 00:40:05.276 "driver_specific": { 00:40:05.277 "nvme": [ 00:40:05.277 { 00:40:05.277 "trid": { 00:40:05.277 "trtype": "TCP", 00:40:05.277 "adrfam": "IPv4", 00:40:05.277 "traddr": "10.0.0.2", 00:40:05.277 "trsvcid": "4420", 00:40:05.277 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:40:05.277 }, 00:40:05.277 "ctrlr_data": { 00:40:05.277 "cntlid": 1, 00:40:05.277 "vendor_id": "0x8086", 00:40:05.277 "model_number": "SPDK bdev Controller", 00:40:05.277 "serial_number": "SPDK0", 00:40:05.277 "firmware_revision": "25.01", 00:40:05.277 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:05.277 "oacs": { 00:40:05.277 "security": 0, 00:40:05.277 "format": 0, 00:40:05.277 "firmware": 0, 00:40:05.277 "ns_manage": 0 00:40:05.277 }, 00:40:05.277 "multi_ctrlr": true, 00:40:05.277 "ana_reporting": false 00:40:05.277 }, 00:40:05.277 "vs": { 00:40:05.277 "nvme_version": "1.3" 00:40:05.277 }, 00:40:05.277 "ns_data": { 00:40:05.277 "id": 1, 00:40:05.277 "can_share": true 00:40:05.277 } 00:40:05.277 } 00:40:05.277 ], 00:40:05.277 "mp_policy": "active_passive" 00:40:05.277 } 00:40:05.277 } 00:40:05.277 ] 00:40:05.277 03:00:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3168952 00:40:05.277 03:00:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:40:05.277 03:00:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:40:05.277 Running I/O for 10 seconds... 00:40:06.218 Latency(us) 00:40:06.218 [2024-11-17T02:00:14.678Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:06.218 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:06.218 Nvme0n1 : 1.00 10351.00 40.43 0.00 0.00 0.00 0.00 0.00 00:40:06.218 [2024-11-17T02:00:14.678Z] =================================================================================================================== 00:40:06.218 [2024-11-17T02:00:14.678Z] Total : 10351.00 40.43 0.00 0.00 0.00 0.00 0.00 00:40:06.218 00:40:07.152 03:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a5dcfbf8-6863-4246-88e4-5a1cd28af055 00:40:07.152 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:07.152 Nvme0n1 : 2.00 10477.50 40.93 0.00 0.00 0.00 0.00 0.00 00:40:07.152 [2024-11-17T02:00:15.612Z] =================================================================================================================== 00:40:07.152 [2024-11-17T02:00:15.612Z] Total : 10477.50 40.93 0.00 0.00 0.00 0.00 0.00 00:40:07.152 00:40:07.410 true 00:40:07.410 03:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u a5dcfbf8-6863-4246-88e4-5a1cd28af055 00:40:07.410 03:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:40:07.668 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:40:07.668 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:40:07.668 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3168952 00:40:08.235 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:08.235 Nvme0n1 : 3.00 10498.67 41.01 0.00 0.00 0.00 0.00 0.00 00:40:08.235 [2024-11-17T02:00:16.695Z] =================================================================================================================== 00:40:08.235 [2024-11-17T02:00:16.695Z] Total : 10498.67 41.01 0.00 0.00 0.00 0.00 0.00 00:40:08.235 00:40:09.170 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:09.170 Nvme0n1 : 4.00 10572.75 41.30 0.00 0.00 0.00 0.00 0.00 00:40:09.170 [2024-11-17T02:00:17.630Z] =================================================================================================================== 00:40:09.170 [2024-11-17T02:00:17.630Z] Total : 10572.75 41.30 0.00 0.00 0.00 0.00 0.00 00:40:09.170 00:40:10.545 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:10.545 Nvme0n1 : 5.00 10591.80 41.37 0.00 0.00 0.00 0.00 0.00 00:40:10.545 [2024-11-17T02:00:19.005Z] =================================================================================================================== 00:40:10.545 [2024-11-17T02:00:19.005Z] Total : 10591.80 41.37 0.00 0.00 0.00 0.00 0.00 00:40:10.545 00:40:11.479 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:40:11.479 Nvme0n1 : 6.00 10625.67 41.51 0.00 0.00 0.00 0.00 0.00 00:40:11.479 [2024-11-17T02:00:19.939Z] =================================================================================================================== 00:40:11.479 [2024-11-17T02:00:19.939Z] Total : 10625.67 41.51 0.00 0.00 0.00 0.00 0.00 00:40:11.479 00:40:12.412 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:12.412 Nvme0n1 : 7.00 10631.71 41.53 0.00 0.00 0.00 0.00 0.00 00:40:12.412 [2024-11-17T02:00:20.872Z] =================================================================================================================== 00:40:12.412 [2024-11-17T02:00:20.872Z] Total : 10631.71 41.53 0.00 0.00 0.00 0.00 0.00 00:40:12.412 00:40:13.348 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:13.348 Nvme0n1 : 8.00 10652.12 41.61 0.00 0.00 0.00 0.00 0.00 00:40:13.348 [2024-11-17T02:00:21.808Z] =================================================================================================================== 00:40:13.348 [2024-11-17T02:00:21.808Z] Total : 10652.12 41.61 0.00 0.00 0.00 0.00 0.00 00:40:13.348 00:40:14.283 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:14.283 Nvme0n1 : 9.00 10668.00 41.67 0.00 0.00 0.00 0.00 0.00 00:40:14.283 [2024-11-17T02:00:22.743Z] =================================================================================================================== 00:40:14.283 [2024-11-17T02:00:22.743Z] Total : 10668.00 41.67 0.00 0.00 0.00 0.00 0.00 00:40:14.283 00:40:15.217 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:15.217 Nvme0n1 : 10.00 10680.70 41.72 0.00 0.00 0.00 0.00 0.00 00:40:15.217 [2024-11-17T02:00:23.677Z] =================================================================================================================== 00:40:15.217 [2024-11-17T02:00:23.677Z] Total : 10680.70 41.72 0.00 0.00 0.00 0.00 0.00 00:40:15.217 00:40:15.217 
00:40:15.217 Latency(us) 00:40:15.217 [2024-11-17T02:00:23.677Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:15.217 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:15.217 Nvme0n1 : 10.01 10685.02 41.74 0.00 0.00 11972.22 10048.85 26214.40 00:40:15.217 [2024-11-17T02:00:23.677Z] =================================================================================================================== 00:40:15.217 [2024-11-17T02:00:23.677Z] Total : 10685.02 41.74 0.00 0.00 11972.22 10048.85 26214.40 00:40:15.217 { 00:40:15.217 "results": [ 00:40:15.217 { 00:40:15.217 "job": "Nvme0n1", 00:40:15.217 "core_mask": "0x2", 00:40:15.217 "workload": "randwrite", 00:40:15.217 "status": "finished", 00:40:15.217 "queue_depth": 128, 00:40:15.217 "io_size": 4096, 00:40:15.217 "runtime": 10.007939, 00:40:15.217 "iops": 10685.017164872808, 00:40:15.217 "mibps": 41.73834830028441, 00:40:15.217 "io_failed": 0, 00:40:15.217 "io_timeout": 0, 00:40:15.217 "avg_latency_us": 11972.217111329313, 00:40:15.217 "min_latency_us": 10048.853333333333, 00:40:15.217 "max_latency_us": 26214.4 00:40:15.217 } 00:40:15.217 ], 00:40:15.217 "core_count": 1 00:40:15.217 } 00:40:15.217 03:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3168805 00:40:15.217 03:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 3168805 ']' 00:40:15.217 03:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 3168805 00:40:15.217 03:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:40:15.218 03:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:15.218 03:00:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3168805 00:40:15.218 03:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:40:15.218 03:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:40:15.218 03:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3168805' 00:40:15.218 killing process with pid 3168805 00:40:15.218 03:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 3168805 00:40:15.218 Received shutdown signal, test time was about 10.000000 seconds 00:40:15.218 00:40:15.218 Latency(us) 00:40:15.218 [2024-11-17T02:00:23.678Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:15.218 [2024-11-17T02:00:23.678Z] =================================================================================================================== 00:40:15.218 [2024-11-17T02:00:23.678Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:15.218 03:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 3168805 00:40:16.152 03:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:16.717 03:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:16.976 03:00:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5dcfbf8-6863-4246-88e4-5a1cd28af055 00:40:16.976 03:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:40:17.234 03:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:40:17.234 03:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:40:17.234 03:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3165330 00:40:17.234 03:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3165330 00:40:17.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3165330 Killed "${NVMF_APP[@]}" "$@" 00:40:17.234 03:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:40:17.234 03:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:40:17.234 03:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:17.234 03:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:17.234 03:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:40:17.234 03:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=3170395 00:40:17.234 03:00:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:40:17.234 03:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 3170395 00:40:17.234 03:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3170395 ']' 00:40:17.234 03:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:17.234 03:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:17.234 03:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:17.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:17.234 03:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:17.234 03:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:40:17.234 [2024-11-17 03:00:25.665084] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:17.234 [2024-11-17 03:00:25.667749] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:40:17.234 [2024-11-17 03:00:25.667861] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:17.492 [2024-11-17 03:00:25.816569] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:17.492 [2024-11-17 03:00:25.930802] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:17.492 [2024-11-17 03:00:25.930883] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:17.492 [2024-11-17 03:00:25.930907] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:17.492 [2024-11-17 03:00:25.930925] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:17.492 [2024-11-17 03:00:25.930943] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:17.492 [2024-11-17 03:00:25.932464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:18.058 [2024-11-17 03:00:26.292506] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:18.058 [2024-11-17 03:00:26.292979] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
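[editor's note] The bdevperf JSON results earlier in the log (`iops`, `mibps`, `avg_latency_us` for the `-q 128 -o 4096` randwrite run) are internally consistent: throughput in MiB/s is IOPS times IO size, and the average latency is close to the Little's-law estimate `queue_depth / IOPS` for a saturated queue. A sketch using the reported numbers:

```python
# Figures copied verbatim from the bdevperf "results" JSON block in this log
iops = 10685.017164872808
io_size = 4096            # -o 4096
queue_depth = 128         # -q 128
avg_latency_us = 11972.217111329313

mibps = iops * io_size / 2**20
print(round(mibps, 2))    # 41.74, matches "mibps" in the results

# Little's law estimate for a fully occupied queue of depth 128
est_latency_us = queue_depth / iops * 1e6
print(round(est_latency_us))  # ~11979, within 0.1% of the reported average
```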
00:40:18.317 03:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:18.317 03:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:40:18.317 03:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:18.317 03:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:18.317 03:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:40:18.317 03:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:18.317 03:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:40:18.575 [2024-11-17 03:00:26.952346] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:40:18.575 [2024-11-17 03:00:26.952585] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:40:18.575 [2024-11-17 03:00:26.952659] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:40:18.575 03:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:40:18.575 03:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 8a24061a-2847-4d87-966b-e696eb2ec773 00:40:18.575 03:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local 
bdev_name=8a24061a-2847-4d87-966b-e696eb2ec773 00:40:18.575 03:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:40:18.575 03:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:40:18.575 03:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:40:18.575 03:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:40:18.575 03:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:40:18.834 03:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 8a24061a-2847-4d87-966b-e696eb2ec773 -t 2000 00:40:19.092 [ 00:40:19.092 { 00:40:19.092 "name": "8a24061a-2847-4d87-966b-e696eb2ec773", 00:40:19.092 "aliases": [ 00:40:19.092 "lvs/lvol" 00:40:19.092 ], 00:40:19.092 "product_name": "Logical Volume", 00:40:19.092 "block_size": 4096, 00:40:19.092 "num_blocks": 38912, 00:40:19.092 "uuid": "8a24061a-2847-4d87-966b-e696eb2ec773", 00:40:19.092 "assigned_rate_limits": { 00:40:19.092 "rw_ios_per_sec": 0, 00:40:19.092 "rw_mbytes_per_sec": 0, 00:40:19.092 "r_mbytes_per_sec": 0, 00:40:19.092 "w_mbytes_per_sec": 0 00:40:19.092 }, 00:40:19.092 "claimed": false, 00:40:19.092 "zoned": false, 00:40:19.092 "supported_io_types": { 00:40:19.092 "read": true, 00:40:19.092 "write": true, 00:40:19.092 "unmap": true, 00:40:19.092 "flush": false, 00:40:19.092 "reset": true, 00:40:19.092 "nvme_admin": false, 00:40:19.092 "nvme_io": false, 00:40:19.092 "nvme_io_md": false, 00:40:19.092 "write_zeroes": true, 
00:40:19.092 "zcopy": false, 00:40:19.092 "get_zone_info": false, 00:40:19.092 "zone_management": false, 00:40:19.092 "zone_append": false, 00:40:19.092 "compare": false, 00:40:19.092 "compare_and_write": false, 00:40:19.092 "abort": false, 00:40:19.092 "seek_hole": true, 00:40:19.092 "seek_data": true, 00:40:19.092 "copy": false, 00:40:19.092 "nvme_iov_md": false 00:40:19.092 }, 00:40:19.092 "driver_specific": { 00:40:19.092 "lvol": { 00:40:19.092 "lvol_store_uuid": "a5dcfbf8-6863-4246-88e4-5a1cd28af055", 00:40:19.092 "base_bdev": "aio_bdev", 00:40:19.092 "thin_provision": false, 00:40:19.092 "num_allocated_clusters": 38, 00:40:19.092 "snapshot": false, 00:40:19.092 "clone": false, 00:40:19.092 "esnap_clone": false 00:40:19.092 } 00:40:19.092 } 00:40:19.092 } 00:40:19.092 ] 00:40:19.092 03:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:40:19.092 03:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5dcfbf8-6863-4246-88e4-5a1cd28af055 00:40:19.092 03:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:40:19.351 03:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:40:19.351 03:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5dcfbf8-6863-4246-88e4-5a1cd28af055 00:40:19.351 03:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:40:19.916 03:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:40:19.916 03:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:40:20.175 [2024-11-17 03:00:28.421452] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:40:20.175 03:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5dcfbf8-6863-4246-88e4-5a1cd28af055 00:40:20.175 03:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:40:20.175 03:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5dcfbf8-6863-4246-88e4-5a1cd28af055 00:40:20.175 03:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:20.175 03:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:20.175 03:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:20.175 03:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:20.175 03:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:20.175 03:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:20.175 03:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:20.175 03:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:40:20.175 03:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5dcfbf8-6863-4246-88e4-5a1cd28af055 00:40:20.433 request: 00:40:20.433 { 00:40:20.433 "uuid": "a5dcfbf8-6863-4246-88e4-5a1cd28af055", 00:40:20.433 "method": "bdev_lvol_get_lvstores", 00:40:20.433 "req_id": 1 00:40:20.433 } 00:40:20.433 Got JSON-RPC error response 00:40:20.433 response: 00:40:20.433 { 00:40:20.433 "code": -19, 00:40:20.433 "message": "No such device" 00:40:20.433 } 00:40:20.433 03:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:40:20.433 03:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:40:20.433 03:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:40:20.433 03:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:40:20.433 03:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:40:20.692 aio_bdev 00:40:20.692 03:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 8a24061a-2847-4d87-966b-e696eb2ec773 00:40:20.692 03:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=8a24061a-2847-4d87-966b-e696eb2ec773 00:40:20.692 03:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:40:20.692 03:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:40:20.692 03:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:40:20.692 03:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:40:20.692 03:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:40:20.948 03:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 8a24061a-2847-4d87-966b-e696eb2ec773 -t 2000 00:40:21.206 [ 00:40:21.206 { 00:40:21.206 "name": "8a24061a-2847-4d87-966b-e696eb2ec773", 00:40:21.206 "aliases": [ 00:40:21.206 "lvs/lvol" 00:40:21.206 ], 00:40:21.206 "product_name": "Logical Volume", 00:40:21.206 "block_size": 4096, 00:40:21.206 "num_blocks": 38912, 00:40:21.206 "uuid": "8a24061a-2847-4d87-966b-e696eb2ec773", 00:40:21.206 "assigned_rate_limits": { 00:40:21.206 "rw_ios_per_sec": 0, 00:40:21.206 "rw_mbytes_per_sec": 0, 00:40:21.206 
"r_mbytes_per_sec": 0, 00:40:21.206 "w_mbytes_per_sec": 0 00:40:21.206 }, 00:40:21.206 "claimed": false, 00:40:21.206 "zoned": false, 00:40:21.206 "supported_io_types": { 00:40:21.206 "read": true, 00:40:21.206 "write": true, 00:40:21.206 "unmap": true, 00:40:21.206 "flush": false, 00:40:21.206 "reset": true, 00:40:21.206 "nvme_admin": false, 00:40:21.206 "nvme_io": false, 00:40:21.206 "nvme_io_md": false, 00:40:21.206 "write_zeroes": true, 00:40:21.206 "zcopy": false, 00:40:21.206 "get_zone_info": false, 00:40:21.206 "zone_management": false, 00:40:21.206 "zone_append": false, 00:40:21.206 "compare": false, 00:40:21.206 "compare_and_write": false, 00:40:21.206 "abort": false, 00:40:21.206 "seek_hole": true, 00:40:21.206 "seek_data": true, 00:40:21.206 "copy": false, 00:40:21.206 "nvme_iov_md": false 00:40:21.206 }, 00:40:21.206 "driver_specific": { 00:40:21.206 "lvol": { 00:40:21.206 "lvol_store_uuid": "a5dcfbf8-6863-4246-88e4-5a1cd28af055", 00:40:21.206 "base_bdev": "aio_bdev", 00:40:21.206 "thin_provision": false, 00:40:21.206 "num_allocated_clusters": 38, 00:40:21.206 "snapshot": false, 00:40:21.206 "clone": false, 00:40:21.206 "esnap_clone": false 00:40:21.206 } 00:40:21.206 } 00:40:21.206 } 00:40:21.206 ] 00:40:21.206 03:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:40:21.206 03:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5dcfbf8-6863-4246-88e4-5a1cd28af055 00:40:21.206 03:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:40:21.464 03:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:40:21.464 03:00:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5dcfbf8-6863-4246-88e4-5a1cd28af055 00:40:21.464 03:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:40:22.030 03:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:40:22.030 03:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8a24061a-2847-4d87-966b-e696eb2ec773 00:40:22.031 03:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a5dcfbf8-6863-4246-88e4-5a1cd28af055 00:40:22.599 03:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:40:22.599 03:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:22.599 00:40:22.599 real 0m22.374s 00:40:22.599 user 0m39.499s 00:40:22.599 sys 0m4.763s 00:40:22.599 03:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:22.599 03:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:40:22.599 ************************************ 00:40:22.599 END TEST lvs_grow_dirty 00:40:22.599 ************************************ 
00:40:22.876 03:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:40:22.876 03:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:40:22.876 03:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:40:22.876 03:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:40:22.876 03:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:40:22.876 03:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:40:22.876 03:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:40:22.876 03:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:40:22.876 03:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:40:22.876 nvmf_trace.0 00:40:22.876 03:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:40:22.876 03:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:40:22.876 03:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:22.876 03:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:40:22.876 03:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:22.876 03:00:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:40:22.876 03:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:22.876 03:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:22.876 rmmod nvme_tcp 00:40:22.876 rmmod nvme_fabrics 00:40:22.876 rmmod nvme_keyring 00:40:22.876 03:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:22.876 03:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:40:22.876 03:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:40:22.876 03:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 3170395 ']' 00:40:22.876 03:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 3170395 00:40:22.876 03:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 3170395 ']' 00:40:22.876 03:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 3170395 00:40:22.876 03:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:40:22.876 03:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:22.876 03:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3170395 00:40:22.876 03:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:22.876 03:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:22.876 
03:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3170395' 00:40:22.876 killing process with pid 3170395 00:40:22.876 03:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 3170395 00:40:22.876 03:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 3170395 00:40:24.256 03:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:24.256 03:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:24.256 03:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:24.256 03:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:40:24.256 03:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:40:24.256 03:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:24.256 03:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:40:24.256 03:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:24.256 03:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:24.256 03:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:24.256 03:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:24.256 03:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:26.161 
03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:26.161 00:40:26.161 real 0m48.652s 00:40:26.161 user 1m1.744s 00:40:26.161 sys 0m8.643s 00:40:26.161 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:26.162 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:40:26.162 ************************************ 00:40:26.162 END TEST nvmf_lvs_grow 00:40:26.162 ************************************ 00:40:26.162 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:40:26.162 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:26.162 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:26.162 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:26.162 ************************************ 00:40:26.162 START TEST nvmf_bdev_io_wait 00:40:26.162 ************************************ 00:40:26.162 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:40:26.162 * Looking for test storage... 
00:40:26.162 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:26.162 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:40:26.162 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:40:26.162 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:40:26.162 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:40:26.162 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:26.162 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:26.162 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:26.162 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:40:26.162 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:40:26.162 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:40:26.162 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:40:26.162 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:40:26.162 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:40:26.162 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:40:26.162 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:40:26.162 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:40:26.162 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:40:26.162 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:26.162 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:26.162 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:40:26.162 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:40:26.162 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:26.162 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:40:26.162 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:40:26.162 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:40:26.162 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:40:26.162 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:26.162 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:40:26.162 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:40:26.162 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:26.162 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:26.162 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:40:26.162 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:26.162 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:40:26.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:26.162 --rc genhtml_branch_coverage=1 00:40:26.162 --rc genhtml_function_coverage=1 00:40:26.162 --rc genhtml_legend=1 00:40:26.162 --rc geninfo_all_blocks=1 00:40:26.162 --rc geninfo_unexecuted_blocks=1 00:40:26.162 00:40:26.162 ' 00:40:26.162 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:40:26.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:26.162 --rc genhtml_branch_coverage=1 00:40:26.162 --rc genhtml_function_coverage=1 00:40:26.162 --rc genhtml_legend=1 00:40:26.162 --rc geninfo_all_blocks=1 00:40:26.162 --rc geninfo_unexecuted_blocks=1 00:40:26.162 00:40:26.162 ' 00:40:26.162 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:40:26.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:26.162 --rc genhtml_branch_coverage=1 00:40:26.162 --rc genhtml_function_coverage=1 00:40:26.162 --rc genhtml_legend=1 00:40:26.162 --rc geninfo_all_blocks=1 00:40:26.162 --rc geninfo_unexecuted_blocks=1 00:40:26.162 00:40:26.162 ' 00:40:26.162 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:40:26.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:26.162 --rc genhtml_branch_coverage=1 00:40:26.162 --rc genhtml_function_coverage=1 
00:40:26.162 --rc genhtml_legend=1 00:40:26.162 --rc geninfo_all_blocks=1 00:40:26.162 --rc geninfo_unexecuted_blocks=1 00:40:26.162 00:40:26.162 ' 00:40:26.162 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:26.162 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:40:26.162 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:26.162 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:26.162 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:26.162 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:26.162 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:26.162 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:26.162 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:26.162 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:26.162 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:26.162 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:26.162 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:26.162 03:00:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:26.162 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:26.162 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:26.162 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:26.162 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:26.162 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:26.162 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:40:26.162 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:26.162 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:26.162 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:26.163 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:26.163 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:26.163 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:26.163 03:00:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:40:26.163 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:26.163 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:40:26.163 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:26.163 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:26.163 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:26.163 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:26.163 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:26.163 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:26.163 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:26.422 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:26.422 03:00:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:26.422 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:26.422 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:26.422 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:26.422 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:40:26.422 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:26.422 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:26.422 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:26.422 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:26.422 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:26.422 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:26.422 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:26.422 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:26.422 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:26.422 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:26.422 03:00:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:40:26.422 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:28.325 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:28.325 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:40:28.325 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:28.325 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:28.325 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:28.325 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:28.325 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:28.325 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:40:28.325 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:28.325 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:40:28.325 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:40:28.325 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:40:28.325 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:40:28.325 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:40:28.325 03:00:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:40:28.325 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:28.325 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:28.325 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:28.325 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:28.325 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:28.325 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:28.325 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:28.325 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:28.325 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:28.325 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:28.325 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:28.325 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:28.325 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:28.325 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:28.325 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:28.325 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:28.325 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:28.325 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:28.325 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:28.325 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:40:28.325 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:40:28.325 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:28.325 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:28.325 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:28.325 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:28.325 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:28.325 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:28.325 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:40:28.325 Found 
0000:0a:00.1 (0x8086 - 0x159b) 00:40:28.325 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:28.325 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:28.325 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:28.325 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:28.325 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:28.325 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:28.325 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:28.325 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:28.325 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:28.325 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:28.325 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:28.326 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:28.326 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:28.326 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:28.326 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:28.326 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:40:28.326 Found net devices under 0000:0a:00.0: cvl_0_0 00:40:28.326 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:28.326 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:28.326 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:28.326 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:28.326 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:28.326 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:28.326 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:28.326 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:28.326 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:40:28.326 Found net devices under 0000:0a:00.1: cvl_0_1 00:40:28.326 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:28.326 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:28.326 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:40:28.326 03:00:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:28.326 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:28.326 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:28.326 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:28.326 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:28.326 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:28.326 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:28.326 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:28.326 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:28.326 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:28.326 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:28.326 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:28.326 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:28.326 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:28.326 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:40:28.326 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:28.326 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:28.326 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:28.326 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:28.326 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:28.326 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:28.326 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:28.326 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:28.326 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:28.326 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:28.326 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:28.326 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:40:28.326 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.283 ms 00:40:28.326 00:40:28.326 --- 10.0.0.2 ping statistics --- 00:40:28.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:28.326 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:40:28.326 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:28.326 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:28.326 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:40:28.326 00:40:28.326 --- 10.0.0.1 ping statistics --- 00:40:28.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:28.326 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:40:28.326 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:28.326 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:40:28.326 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:28.326 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:28.326 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:28.326 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:28.326 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:28.326 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:28.326 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:28.585 03:00:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:40:28.585 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:28.585 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:28.585 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:28.585 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=3173179 00:40:28.585 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:40:28.585 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 3173179 00:40:28.585 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 3173179 ']' 00:40:28.585 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:28.585 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:28.585 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:28.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:40:28.585 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:28.585 03:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:28.585 [2024-11-17 03:00:36.889926] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:28.585 [2024-11-17 03:00:36.892456] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:40:28.585 [2024-11-17 03:00:36.892567] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:28.585 [2024-11-17 03:00:37.043001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:28.844 [2024-11-17 03:00:37.185264] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:28.844 [2024-11-17 03:00:37.185338] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:28.844 [2024-11-17 03:00:37.185366] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:28.844 [2024-11-17 03:00:37.185387] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:28.844 [2024-11-17 03:00:37.185408] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:40:28.844 [2024-11-17 03:00:37.188144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:28.844 [2024-11-17 03:00:37.188178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:40:28.844 [2024-11-17 03:00:37.188249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:28.844 [2024-11-17 03:00:37.188272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:40:28.844 [2024-11-17 03:00:37.188963] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:29.412 03:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:29.412 03:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:40:29.412 03:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:29.412 03:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:29.412 03:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:29.412 03:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:29.412 03:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:40:29.412 03:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:29.412 03:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:29.671 03:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:29.671 03:00:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:40:29.671 03:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:29.671 03:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:29.671 [2024-11-17 03:00:38.123611] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:29.671 [2024-11-17 03:00:38.124749] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:29.671 [2024-11-17 03:00:38.126013] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:40:29.671 [2024-11-17 03:00:38.127116] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:40:29.671 03:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:29.671 03:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:29.671 03:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:29.671 03:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:29.930 [2024-11-17 03:00:38.133354] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:29.931 03:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:29.931 03:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:40:29.931 03:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:29.931 03:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:29.931 Malloc0 00:40:29.931 03:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:29.931 03:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:40:29.931 03:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:29.931 03:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:29.931 03:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:29.931 03:00:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:29.931 03:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:29.931 03:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:29.931 03:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:29.931 03:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:29.931 03:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:29.931 03:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:29.931 [2024-11-17 03:00:38.261620] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:29.931 03:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:29.931 03:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3173331 00:40:29.931 03:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3173333 00:40:29.931 03:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:40:29.931 03:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:40:29.931 03:00:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:40:29.931 03:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:40:29.931 03:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3173335 00:40:29.931 03:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:29.931 03:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:40:29.931 03:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:40:29.931 03:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:29.931 { 00:40:29.931 "params": { 00:40:29.931 "name": "Nvme$subsystem", 00:40:29.931 "trtype": "$TEST_TRANSPORT", 00:40:29.931 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:29.931 "adrfam": "ipv4", 00:40:29.931 "trsvcid": "$NVMF_PORT", 00:40:29.931 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:29.931 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:29.931 "hdgst": ${hdgst:-false}, 00:40:29.931 "ddgst": ${ddgst:-false} 00:40:29.931 }, 00:40:29.931 "method": "bdev_nvme_attach_controller" 00:40:29.931 } 00:40:29.931 EOF 00:40:29.931 )") 00:40:29.931 03:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:40:29.931 03:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:40:29.931 03:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3173337 00:40:29.931 03:00:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:29.931 03:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:40:29.931 03:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:29.931 { 00:40:29.931 "params": { 00:40:29.931 "name": "Nvme$subsystem", 00:40:29.931 "trtype": "$TEST_TRANSPORT", 00:40:29.931 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:29.931 "adrfam": "ipv4", 00:40:29.931 "trsvcid": "$NVMF_PORT", 00:40:29.931 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:29.931 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:29.931 "hdgst": ${hdgst:-false}, 00:40:29.931 "ddgst": ${ddgst:-false} 00:40:29.931 }, 00:40:29.931 "method": "bdev_nvme_attach_controller" 00:40:29.931 } 00:40:29.931 EOF 00:40:29.931 )") 00:40:29.931 03:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:40:29.931 03:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:40:29.931 03:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:40:29.931 03:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:40:29.931 03:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:40:29.931 03:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:40:29.931 03:00:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:29.931 03:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:40:29.931 03:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:40:29.931 03:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:29.931 { 00:40:29.931 "params": { 00:40:29.931 "name": "Nvme$subsystem", 00:40:29.931 "trtype": "$TEST_TRANSPORT", 00:40:29.931 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:29.931 "adrfam": "ipv4", 00:40:29.931 "trsvcid": "$NVMF_PORT", 00:40:29.931 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:29.931 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:29.931 "hdgst": ${hdgst:-false}, 00:40:29.931 "ddgst": ${ddgst:-false} 00:40:29.931 }, 00:40:29.931 "method": "bdev_nvme_attach_controller" 00:40:29.931 } 00:40:29.931 EOF 00:40:29.931 )") 00:40:29.931 03:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:40:29.931 03:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:29.931 03:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:29.931 { 00:40:29.931 "params": { 00:40:29.931 "name": "Nvme$subsystem", 00:40:29.931 "trtype": "$TEST_TRANSPORT", 00:40:29.931 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:29.931 "adrfam": "ipv4", 00:40:29.931 "trsvcid": "$NVMF_PORT", 00:40:29.931 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:29.931 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:29.931 "hdgst": ${hdgst:-false}, 00:40:29.931 "ddgst": ${ddgst:-false} 00:40:29.931 }, 00:40:29.931 "method": "bdev_nvme_attach_controller" 00:40:29.931 } 00:40:29.931 EOF 00:40:29.931 )") 00:40:29.931 
03:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:40:29.931 03:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:40:29.931 03:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3173331 00:40:29.931 03:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:40:29.931 03:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:40:29.931 03:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:40:29.931 03:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:40:29.931 03:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:40:29.931 03:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:40:29.931 03:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:29.931 "params": { 00:40:29.931 "name": "Nvme1", 00:40:29.931 "trtype": "tcp", 00:40:29.931 "traddr": "10.0.0.2", 00:40:29.931 "adrfam": "ipv4", 00:40:29.931 "trsvcid": "4420", 00:40:29.931 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:29.931 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:29.931 "hdgst": false, 00:40:29.931 "ddgst": false 00:40:29.931 }, 00:40:29.931 "method": "bdev_nvme_attach_controller" 00:40:29.931 }' 00:40:29.931 03:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:40:29.931 03:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:29.931 "params": { 00:40:29.931 "name": "Nvme1", 00:40:29.931 "trtype": "tcp", 00:40:29.931 "traddr": "10.0.0.2", 00:40:29.931 "adrfam": "ipv4", 00:40:29.931 "trsvcid": "4420", 00:40:29.931 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:29.931 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:29.931 "hdgst": false, 00:40:29.931 "ddgst": false 00:40:29.932 }, 00:40:29.932 "method": "bdev_nvme_attach_controller" 00:40:29.932 }' 00:40:29.932 03:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:40:29.932 03:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:29.932 "params": { 00:40:29.932 "name": "Nvme1", 00:40:29.932 "trtype": "tcp", 00:40:29.932 "traddr": "10.0.0.2", 00:40:29.932 "adrfam": "ipv4", 00:40:29.932 "trsvcid": "4420", 00:40:29.932 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:29.932 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:29.932 "hdgst": false, 00:40:29.932 "ddgst": false 00:40:29.932 }, 00:40:29.932 "method": "bdev_nvme_attach_controller" 00:40:29.932 }' 00:40:29.932 03:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 
-- # IFS=, 00:40:29.932 03:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:29.932 "params": { 00:40:29.932 "name": "Nvme1", 00:40:29.932 "trtype": "tcp", 00:40:29.932 "traddr": "10.0.0.2", 00:40:29.932 "adrfam": "ipv4", 00:40:29.932 "trsvcid": "4420", 00:40:29.932 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:29.932 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:29.932 "hdgst": false, 00:40:29.932 "ddgst": false 00:40:29.932 }, 00:40:29.932 "method": "bdev_nvme_attach_controller" 00:40:29.932 }' 00:40:29.932 [2024-11-17 03:00:38.351200] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:40:29.932 [2024-11-17 03:00:38.351190] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:40:29.932 [2024-11-17 03:00:38.351190] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:40:29.932 [2024-11-17 03:00:38.351327] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:40:29.932 [2024-11-17 03:00:38.351327] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:40:29.932 [2024-11-17 03:00:38.351327] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:40:29.932 [2024-11-17 03:00:38.352017] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:40:29.932 [2024-11-17 03:00:38.352171] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:40:30.190 [2024-11-17 03:00:38.607420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:30.449 [2024-11-17 03:00:38.707608] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:30.449 [2024-11-17 03:00:38.727963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:40:30.449 [2024-11-17 03:00:38.806204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:30.449 [2024-11-17 03:00:38.826850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:40:30.449 [2024-11-17 03:00:38.872838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:30.707 [2024-11-17 03:00:38.927544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:40:30.707 [2024-11-17 03:00:38.990695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:40:30.965 Running I/O for 1 seconds... 00:40:30.965 Running I/O for 1 seconds... 00:40:30.965 Running I/O for 1 seconds... 00:40:31.223 Running I/O for 1 seconds... 
00:40:32.159 8284.00 IOPS, 32.36 MiB/s 00:40:32.159 Latency(us) 00:40:32.159 [2024-11-17T02:00:40.619Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:32.159 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:40:32.159 Nvme1n1 : 1.01 8345.13 32.60 0.00 0.00 15266.26 6747.78 20194.80 00:40:32.159 [2024-11-17T02:00:40.619Z] =================================================================================================================== 00:40:32.159 [2024-11-17T02:00:40.619Z] Total : 8345.13 32.60 0.00 0.00 15266.26 6747.78 20194.80 00:40:32.159 153320.00 IOPS, 598.91 MiB/s 00:40:32.159 Latency(us) 00:40:32.159 [2024-11-17T02:00:40.619Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:32.159 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:40:32.159 Nvme1n1 : 1.00 152997.53 597.65 0.00 0.00 832.33 388.36 2087.44 00:40:32.159 [2024-11-17T02:00:40.619Z] =================================================================================================================== 00:40:32.159 [2024-11-17T02:00:40.619Z] Total : 152997.53 597.65 0.00 0.00 832.33 388.36 2087.44 00:40:32.159 7239.00 IOPS, 28.28 MiB/s 00:40:32.159 Latency(us) 00:40:32.159 [2024-11-17T02:00:40.619Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:32.159 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:40:32.159 Nvme1n1 : 1.01 7275.47 28.42 0.00 0.00 17480.37 6213.78 24175.50 00:40:32.159 [2024-11-17T02:00:40.619Z] =================================================================================================================== 00:40:32.159 [2024-11-17T02:00:40.619Z] Total : 7275.47 28.42 0.00 0.00 17480.37 6213.78 24175.50 00:40:32.159 7206.00 IOPS, 28.15 MiB/s 00:40:32.159 Latency(us) 00:40:32.159 [2024-11-17T02:00:40.619Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:32.159 Job: Nvme1n1 (Core Mask 0x80, 
workload: unmap, depth: 128, IO size: 4096) 00:40:32.159 Nvme1n1 : 1.01 7286.21 28.46 0.00 0.00 17488.65 3640.89 27573.67 00:40:32.159 [2024-11-17T02:00:40.619Z] =================================================================================================================== 00:40:32.159 [2024-11-17T02:00:40.619Z] Total : 7286.21 28.46 0.00 0.00 17488.65 3640.89 27573.67 00:40:32.726 03:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3173333 00:40:32.726 03:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3173335 00:40:32.726 03:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3173337 00:40:32.726 03:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:32.726 03:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:32.726 03:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:32.726 03:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:32.726 03:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:40:32.726 03:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:40:32.726 03:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:32.726 03:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:40:32.726 03:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:32.726 03:00:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:40:32.726 03:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:32.726 03:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:32.726 rmmod nvme_tcp 00:40:32.726 rmmod nvme_fabrics 00:40:32.726 rmmod nvme_keyring 00:40:32.726 03:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:32.726 03:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:40:32.726 03:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:40:32.726 03:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 3173179 ']' 00:40:32.726 03:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 3173179 00:40:32.726 03:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 3173179 ']' 00:40:32.726 03:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 3173179 00:40:32.726 03:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:40:32.726 03:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:32.726 03:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3173179 00:40:32.984 03:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:32.984 03:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:32.984 03:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3173179' 00:40:32.984 killing process with pid 3173179 00:40:32.984 03:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 3173179 00:40:32.984 03:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 3173179 00:40:33.920 03:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:33.920 03:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:33.920 03:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:33.920 03:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:40:33.920 03:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:40:33.920 03:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:33.920 03:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:40:33.920 03:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:33.920 03:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:33.920 03:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:33.920 03:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:33.920 
03:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:36.458 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:36.458 00:40:36.458 real 0m9.854s 00:40:36.458 user 0m21.934s 00:40:36.458 sys 0m4.848s 00:40:36.458 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:36.458 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:36.458 ************************************ 00:40:36.458 END TEST nvmf_bdev_io_wait 00:40:36.458 ************************************ 00:40:36.458 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:40:36.458 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:36.458 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:36.458 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:36.458 ************************************ 00:40:36.458 START TEST nvmf_queue_depth 00:40:36.458 ************************************ 00:40:36.458 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:40:36.458 * Looking for test storage... 
00:40:36.458 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:36.458 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:40:36.458 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:40:36.458 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:40:36.458 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:40:36.458 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:36.458 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:36.458 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:36.458 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:40:36.458 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:40:36.458 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:40:36.458 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:40:36.458 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:40:36.458 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:40:36.458 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:40:36.458 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:40:36.458 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:40:36.458 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:40:36.458 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:36.458 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:36.458 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:40:36.458 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:40:36.458 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:36.458 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:40:36.458 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:40:36.458 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:40:36.458 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:40:36.458 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:36.458 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:40:36.458 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:40:36.458 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:36.458 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < 
ver2[v] )) 00:40:36.458 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:40:36.458 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:36.458 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:40:36.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:36.458 --rc genhtml_branch_coverage=1 00:40:36.458 --rc genhtml_function_coverage=1 00:40:36.458 --rc genhtml_legend=1 00:40:36.458 --rc geninfo_all_blocks=1 00:40:36.458 --rc geninfo_unexecuted_blocks=1 00:40:36.458 00:40:36.458 ' 00:40:36.458 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:40:36.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:36.458 --rc genhtml_branch_coverage=1 00:40:36.458 --rc genhtml_function_coverage=1 00:40:36.458 --rc genhtml_legend=1 00:40:36.458 --rc geninfo_all_blocks=1 00:40:36.458 --rc geninfo_unexecuted_blocks=1 00:40:36.458 00:40:36.458 ' 00:40:36.458 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:40:36.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:36.458 --rc genhtml_branch_coverage=1 00:40:36.458 --rc genhtml_function_coverage=1 00:40:36.458 --rc genhtml_legend=1 00:40:36.458 --rc geninfo_all_blocks=1 00:40:36.458 --rc geninfo_unexecuted_blocks=1 00:40:36.458 00:40:36.458 ' 00:40:36.458 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:40:36.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:36.458 --rc genhtml_branch_coverage=1 00:40:36.458 --rc genhtml_function_coverage=1 00:40:36.458 --rc genhtml_legend=1 00:40:36.458 --rc 
geninfo_all_blocks=1 00:40:36.458 --rc geninfo_unexecuted_blocks=1 00:40:36.458 00:40:36.458 ' 00:40:36.458 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:36.458 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:40:36.458 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:36.458 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:36.458 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:36.458 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:36.458 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:36.458 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:36.458 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:36.459 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:36.459 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:36.459 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:36.459 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:36.459 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # 
NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:36.459 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:36.459 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:36.459 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:36.459 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:36.459 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:36.459 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:40:36.459 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:36.459 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:36.459 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:36.459 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:36.459 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:36.459 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:36.459 03:00:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:40:36.459 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:36.459 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:40:36.459 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:36.459 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:36.459 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:36.459 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:36.459 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:36.459 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:36.459 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:36.459 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:36.459 03:00:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:36.459 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:36.459 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:40:36.459 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:40:36.459 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:40:36.459 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:40:36.459 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:36.459 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:36.459 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:36.459 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:36.459 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:36.459 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:36.459 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:36.459 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:36.459 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:36.459 03:00:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:36.459 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:40:36.459 03:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:38.362 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:38.362 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:40:38.362 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:40:38.363 
03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:40:38.363 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:38.363 03:00:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:40:38.363 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 
)) 00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:40:38.363 Found net devices under 0000:0a:00.0: cvl_0_0 00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:40:38.363 Found net devices under 0000:0a:00.1: cvl_0_1 00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:38.363 03:00:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:38.363 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:38.363 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:40:38.364 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:40:38.364 00:40:38.364 --- 10.0.0.2 ping statistics --- 00:40:38.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:38.364 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:40:38.364 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:38.364 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:38.364 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:40:38.364 00:40:38.364 --- 10.0.0.1 ping statistics --- 00:40:38.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:38.364 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:40:38.364 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:38.364 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:40:38.364 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:38.364 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:38.364 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:38.364 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:38.364 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:38.364 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:38.364 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:38.364 03:00:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:40:38.364 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:38.364 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:38.364 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:38.364 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=3175701 00:40:38.364 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:40:38.364 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 3175701 00:40:38.364 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3175701 ']' 00:40:38.364 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:38.364 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:38.364 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:38.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:40:38.364 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:38.364 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:38.364 [2024-11-17 03:00:46.727611] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:38.364 [2024-11-17 03:00:46.730170] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:40:38.364 [2024-11-17 03:00:46.730272] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:38.622 [2024-11-17 03:00:46.878260] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:38.622 [2024-11-17 03:00:47.017237] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:38.622 [2024-11-17 03:00:47.017306] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:38.622 [2024-11-17 03:00:47.017346] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:38.622 [2024-11-17 03:00:47.017365] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:38.622 [2024-11-17 03:00:47.017400] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:38.622 [2024-11-17 03:00:47.019063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:39.187 [2024-11-17 03:00:47.392980] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:39.187 [2024-11-17 03:00:47.393449] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:40:39.446 03:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:39.446 03:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:40:39.446 03:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:39.446 03:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:39.446 03:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:39.446 03:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:39.446 03:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:39.446 03:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:39.446 03:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:39.446 [2024-11-17 03:00:47.776218] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:39.446 03:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:39.446 03:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:40:39.446 03:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:39.446 03:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:39.446 Malloc0 00:40:39.446 03:00:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:39.446 03:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:40:39.446 03:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:39.446 03:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:39.446 03:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:39.446 03:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:39.446 03:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:39.446 03:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:39.446 03:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:39.446 03:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:39.446 03:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:39.446 03:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:39.446 [2024-11-17 03:00:47.896414] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:39.446 03:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:39.446 
03:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3175850 00:40:39.446 03:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:40:39.446 03:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:39.446 03:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3175850 /var/tmp/bdevperf.sock 00:40:39.446 03:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3175850 ']' 00:40:39.446 03:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:40:39.446 03:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:39.446 03:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:40:39.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:40:39.446 03:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:39.446 03:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:39.705 [2024-11-17 03:00:47.988598] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:40:39.705 [2024-11-17 03:00:47.988737] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3175850 ] 00:40:39.705 [2024-11-17 03:00:48.132861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:39.963 [2024-11-17 03:00:48.263671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:40.529 03:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:40.529 03:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:40:40.529 03:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:40:40.529 03:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:40.529 03:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:40.787 NVMe0n1 00:40:40.787 03:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:40.787 03:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:40:41.045 Running I/O for 10 seconds... 
00:40:42.914 5485.00 IOPS, 21.43 MiB/s [2024-11-17T02:00:52.310Z] 5646.00 IOPS, 22.05 MiB/s [2024-11-17T02:00:53.686Z] 5809.00 IOPS, 22.69 MiB/s [2024-11-17T02:00:54.621Z] 5891.00 IOPS, 23.01 MiB/s [2024-11-17T02:00:55.557Z] 5945.40 IOPS, 23.22 MiB/s [2024-11-17T02:00:56.492Z] 5973.83 IOPS, 23.34 MiB/s [2024-11-17T02:00:57.427Z] 5999.57 IOPS, 23.44 MiB/s [2024-11-17T02:00:58.363Z] 6017.62 IOPS, 23.51 MiB/s [2024-11-17T02:00:59.738Z] 6031.67 IOPS, 23.56 MiB/s [2024-11-17T02:00:59.738Z] 6040.50 IOPS, 23.60 MiB/s 00:40:51.278 Latency(us) 00:40:51.278 [2024-11-17T02:00:59.738Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:51.278 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:40:51.278 Verification LBA range: start 0x0 length 0x4000 00:40:51.278 NVMe0n1 : 10.13 6062.56 23.68 0.00 0.00 168007.87 27767.85 98255.45 00:40:51.278 [2024-11-17T02:00:59.738Z] =================================================================================================================== 00:40:51.278 [2024-11-17T02:00:59.738Z] Total : 6062.56 23.68 0.00 0.00 168007.87 27767.85 98255.45 00:40:51.278 { 00:40:51.278 "results": [ 00:40:51.278 { 00:40:51.278 "job": "NVMe0n1", 00:40:51.278 "core_mask": "0x1", 00:40:51.278 "workload": "verify", 00:40:51.278 "status": "finished", 00:40:51.278 "verify_range": { 00:40:51.278 "start": 0, 00:40:51.278 "length": 16384 00:40:51.278 }, 00:40:51.278 "queue_depth": 1024, 00:40:51.278 "io_size": 4096, 00:40:51.278 "runtime": 10.132518, 00:40:51.278 "iops": 6062.560165202766, 00:40:51.278 "mibps": 23.681875645323306, 00:40:51.278 "io_failed": 0, 00:40:51.278 "io_timeout": 0, 00:40:51.278 "avg_latency_us": 168007.87232455655, 00:40:51.278 "min_latency_us": 27767.845925925925, 00:40:51.278 "max_latency_us": 98255.45481481482 00:40:51.278 } 00:40:51.278 ], 00:40:51.278 "core_count": 1 00:40:51.278 } 00:40:51.278 03:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
target/queue_depth.sh@39 -- # killprocess 3175850 00:40:51.278 03:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3175850 ']' 00:40:51.278 03:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3175850 00:40:51.278 03:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:40:51.278 03:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:51.278 03:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3175850 00:40:51.278 03:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:51.278 03:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:51.278 03:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3175850' 00:40:51.278 killing process with pid 3175850 00:40:51.278 03:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3175850 00:40:51.278 Received shutdown signal, test time was about 10.000000 seconds 00:40:51.278 00:40:51.278 Latency(us) 00:40:51.278 [2024-11-17T02:00:59.738Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:51.278 [2024-11-17T02:00:59.738Z] =================================================================================================================== 00:40:51.278 [2024-11-17T02:00:59.738Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:51.278 03:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3175850 00:40:52.243 03:01:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:40:52.244 03:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:40:52.244 03:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:52.244 03:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:40:52.244 03:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:52.244 03:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:40:52.244 03:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:52.244 03:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:52.244 rmmod nvme_tcp 00:40:52.244 rmmod nvme_fabrics 00:40:52.244 rmmod nvme_keyring 00:40:52.244 03:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:52.244 03:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:40:52.244 03:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:40:52.244 03:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 3175701 ']' 00:40:52.244 03:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 3175701 00:40:52.244 03:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3175701 ']' 00:40:52.244 03:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3175701 00:40:52.244 03:01:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:40:52.244 03:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:52.244 03:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3175701 00:40:52.244 03:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:40:52.244 03:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:40:52.244 03:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3175701' 00:40:52.244 killing process with pid 3175701 00:40:52.244 03:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3175701 00:40:52.244 03:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3175701 00:40:53.619 03:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:53.619 03:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:53.619 03:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:53.619 03:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:40:53.619 03:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:40:53.619 03:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:53.619 03:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 
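The teardown trace above flushes the SPDK-managed firewall rules by round-tripping the ruleset through a filter: `iptables-save | grep -v SPDK_NVMF | iptables-restore`. The filtering stage can be illustrated without touching a live ruleset; the sample rules below are hypothetical, and only the `SPDK_NVMF` marker comes from the trace:

```shell
# Simulate the rule-filtering stage of the teardown: every line carrying the
# SPDK_NVMF marker is dropped, everything else passes through unchanged.
# (Sample rules are illustrative; a real run pipes `iptables-save` in and
# `iptables-restore` out, which requires root.)
printf '%s\n' \
  '-A INPUT -p tcp --dport 4420 -m comment --comment SPDK_NVMF -j ACCEPT' \
  '-A INPUT -p tcp --dport 22 -j ACCEPT' \
  | grep -v SPDK_NVMF
```

Only the port-22 rule survives the filter, so restoring the filtered dump removes every rule the test harness tagged while leaving unrelated rules intact.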
00:40:53.619 03:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:53.619 03:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:53.619 03:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:53.619 03:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:53.619 03:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:55.521 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:55.521 00:40:55.521 real 0m19.517s 00:40:55.521 user 0m27.060s 00:40:55.521 sys 0m3.776s 00:40:55.521 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:55.521 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:55.521 ************************************ 00:40:55.521 END TEST nvmf_queue_depth 00:40:55.521 ************************************ 00:40:55.521 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:40:55.521 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:55.521 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:55.521 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:55.521 ************************************ 00:40:55.521 START 
TEST nvmf_target_multipath 00:40:55.521 ************************************ 00:40:55.521 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:40:55.521 * Looking for test storage... 00:40:55.780 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:55.781 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:40:55.781 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:40:55.781 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:40:55.781 03:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:40:55.781 03:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:55.781 03:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:55.781 03:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:55.781 03:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:40:55.781 03:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:40:55.781 03:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:40:55.781 03:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:40:55.781 03:01:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:40:55.781 03:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:40:55.781 03:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:40:55.781 03:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:55.781 03:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:40:55.781 03:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:40:55.781 03:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:55.781 03:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:55.781 03:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:40:55.781 03:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:40:55.781 03:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:55.781 03:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:40:55.781 03:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:40:55.781 03:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:40:55.781 03:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:40:55.781 03:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:55.781 03:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:40:55.781 03:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:40:55.781 03:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:55.781 03:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:55.781 03:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:40:55.781 03:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:55.781 03:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:40:55.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:55.781 --rc genhtml_branch_coverage=1 00:40:55.781 --rc genhtml_function_coverage=1 00:40:55.781 --rc genhtml_legend=1 00:40:55.781 --rc geninfo_all_blocks=1 00:40:55.781 --rc geninfo_unexecuted_blocks=1 00:40:55.781 00:40:55.781 ' 00:40:55.781 03:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:40:55.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:55.781 --rc genhtml_branch_coverage=1 00:40:55.781 --rc genhtml_function_coverage=1 00:40:55.781 --rc genhtml_legend=1 00:40:55.781 --rc geninfo_all_blocks=1 00:40:55.781 --rc geninfo_unexecuted_blocks=1 00:40:55.781 00:40:55.781 ' 00:40:55.781 03:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:40:55.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:55.781 --rc genhtml_branch_coverage=1 00:40:55.781 --rc genhtml_function_coverage=1 00:40:55.781 --rc genhtml_legend=1 00:40:55.781 --rc geninfo_all_blocks=1 00:40:55.781 --rc geninfo_unexecuted_blocks=1 00:40:55.781 00:40:55.781 ' 00:40:55.781 03:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:40:55.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:55.781 --rc genhtml_branch_coverage=1 00:40:55.781 --rc genhtml_function_coverage=1 00:40:55.781 --rc genhtml_legend=1 00:40:55.781 --rc geninfo_all_blocks=1 00:40:55.781 --rc geninfo_unexecuted_blocks=1 00:40:55.781 00:40:55.781 ' 00:40:55.781 03:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:55.781 03:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:40:55.781 03:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:55.781 03:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:55.781 03:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:55.781 03:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:55.781 03:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:55.781 03:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:55.781 03:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:55.781 03:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:55.781 03:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:55.781 03:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:55.781 03:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:55.781 03:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:55.781 03:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:55.781 03:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:55.781 03:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:55.781 03:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:55.781 03:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:55.781 03:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:40:55.781 03:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:55.781 03:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:55.781 03:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:55.781 03:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:55.781 03:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:55.781 03:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:55.781 03:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:40:55.781 03:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:55.781 03:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:40:55.782 03:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:55.782 03:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:55.782 03:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:55.782 03:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:55.782 03:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:55.782 03:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:55.782 03:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:55.782 03:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:55.782 03:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:55.782 03:01:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:55.782 03:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:55.782 03:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:55.782 03:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:40:55.782 03:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:55.782 03:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:40:55.782 03:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:55.782 03:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:55.782 03:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:55.782 03:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:55.782 03:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:55.782 03:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:55.782 03:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:55.782 03:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:55.782 03:01:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:55.782 03:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:55.782 03:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:40:55.782 03:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:40:57.684 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:57.684 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:40:57.684 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:57.684 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:57.684 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:57.684 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:57.684 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:57.684 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:40:57.684 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:57.684 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:40:57.684 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:40:57.684 03:01:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:40:57.684 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:40:57.684 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:40:57.684 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:40:57.684 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:57.684 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:57.684 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:57.684 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:57.684 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:57.684 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:57.684 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:57.684 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:57.684 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:57.684 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:57.684 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:57.684 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:57.684 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:57.684 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:57.684 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:57.684 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:57.684 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:57.684 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:57.684 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:57.684 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:40:57.684 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:40:57.684 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:57.684 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:57.684 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:57.684 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:57.684 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:57.684 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:57.684 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:40:57.684 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:40:57.684 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:57.684 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:57.684 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:57.684 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:57.684 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:57.684 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:57.684 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:57.684 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:57.684 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:57.684 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:57.684 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:40:57.684 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:57.684 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:57.684 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:57.684 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:57.684 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:40:57.684 Found net devices under 0000:0a:00.0: cvl_0_0 00:40:57.684 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:57.684 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:57.684 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:57.684 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:57.684 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:57.684 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:57.684 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:57.685 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:57.685 03:01:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:40:57.685 Found net devices under 0000:0a:00.1: cvl_0_1 00:40:57.685 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:57.685 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:57.685 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:40:57.685 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:57.685 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:57.685 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:57.685 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:57.685 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:57.685 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:57.685 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:57.685 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:57.685 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:57.685 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:57.685 03:01:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:57.685 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:57.685 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:57.685 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:57.685 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:57.685 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:57.685 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:57.685 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:57.944 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:57.944 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:57.944 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:57.944 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:57.944 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:57.944 03:01:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:57.944 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:57.944 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:57.944 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:57.944 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:40:57.944 00:40:57.944 --- 10.0.0.2 ping statistics --- 00:40:57.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:57.944 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:40:57.944 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:57.944 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:57.944 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:40:57.944 00:40:57.944 --- 10.0.0.1 ping statistics --- 00:40:57.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:57.944 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:40:57.944 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:57.944 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:40:57.944 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:57.944 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:57.944 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:57.944 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:57.944 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:57.944 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:57.944 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:57.944 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:40:57.944 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:40:57.944 only one NIC for nvmf test 00:40:57.944 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:40:57.944 03:01:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:57.944 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:40:57.944 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:57.944 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:40:57.944 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:57.944 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:57.944 rmmod nvme_tcp 00:40:57.944 rmmod nvme_fabrics 00:40:57.944 rmmod nvme_keyring 00:40:57.944 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:57.944 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:40:57.944 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:40:57.944 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:40:57.944 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:57.944 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:57.944 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:57.944 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:40:57.944 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:40:57.944 03:01:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:57.944 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:40:57.944 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:57.944 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:57.944 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:57.944 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:57.944 03:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:00.478 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:00.479 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:41:00.479 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:41:00.479 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:00.479 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:41:00.479 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:00.479 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:41:00.479 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
00:41:00.479 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:00.479 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:00.479 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:41:00.479 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:41:00.479 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:41:00.479 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:00.479 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:00.479 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:00.479 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:41:00.479 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:41:00.479 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:41:00.479 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:00.479 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:00.479 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:00.479 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:00.479 
03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:00.479 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:00.479 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:00.479 00:41:00.479 real 0m4.424s 00:41:00.479 user 0m0.847s 00:41:00.479 sys 0m1.509s 00:41:00.479 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:00.479 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:41:00.479 ************************************ 00:41:00.479 END TEST nvmf_target_multipath 00:41:00.479 ************************************ 00:41:00.479 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:41:00.479 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:41:00.479 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:00.479 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:00.479 ************************************ 00:41:00.479 START TEST nvmf_zcopy 00:41:00.479 ************************************ 00:41:00.479 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:41:00.479 * Looking for test storage... 
00:41:00.479 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:00.479 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:41:00.479 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:41:00.479 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:41:00.479 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:41:00.479 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:00.479 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:00.479 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:00.479 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:41:00.479 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:41:00.479 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:41:00.479 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:41:00.479 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:41:00.479 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:41:00.479 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:41:00.479 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:00.479 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:41:00.479 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:41:00.479 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:00.479 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:41:00.479 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:41:00.479 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:41:00.479 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:00.479 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:41:00.479 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:41:00.479 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:41:00.479 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:41:00.479 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:00.479 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:41:00.479 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:41:00.479 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:00.479 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:00.479 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:41:00.479 03:01:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:00.479 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:41:00.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:00.479 --rc genhtml_branch_coverage=1 00:41:00.479 --rc genhtml_function_coverage=1 00:41:00.479 --rc genhtml_legend=1 00:41:00.479 --rc geninfo_all_blocks=1 00:41:00.479 --rc geninfo_unexecuted_blocks=1 00:41:00.479 00:41:00.479 ' 00:41:00.479 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:41:00.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:00.479 --rc genhtml_branch_coverage=1 00:41:00.479 --rc genhtml_function_coverage=1 00:41:00.479 --rc genhtml_legend=1 00:41:00.479 --rc geninfo_all_blocks=1 00:41:00.479 --rc geninfo_unexecuted_blocks=1 00:41:00.479 00:41:00.479 ' 00:41:00.479 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:41:00.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:00.479 --rc genhtml_branch_coverage=1 00:41:00.479 --rc genhtml_function_coverage=1 00:41:00.479 --rc genhtml_legend=1 00:41:00.479 --rc geninfo_all_blocks=1 00:41:00.479 --rc geninfo_unexecuted_blocks=1 00:41:00.479 00:41:00.479 ' 00:41:00.479 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:41:00.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:00.479 --rc genhtml_branch_coverage=1 00:41:00.479 --rc genhtml_function_coverage=1 00:41:00.479 --rc genhtml_legend=1 00:41:00.479 --rc geninfo_all_blocks=1 00:41:00.479 --rc geninfo_unexecuted_blocks=1 00:41:00.479 00:41:00.479 ' 00:41:00.479 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:00.479 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:41:00.479 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:00.480 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:00.480 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:00.480 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:00.480 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:00.480 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:00.480 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:00.480 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:00.480 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:00.480 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:00.480 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:41:00.480 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:41:00.480 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:00.480 03:01:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:00.480 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:00.480 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:00.480 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:00.480 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:41:00.480 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:00.480 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:00.480 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:00.480 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:00.480 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:00.480 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:00.480 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:41:00.480 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:00.480 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:41:00.480 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:00.480 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:00.480 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:00.480 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:00.480 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:00.480 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:00.480 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:00.480 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:00.480 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:00.480 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:00.480 03:01:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:41:00.480 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:00.480 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:00.480 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:00.480 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:00.480 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:00.480 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:00.480 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:00.480 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:00.480 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:00.480 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:00.480 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:41:00.480 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:02.381 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:02.382 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:41:02.382 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:02.382 
03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:02.382 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:02.382 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:02.382 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:02.382 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:41:02.382 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:02.382 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:41:02.382 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:41:02.382 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:41:02.382 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:41:02.382 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:41:02.382 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:41:02.382 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:02.382 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:02.382 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:02.382 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:02.382 03:01:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:02.382 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:02.382 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:02.382 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:02.382 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:02.382 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:02.382 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:02.382 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:02.382 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:02.382 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:02.382 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:02.382 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:02.382 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:02.382 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:02.382 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:41:02.382 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:41:02.382 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:41:02.382 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:02.382 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:02.382 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:02.382 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:02.382 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:02.382 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:02.382 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:41:02.382 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:41:02.382 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:02.382 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:02.382 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:02.382 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:02.382 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:02.382 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:02.382 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:41:02.382 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:02.382 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:02.382 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:02.382 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:02.382 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:02.382 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:02.382 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:02.382 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:02.382 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:41:02.382 Found net devices under 0000:0a:00.0: cvl_0_0 00:41:02.382 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:02.382 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:02.382 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:02.382 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:02.382 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:02.382 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:41:02.382 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:02.382 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:02.382 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:41:02.382 Found net devices under 0000:0a:00.1: cvl_0_1 00:41:02.382 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:02.382 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:02.382 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:41:02.382 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:02.382 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:02.382 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:02.382 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:02.382 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:02.382 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:02.382 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:02.382 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:02.382 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:41:02.382 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:02.382 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:02.382 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:02.382 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:02.382 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:02.382 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:02.382 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:02.382 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:02.382 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:02.382 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:02.382 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:02.383 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:02.383 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:02.383 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:02.383 03:01:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:02.383 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:02.383 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:02.383 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:02.383 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms 00:41:02.383 00:41:02.383 --- 10.0.0.2 ping statistics --- 00:41:02.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:02.383 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:41:02.383 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:02.383 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:02.383 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:41:02.383 00:41:02.383 --- 10.0.0.1 ping statistics --- 00:41:02.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:02.383 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:41:02.383 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:02.383 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:41:02.383 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:02.383 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:02.383 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:02.383 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:02.383 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:02.383 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:02.383 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:02.383 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:41:02.383 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:02.383 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:02.383 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:02.383 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # 
nvmfpid=3181288 00:41:02.383 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:41:02.383 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 3181288 00:41:02.383 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 3181288 ']' 00:41:02.383 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:02.383 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:02.383 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:02.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:02.383 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:02.383 03:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:02.383 [2024-11-17 03:01:10.795719] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:02.383 [2024-11-17 03:01:10.798174] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:41:02.383 [2024-11-17 03:01:10.798288] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:02.641 [2024-11-17 03:01:10.952448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:02.641 [2024-11-17 03:01:11.088701] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:02.641 [2024-11-17 03:01:11.088783] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:02.641 [2024-11-17 03:01:11.088823] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:02.641 [2024-11-17 03:01:11.088845] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:02.641 [2024-11-17 03:01:11.088866] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:02.641 [2024-11-17 03:01:11.090513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:03.208 [2024-11-17 03:01:11.453782] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:03.208 [2024-11-17 03:01:11.454253] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:41:03.467 03:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:03.467 03:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:41:03.467 03:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:03.467 03:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:03.467 03:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:03.467 03:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:03.467 03:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:41:03.467 03:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:41:03.467 03:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:03.467 03:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:03.467 [2024-11-17 03:01:11.783624] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:03.467 03:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:03.467 03:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:41:03.467 03:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:03.467 03:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:03.467 
03:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:03.467 03:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:03.467 03:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:03.467 03:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:03.467 [2024-11-17 03:01:11.799799] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:03.467 03:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:03.467 03:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:41:03.467 03:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:03.467 03:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:03.467 03:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:03.467 03:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:41:03.467 03:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:03.467 03:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:03.467 malloc0 00:41:03.467 03:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:03.467 03:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:41:03.467 03:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:03.467 03:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:03.467 03:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:03.467 03:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:41:03.467 03:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:41:03.467 03:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:41:03.468 03:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:41:03.468 03:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:03.468 03:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:03.468 { 00:41:03.468 "params": { 00:41:03.468 "name": "Nvme$subsystem", 00:41:03.468 "trtype": "$TEST_TRANSPORT", 00:41:03.468 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:03.468 "adrfam": "ipv4", 00:41:03.468 "trsvcid": "$NVMF_PORT", 00:41:03.468 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:03.468 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:03.468 "hdgst": ${hdgst:-false}, 00:41:03.468 "ddgst": ${ddgst:-false} 00:41:03.468 }, 00:41:03.468 "method": "bdev_nvme_attach_controller" 00:41:03.468 } 00:41:03.468 EOF 00:41:03.468 )") 00:41:03.468 03:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:41:03.468 03:01:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:41:03.468 03:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:41:03.468 03:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:03.468 "params": { 00:41:03.468 "name": "Nvme1", 00:41:03.468 "trtype": "tcp", 00:41:03.468 "traddr": "10.0.0.2", 00:41:03.468 "adrfam": "ipv4", 00:41:03.468 "trsvcid": "4420", 00:41:03.468 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:03.468 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:03.468 "hdgst": false, 00:41:03.468 "ddgst": false 00:41:03.468 }, 00:41:03.468 "method": "bdev_nvme_attach_controller" 00:41:03.468 }' 00:41:03.726 [2024-11-17 03:01:11.961045] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:41:03.726 [2024-11-17 03:01:11.961208] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3181442 ] 00:41:03.726 [2024-11-17 03:01:12.121621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:03.984 [2024-11-17 03:01:12.261017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:04.551 Running I/O for 10 seconds... 
00:41:06.423 3956.00 IOPS, 30.91 MiB/s [2024-11-17T02:01:15.819Z] 4058.00 IOPS, 31.70 MiB/s [2024-11-17T02:01:16.754Z] 4065.67 IOPS, 31.76 MiB/s [2024-11-17T02:01:18.129Z] 4074.75 IOPS, 31.83 MiB/s [2024-11-17T02:01:19.063Z] 4081.40 IOPS, 31.89 MiB/s [2024-11-17T02:01:19.997Z] 4081.33 IOPS, 31.89 MiB/s [2024-11-17T02:01:20.930Z] 4084.14 IOPS, 31.91 MiB/s [2024-11-17T02:01:21.866Z] 4081.50 IOPS, 31.89 MiB/s [2024-11-17T02:01:22.802Z] 4081.11 IOPS, 31.88 MiB/s [2024-11-17T02:01:22.802Z] 4089.30 IOPS, 31.95 MiB/s 00:41:14.342 Latency(us) 00:41:14.342 [2024-11-17T02:01:22.802Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:14.342 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:41:14.342 Verification LBA range: start 0x0 length 0x1000 00:41:14.342 Nvme1n1 : 10.03 4091.51 31.96 0.00 0.00 31198.61 5849.69 41943.04 00:41:14.342 [2024-11-17T02:01:22.802Z] =================================================================================================================== 00:41:14.342 [2024-11-17T02:01:22.802Z] Total : 4091.51 31.96 0.00 0.00 31198.61 5849.69 41943.04 00:41:15.277 03:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3182860 00:41:15.277 03:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:41:15.277 03:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:15.277 03:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:41:15.277 03:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:41:15.277 03:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:41:15.277 03:01:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:41:15.277 03:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:15.277 03:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:15.277 { 00:41:15.277 "params": { 00:41:15.277 "name": "Nvme$subsystem", 00:41:15.277 "trtype": "$TEST_TRANSPORT", 00:41:15.277 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:15.277 "adrfam": "ipv4", 00:41:15.277 "trsvcid": "$NVMF_PORT", 00:41:15.277 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:15.277 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:15.277 "hdgst": ${hdgst:-false}, 00:41:15.277 "ddgst": ${ddgst:-false} 00:41:15.277 }, 00:41:15.277 "method": "bdev_nvme_attach_controller" 00:41:15.277 } 00:41:15.277 EOF 00:41:15.277 )") 00:41:15.277 03:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:41:15.277 [2024-11-17 03:01:23.683524] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.277 [2024-11-17 03:01:23.683584] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.277 03:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:41:15.277 03:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:41:15.277 03:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:15.277 "params": { 00:41:15.277 "name": "Nvme1", 00:41:15.277 "trtype": "tcp", 00:41:15.277 "traddr": "10.0.0.2", 00:41:15.277 "adrfam": "ipv4", 00:41:15.277 "trsvcid": "4420", 00:41:15.277 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:15.277 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:15.277 "hdgst": false, 00:41:15.277 "ddgst": false 00:41:15.277 }, 00:41:15.277 "method": "bdev_nvme_attach_controller" 00:41:15.277 }' 00:41:15.277 [2024-11-17 03:01:23.691419] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.277 [2024-11-17 03:01:23.691474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.277 [2024-11-17 03:01:23.699397] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.277 [2024-11-17 03:01:23.699429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.277 [2024-11-17 03:01:23.707396] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.277 [2024-11-17 03:01:23.707424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.277 [2024-11-17 03:01:23.715418] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.277 [2024-11-17 03:01:23.715464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.277 [2024-11-17 03:01:23.723397] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.277 [2024-11-17 03:01:23.723430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.277 [2024-11-17 03:01:23.731393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:41:15.277 [2024-11-17 03:01:23.731420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.536 [2024-11-17 03:01:23.739408] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.536 [2024-11-17 03:01:23.739456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.536 [2024-11-17 03:01:23.747365] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.536 [2024-11-17 03:01:23.747408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.536 [2024-11-17 03:01:23.755377] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.536 [2024-11-17 03:01:23.755419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.536 [2024-11-17 03:01:23.763364] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.536 [2024-11-17 03:01:23.763410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.536 [2024-11-17 03:01:23.766538] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:41:15.536 [2024-11-17 03:01:23.766652] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3182860 ] 00:41:15.536 [2024-11-17 03:01:23.771400] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.536 [2024-11-17 03:01:23.771426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.536 [2024-11-17 03:01:23.779404] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.536 [2024-11-17 03:01:23.779432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.536 [2024-11-17 03:01:23.787359] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.536 [2024-11-17 03:01:23.787406] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.536 [2024-11-17 03:01:23.795410] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.536 [2024-11-17 03:01:23.795438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.536 [2024-11-17 03:01:23.803375] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.536 [2024-11-17 03:01:23.803418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.537 [2024-11-17 03:01:23.811395] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.537 [2024-11-17 03:01:23.811422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.537 [2024-11-17 03:01:23.819375] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.537 [2024-11-17 03:01:23.819415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:41:15.537 [2024-11-17 03:01:23.827366] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.537 [2024-11-17 03:01:23.827419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.537 [2024-11-17 03:01:23.835392] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.537 [2024-11-17 03:01:23.835418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.537 [2024-11-17 03:01:23.843402] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.537 [2024-11-17 03:01:23.843430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.537 [2024-11-17 03:01:23.851371] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.537 [2024-11-17 03:01:23.851412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.537 [2024-11-17 03:01:23.859395] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.537 [2024-11-17 03:01:23.859421] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.537 [2024-11-17 03:01:23.867403] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.537 [2024-11-17 03:01:23.867435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.537 [2024-11-17 03:01:23.875395] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.537 [2024-11-17 03:01:23.875428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.537 [2024-11-17 03:01:23.883394] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.537 [2024-11-17 03:01:23.883420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.537 [2024-11-17 03:01:23.891375] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.537 [2024-11-17 03:01:23.891416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.537 [2024-11-17 03:01:23.899414] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.537 [2024-11-17 03:01:23.899446] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.537 [2024-11-17 03:01:23.907404] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.537 [2024-11-17 03:01:23.907430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.537 [2024-11-17 03:01:23.915363] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.537 [2024-11-17 03:01:23.915408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.537 [2024-11-17 03:01:23.917157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:15.537 [2024-11-17 03:01:23.923373] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.537 [2024-11-17 03:01:23.923414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.537 [2024-11-17 03:01:23.931402] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.537 [2024-11-17 03:01:23.931431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.537 [2024-11-17 03:01:23.939429] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.537 [2024-11-17 03:01:23.939475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.537 [2024-11-17 03:01:23.947406] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.537 [2024-11-17 03:01:23.947440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:41:15.537 [2024-11-17 03:01:23.955390] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.537 [2024-11-17 03:01:23.955423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.537 [2024-11-17 03:01:23.963424] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.537 [2024-11-17 03:01:23.963457] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.537 [2024-11-17 03:01:23.971401] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.537 [2024-11-17 03:01:23.971433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.537 [2024-11-17 03:01:23.979367] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.537 [2024-11-17 03:01:23.979421] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.537 [2024-11-17 03:01:23.987404] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.537 [2024-11-17 03:01:23.987436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.537 [2024-11-17 03:01:23.995403] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.537 [2024-11-17 03:01:23.995437] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.796 [2024-11-17 03:01:24.003432] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.796 [2024-11-17 03:01:24.003479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.796 [2024-11-17 03:01:24.011398] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.796 [2024-11-17 03:01:24.011430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.796 [2024-11-17 03:01:24.019387] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.796 [2024-11-17 03:01:24.019417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.796 [2024-11-17 03:01:24.027395] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.796 [2024-11-17 03:01:24.027423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.796 [2024-11-17 03:01:24.035393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.796 [2024-11-17 03:01:24.035419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.796 [2024-11-17 03:01:24.043364] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.796 [2024-11-17 03:01:24.043407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.796 [2024-11-17 03:01:24.051408] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.796 [2024-11-17 03:01:24.051436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.796 [2024-11-17 03:01:24.052319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:15.796 [2024-11-17 03:01:24.059412] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.796 [2024-11-17 03:01:24.059453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.796 [2024-11-17 03:01:24.067418] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.796 [2024-11-17 03:01:24.067454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.796 [2024-11-17 03:01:24.075508] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.796 [2024-11-17 03:01:24.075557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:41:15.796 [2024-11-17 03:01:24.083418] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.796 [2024-11-17 03:01:24.083466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.796 [2024-11-17 03:01:24.091407] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.796 [2024-11-17 03:01:24.091433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.796 [2024-11-17 03:01:24.099417] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.796 [2024-11-17 03:01:24.099443] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.796 [2024-11-17 03:01:24.107408] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.796 [2024-11-17 03:01:24.107434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.796 [2024-11-17 03:01:24.115393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.796 [2024-11-17 03:01:24.115419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.796 [2024-11-17 03:01:24.123394] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.796 [2024-11-17 03:01:24.123429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.796 [2024-11-17 03:01:24.131368] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.796 [2024-11-17 03:01:24.131410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.796 [2024-11-17 03:01:24.139468] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.796 [2024-11-17 03:01:24.139516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.796 [2024-11-17 03:01:24.147464] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.796 [2024-11-17 03:01:24.147511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.796 [2024-11-17 03:01:24.155488] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.796 [2024-11-17 03:01:24.155536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.796 [2024-11-17 03:01:24.163501] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.796 [2024-11-17 03:01:24.163546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.796 [2024-11-17 03:01:24.171375] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.796 [2024-11-17 03:01:24.171420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.796 [2024-11-17 03:01:24.179419] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.796 [2024-11-17 03:01:24.179462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.796 [2024-11-17 03:01:24.187379] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.796 [2024-11-17 03:01:24.187429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.796 [2024-11-17 03:01:24.195401] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.796 [2024-11-17 03:01:24.195428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.796 [2024-11-17 03:01:24.203410] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.796 [2024-11-17 03:01:24.203453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.796 [2024-11-17 03:01:24.211404] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:41:15.796 [2024-11-17 03:01:24.211431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.796 [2024-11-17 03:01:24.219395] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.796 [2024-11-17 03:01:24.219421] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.796 [2024-11-17 03:01:24.227398] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.796 [2024-11-17 03:01:24.227425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.796 [2024-11-17 03:01:24.235364] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.796 [2024-11-17 03:01:24.235405] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.796 [2024-11-17 03:01:24.243393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.796 [2024-11-17 03:01:24.243421] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.796 [2024-11-17 03:01:24.251386] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.796 [2024-11-17 03:01:24.251433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.056 [2024-11-17 03:01:24.259406] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.056 [2024-11-17 03:01:24.259434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.056 [2024-11-17 03:01:24.267396] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.056 [2024-11-17 03:01:24.267423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.056 [2024-11-17 03:01:24.275367] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.056 
[2024-11-17 03:01:24.275418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.056 [2024-11-17 03:01:24.283481] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.056 [2024-11-17 03:01:24.283529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.056 [2024-11-17 03:01:24.291493] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.056 [2024-11-17 03:01:24.291542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.056 [2024-11-17 03:01:24.299473] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.056 [2024-11-17 03:01:24.299519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.056 [2024-11-17 03:01:24.307411] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.056 [2024-11-17 03:01:24.307438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.056 [2024-11-17 03:01:24.315410] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.056 [2024-11-17 03:01:24.315436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.056 [2024-11-17 03:01:24.323368] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.056 [2024-11-17 03:01:24.323410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.056 [2024-11-17 03:01:24.331393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.056 [2024-11-17 03:01:24.331421] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.056 [2024-11-17 03:01:24.339370] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.056 [2024-11-17 03:01:24.339410] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.056 [2024-11-17 03:01:24.347398] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.056 [2024-11-17 03:01:24.347425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.056 [2024-11-17 03:01:24.355398] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.056 [2024-11-17 03:01:24.355425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.056 [2024-11-17 03:01:24.363369] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.056 [2024-11-17 03:01:24.363412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.056 [2024-11-17 03:01:24.371394] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.056 [2024-11-17 03:01:24.371421] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.056 [2024-11-17 03:01:24.379395] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.056 [2024-11-17 03:01:24.379423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.056 [2024-11-17 03:01:24.387403] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.056 [2024-11-17 03:01:24.387429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.056 [2024-11-17 03:01:24.395424] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.056 [2024-11-17 03:01:24.395453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.056 [2024-11-17 03:01:24.403380] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.056 [2024-11-17 03:01:24.403427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:41:16.056 [2024-11-17 03:01:24.411413] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.056 [2024-11-17 03:01:24.411445] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.056 [2024-11-17 03:01:24.419403] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.056 [2024-11-17 03:01:24.419432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.056 [2024-11-17 03:01:24.427427] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.056 [2024-11-17 03:01:24.427458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.056 [2024-11-17 03:01:24.435407] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.056 [2024-11-17 03:01:24.435438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.056 [2024-11-17 03:01:24.443398] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.056 [2024-11-17 03:01:24.443427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.056 [2024-11-17 03:01:24.451408] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.056 [2024-11-17 03:01:24.451437] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.056 [2024-11-17 03:01:24.459396] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.056 [2024-11-17 03:01:24.459425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.056 [2024-11-17 03:01:24.467373] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.056 [2024-11-17 03:01:24.467415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.056 [2024-11-17 03:01:24.475417] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.056 [2024-11-17 03:01:24.475462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.056 [2024-11-17 03:01:24.483388] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.056 [2024-11-17 03:01:24.483433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.056 [2024-11-17 03:01:24.491374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.056 [2024-11-17 03:01:24.491402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.056 [2024-11-17 03:01:24.499398] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.056 [2024-11-17 03:01:24.499426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.056 [2024-11-17 03:01:24.507394] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.056 [2024-11-17 03:01:24.507421] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.056 [2024-11-17 03:01:24.515372] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.056 [2024-11-17 03:01:24.515403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.315 [2024-11-17 03:01:24.523416] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.315 [2024-11-17 03:01:24.523444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.315 [2024-11-17 03:01:24.531377] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.315 [2024-11-17 03:01:24.531420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.315 [2024-11-17 03:01:24.539383] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:41:16.315 [2024-11-17 03:01:24.539425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.315 [2024-11-17 03:01:24.547405] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.315 [2024-11-17 03:01:24.547432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.315 [2024-11-17 03:01:24.555382] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.315 [2024-11-17 03:01:24.555423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.315 [2024-11-17 03:01:24.563411] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.315 [2024-11-17 03:01:24.563438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.315 [2024-11-17 03:01:24.571443] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.315 [2024-11-17 03:01:24.571472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.315 [2024-11-17 03:01:24.579388] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.315 [2024-11-17 03:01:24.579432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.315 [2024-11-17 03:01:24.587388] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.315 [2024-11-17 03:01:24.587418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.315 [2024-11-17 03:01:24.595373] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.315 [2024-11-17 03:01:24.595418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.315 [2024-11-17 03:01:24.603398] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.315 
[2024-11-17 03:01:24.603426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.315 [2024-11-17 03:01:24.611403] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.315 [2024-11-17 03:01:24.611432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.315 [2024-11-17 03:01:24.619371] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.315 [2024-11-17 03:01:24.619417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.315 [2024-11-17 03:01:24.627407] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.315 [2024-11-17 03:01:24.627439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.315 [2024-11-17 03:01:24.635413] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.315 [2024-11-17 03:01:24.635442] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.315 Running I/O for 5 seconds... 
00:41:16.315 [2024-11-17 03:01:24.658317] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.315 [2024-11-17 03:01:24.658360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.315 [2024-11-17 03:01:24.671610] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.315 [2024-11-17 03:01:24.671644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.315 [2024-11-17 03:01:24.686272] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.315 [2024-11-17 03:01:24.686309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.315 [2024-11-17 03:01:24.700144] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.315 [2024-11-17 03:01:24.700181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.315 [2024-11-17 03:01:24.718628] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.315 [2024-11-17 03:01:24.718680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.315 [2024-11-17 03:01:24.731867] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.315 [2024-11-17 03:01:24.731902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.315 [2024-11-17 03:01:24.751721] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.316 [2024-11-17 03:01:24.751755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.316 [2024-11-17 03:01:24.765502] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.316 [2024-11-17 03:01:24.765535] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.574 [2024-11-17 03:01:24.780631] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.574 [2024-11-17 03:01:24.780666] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.574 [2024-11-17 03:01:24.794502] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.574 [2024-11-17 03:01:24.794537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.574 [2024-11-17 03:01:24.808811] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.574 [2024-11-17 03:01:24.808869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.574 [2024-11-17 03:01:24.823093] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.574 [2024-11-17 03:01:24.823141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.574 [2024-11-17 03:01:24.837305] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.574 [2024-11-17 03:01:24.837341] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.574 [2024-11-17 03:01:24.851204] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.574 [2024-11-17 03:01:24.851242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.574 [2024-11-17 03:01:24.865570] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.574 [2024-11-17 03:01:24.865603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.574 [2024-11-17 03:01:24.880427] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.574 [2024-11-17 03:01:24.880475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.574 [2024-11-17 03:01:24.896330] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:41:16.574 [2024-11-17 03:01:24.896380] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.574 [2024-11-17 03:01:24.908489] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.574 [2024-11-17 03:01:24.908522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.574 [2024-11-17 03:01:24.924366] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.574 [2024-11-17 03:01:24.924400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.574 [2024-11-17 03:01:24.938364] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.574 [2024-11-17 03:01:24.938416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.574 [2024-11-17 03:01:24.952685] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.574 [2024-11-17 03:01:24.952718] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.574 [2024-11-17 03:01:24.967056] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.574 [2024-11-17 03:01:24.967088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.574 [2024-11-17 03:01:24.981103] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.574 [2024-11-17 03:01:24.981136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.574 [2024-11-17 03:01:24.995721] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.574 [2024-11-17 03:01:24.995754] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.574 [2024-11-17 03:01:25.009634] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.574 
[2024-11-17 03:01:25.009667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.574 [2024-11-17 03:01:25.024009] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.574 [2024-11-17 03:01:25.024041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.833 [2024-11-17 03:01:25.040903] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.833 [2024-11-17 03:01:25.040938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.833 [2024-11-17 03:01:25.052794] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.833 [2024-11-17 03:01:25.052828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.833 [2024-11-17 03:01:25.068875] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.833 [2024-11-17 03:01:25.068910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.833 [2024-11-17 03:01:25.083482] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.833 [2024-11-17 03:01:25.083542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.833 [2024-11-17 03:01:25.098282] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.833 [2024-11-17 03:01:25.098323] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.833 [2024-11-17 03:01:25.112540] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.833 [2024-11-17 03:01:25.112575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.833 [2024-11-17 03:01:25.128768] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.833 [2024-11-17 03:01:25.128804] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.833 [2024-11-17 03:01:25.140957] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.833 [2024-11-17 03:01:25.140990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... same error pair repeats continuously — subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use, followed by nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace — at intervals of roughly 12–20 ms from 03:01:25.140 through 03:01:27.479 ...]
8869.00 IOPS, 69.29 MiB/s [2024-11-17T02:01:25.810Z]
8638.00 IOPS, 67.48 MiB/s [2024-11-17T02:01:26.847Z]
[2024-11-17 03:01:27.479068] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.164 [2024-11-17 03:01:27.479123] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.164 [2024-11-17 03:01:27.494216]
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.164 [2024-11-17 03:01:27.494249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.164 [2024-11-17 03:01:27.510251] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.164 [2024-11-17 03:01:27.510283] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.164 [2024-11-17 03:01:27.524898] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.164 [2024-11-17 03:01:27.524937] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.164 [2024-11-17 03:01:27.539757] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.164 [2024-11-17 03:01:27.539795] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.164 [2024-11-17 03:01:27.553952] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.164 [2024-11-17 03:01:27.553991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.164 [2024-11-17 03:01:27.569542] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.164 [2024-11-17 03:01:27.569581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.164 [2024-11-17 03:01:27.584553] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.164 [2024-11-17 03:01:27.584592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.164 [2024-11-17 03:01:27.600346] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.164 [2024-11-17 03:01:27.600401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.164 [2024-11-17 03:01:27.615506] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:41:19.164 [2024-11-17 03:01:27.615546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.508 [2024-11-17 03:01:27.630220] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.508 [2024-11-17 03:01:27.630254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.508 [2024-11-17 03:01:27.645436] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.508 [2024-11-17 03:01:27.645475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.508 8524.67 IOPS, 66.60 MiB/s [2024-11-17T02:01:27.968Z] [2024-11-17 03:01:27.660683] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.508 [2024-11-17 03:01:27.660722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.508 [2024-11-17 03:01:27.675469] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.508 [2024-11-17 03:01:27.675508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.508 [2024-11-17 03:01:27.689202] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.508 [2024-11-17 03:01:27.689236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.508 [2024-11-17 03:01:27.705418] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.508 [2024-11-17 03:01:27.705458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.508 [2024-11-17 03:01:27.720112] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.508 [2024-11-17 03:01:27.720172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.508 [2024-11-17 03:01:27.735760] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:41:19.508 [2024-11-17 03:01:27.735799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.508 [2024-11-17 03:01:27.751149] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.508 [2024-11-17 03:01:27.751184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.508 [2024-11-17 03:01:27.766513] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.508 [2024-11-17 03:01:27.766554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.508 [2024-11-17 03:01:27.781672] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.508 [2024-11-17 03:01:27.781712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.508 [2024-11-17 03:01:27.796574] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.508 [2024-11-17 03:01:27.796613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.508 [2024-11-17 03:01:27.810888] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.508 [2024-11-17 03:01:27.810927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.508 [2024-11-17 03:01:27.825497] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.508 [2024-11-17 03:01:27.825537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.508 [2024-11-17 03:01:27.841628] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.508 [2024-11-17 03:01:27.841666] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.508 [2024-11-17 03:01:27.856611] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.508 
[2024-11-17 03:01:27.856651] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.508 [2024-11-17 03:01:27.871900] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.508 [2024-11-17 03:01:27.871938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.508 [2024-11-17 03:01:27.891741] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.508 [2024-11-17 03:01:27.891806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.508 [2024-11-17 03:01:27.908418] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.508 [2024-11-17 03:01:27.908459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.508 [2024-11-17 03:01:27.928597] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.508 [2024-11-17 03:01:27.928637] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.786 [2024-11-17 03:01:27.944992] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.786 [2024-11-17 03:01:27.945036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.786 [2024-11-17 03:01:27.962405] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.786 [2024-11-17 03:01:27.962438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.786 [2024-11-17 03:01:27.978562] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.786 [2024-11-17 03:01:27.978602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.786 [2024-11-17 03:01:27.993229] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.786 [2024-11-17 03:01:27.993263] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.786 [2024-11-17 03:01:28.008049] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.786 [2024-11-17 03:01:28.008088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.786 [2024-11-17 03:01:28.028158] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.786 [2024-11-17 03:01:28.028192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.786 [2024-11-17 03:01:28.042271] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.786 [2024-11-17 03:01:28.042304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.786 [2024-11-17 03:01:28.059075] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.786 [2024-11-17 03:01:28.059139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.786 [2024-11-17 03:01:28.073445] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.786 [2024-11-17 03:01:28.073485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.786 [2024-11-17 03:01:28.088925] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.786 [2024-11-17 03:01:28.088964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.786 [2024-11-17 03:01:28.103985] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.786 [2024-11-17 03:01:28.104025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.786 [2024-11-17 03:01:28.118596] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.786 [2024-11-17 03:01:28.118637] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:41:19.786 [2024-11-17 03:01:28.133813] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.786 [2024-11-17 03:01:28.133852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.786 [2024-11-17 03:01:28.149041] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.786 [2024-11-17 03:01:28.149091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.786 [2024-11-17 03:01:28.163519] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.786 [2024-11-17 03:01:28.163558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.786 [2024-11-17 03:01:28.180322] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.786 [2024-11-17 03:01:28.180359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.787 [2024-11-17 03:01:28.196714] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.787 [2024-11-17 03:01:28.196764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.787 [2024-11-17 03:01:28.211900] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.787 [2024-11-17 03:01:28.211938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.787 [2024-11-17 03:01:28.227033] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.787 [2024-11-17 03:01:28.227065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.787 [2024-11-17 03:01:28.241886] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.787 [2024-11-17 03:01:28.241930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.045 [2024-11-17 03:01:28.258859] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.045 [2024-11-17 03:01:28.258899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.045 [2024-11-17 03:01:28.274005] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.045 [2024-11-17 03:01:28.274044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.045 [2024-11-17 03:01:28.289833] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.045 [2024-11-17 03:01:28.289872] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.045 [2024-11-17 03:01:28.305485] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.045 [2024-11-17 03:01:28.305525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.045 [2024-11-17 03:01:28.321230] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.045 [2024-11-17 03:01:28.321264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.045 [2024-11-17 03:01:28.336340] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.045 [2024-11-17 03:01:28.336392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.045 [2024-11-17 03:01:28.351296] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.046 [2024-11-17 03:01:28.351331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.046 [2024-11-17 03:01:28.367013] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.046 [2024-11-17 03:01:28.367053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.046 [2024-11-17 03:01:28.382465] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:41:20.046 [2024-11-17 03:01:28.382506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.046 [2024-11-17 03:01:28.397819] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.046 [2024-11-17 03:01:28.397859] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.046 [2024-11-17 03:01:28.413051] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.046 [2024-11-17 03:01:28.413090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.046 [2024-11-17 03:01:28.428198] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.046 [2024-11-17 03:01:28.428231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.046 [2024-11-17 03:01:28.443841] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.046 [2024-11-17 03:01:28.443880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.046 [2024-11-17 03:01:28.458666] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.046 [2024-11-17 03:01:28.458705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.046 [2024-11-17 03:01:28.473491] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.046 [2024-11-17 03:01:28.473531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.046 [2024-11-17 03:01:28.489215] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.046 [2024-11-17 03:01:28.489248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.046 [2024-11-17 03:01:28.505070] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.046 
[2024-11-17 03:01:28.505124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.304 [2024-11-17 03:01:28.520675] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.304 [2024-11-17 03:01:28.520715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.304 [2024-11-17 03:01:28.535569] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.304 [2024-11-17 03:01:28.535621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.304 [2024-11-17 03:01:28.550045] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.304 [2024-11-17 03:01:28.550084] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.304 [2024-11-17 03:01:28.564858] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.304 [2024-11-17 03:01:28.564898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.304 [2024-11-17 03:01:28.579924] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.304 [2024-11-17 03:01:28.579964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.304 [2024-11-17 03:01:28.594469] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.304 [2024-11-17 03:01:28.594508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.304 [2024-11-17 03:01:28.609915] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.304 [2024-11-17 03:01:28.609956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.304 [2024-11-17 03:01:28.625236] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.304 [2024-11-17 03:01:28.625269] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.304 [2024-11-17 03:01:28.640725] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.304 [2024-11-17 03:01:28.640764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.304 8466.25 IOPS, 66.14 MiB/s [2024-11-17T02:01:28.764Z] [2024-11-17 03:01:28.655472] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.304 [2024-11-17 03:01:28.655511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.304 [2024-11-17 03:01:28.670080] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.304 [2024-11-17 03:01:28.670146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.304 [2024-11-17 03:01:28.685943] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.304 [2024-11-17 03:01:28.685982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.304 [2024-11-17 03:01:28.700503] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.304 [2024-11-17 03:01:28.700542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.305 [2024-11-17 03:01:28.714585] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.305 [2024-11-17 03:01:28.714624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.305 [2024-11-17 03:01:28.730797] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.305 [2024-11-17 03:01:28.730837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.305 [2024-11-17 03:01:28.746162] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.305 [2024-11-17 03:01:28.746196] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.305 [2024-11-17 03:01:28.762839] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.305 [2024-11-17 03:01:28.762880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.563 [2024-11-17 03:01:28.778962] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.563 [2024-11-17 03:01:28.779002] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.563 [2024-11-17 03:01:28.793864] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.563 [2024-11-17 03:01:28.793903] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.563 [2024-11-17 03:01:28.809112] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.563 [2024-11-17 03:01:28.809166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.563 [2024-11-17 03:01:28.824634] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.563 [2024-11-17 03:01:28.824687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.563 [2024-11-17 03:01:28.839876] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.563 [2024-11-17 03:01:28.839915] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.563 [2024-11-17 03:01:28.854887] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.563 [2024-11-17 03:01:28.854926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.563 [2024-11-17 03:01:28.870418] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.563 [2024-11-17 03:01:28.870470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:41:20.563 [2024-11-17 03:01:28.886635] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.563 [2024-11-17 03:01:28.886675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.563 [2024-11-17 03:01:28.902335] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.563 [2024-11-17 03:01:28.902368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.563 [2024-11-17 03:01:28.917856] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.563 [2024-11-17 03:01:28.917896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.563 [2024-11-17 03:01:28.933776] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.563 [2024-11-17 03:01:28.933815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.563 [2024-11-17 03:01:28.949219] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.563 [2024-11-17 03:01:28.949253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.563 [2024-11-17 03:01:28.964296] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.563 [2024-11-17 03:01:28.964330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.563 [2024-11-17 03:01:28.979205] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.563 [2024-11-17 03:01:28.979240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.563 [2024-11-17 03:01:28.994076] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.563 [2024-11-17 03:01:28.994156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.563 [2024-11-17 03:01:29.008646] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.563 [2024-11-17 03:01:29.008685] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.563 [2024-11-17 03:01:29.022721] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.563 [2024-11-17 03:01:29.022763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.822 [2024-11-17 03:01:29.039543] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.822 [2024-11-17 03:01:29.039582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.822 [2024-11-17 03:01:29.054340] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.822 [2024-11-17 03:01:29.054388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.822 [2024-11-17 03:01:29.069189] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.822 [2024-11-17 03:01:29.069224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.822 [2024-11-17 03:01:29.083950] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.822 [2024-11-17 03:01:29.083991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.822 [2024-11-17 03:01:29.100575] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.822 [2024-11-17 03:01:29.100617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.822 [2024-11-17 03:01:29.116976] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.822 [2024-11-17 03:01:29.117028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.822 [2024-11-17 03:01:29.133003] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:41:20.822 [2024-11-17 03:01:29.133044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.822 [2024-11-17 03:01:29.148924] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.822 [2024-11-17 03:01:29.148961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.822 [2024-11-17 03:01:29.163296] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.822 [2024-11-17 03:01:29.163331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.822 [2024-11-17 03:01:29.177280] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.822 [2024-11-17 03:01:29.177315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.823 [2024-11-17 03:01:29.191994] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.823 [2024-11-17 03:01:29.192026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.823 [2024-11-17 03:01:29.211508] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.823 [2024-11-17 03:01:29.211557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.823 [2024-11-17 03:01:29.223987] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.823 [2024-11-17 03:01:29.224020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.823 [2024-11-17 03:01:29.239072] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.823 [2024-11-17 03:01:29.239131] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.823 [2024-11-17 03:01:29.253202] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.823 
[2024-11-17 03:01:29.253238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.823 [2024-11-17 03:01:29.267139] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.823 [2024-11-17 03:01:29.267191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.823 [2024-11-17 03:01:29.281686] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.823 [2024-11-17 03:01:29.281720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.080 [2024-11-17 03:01:29.296077] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.080 [2024-11-17 03:01:29.296135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.080 [2024-11-17 03:01:29.310008] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.080 [2024-11-17 03:01:29.310053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.080 [2024-11-17 03:01:29.325173] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.080 [2024-11-17 03:01:29.325210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.080 [2024-11-17 03:01:29.339903] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.080 [2024-11-17 03:01:29.339937] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.080 [2024-11-17 03:01:29.358950] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.080 [2024-11-17 03:01:29.358986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.080 [2024-11-17 03:01:29.371179] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.080 [2024-11-17 03:01:29.371214] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.080 [2024-11-17 03:01:29.386617] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.080 [2024-11-17 03:01:29.386651] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.080 [2024-11-17 03:01:29.400013] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.080 [2024-11-17 03:01:29.400047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.080 [2024-11-17 03:01:29.418667] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.080 [2024-11-17 03:01:29.418701] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.080 [2024-11-17 03:01:29.430948] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.080 [2024-11-17 03:01:29.430981] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.080 [2024-11-17 03:01:29.446732] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.080 [2024-11-17 03:01:29.446766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.080 [2024-11-17 03:01:29.460546] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.080 [2024-11-17 03:01:29.460579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.080 [2024-11-17 03:01:29.475094] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.080 [2024-11-17 03:01:29.475140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.080 [2024-11-17 03:01:29.489506] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.080 [2024-11-17 03:01:29.489539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:41:21.080 [2024-11-17 03:01:29.503631] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.080 [2024-11-17 03:01:29.503665] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.080 [2024-11-17 03:01:29.518418] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.080 [2024-11-17 03:01:29.518451] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.080 [2024-11-17 03:01:29.531756] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.080 [2024-11-17 03:01:29.531788] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.338 [2024-11-17 03:01:29.547007] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.338 [2024-11-17 03:01:29.547056] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.338 [2024-11-17 03:01:29.561178] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.338 [2024-11-17 03:01:29.561214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.338 [2024-11-17 03:01:29.575680] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.338 [2024-11-17 03:01:29.575714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.338 [2024-11-17 03:01:29.590243] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.338 [2024-11-17 03:01:29.590281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.338 [2024-11-17 03:01:29.604540] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.338 [2024-11-17 03:01:29.604588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.338 [2024-11-17 03:01:29.623367] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.338 [2024-11-17 03:01:29.623421] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.338 [2024-11-17 03:01:29.635465] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.338 [2024-11-17 03:01:29.635497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.338 [2024-11-17 03:01:29.651274] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.338 [2024-11-17 03:01:29.651310] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.338 8487.80 IOPS, 66.31 MiB/s [2024-11-17T02:01:29.798Z] [2024-11-17 03:01:29.663199] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.338 [2024-11-17 03:01:29.663238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.338 00:41:21.338 Latency(us) 00:41:21.338 [2024-11-17T02:01:29.798Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:21.338 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:41:21.338 Nvme1n1 : 5.01 8492.03 66.34 0.00 0.00 15050.79 6092.42 27962.03 00:41:21.338 [2024-11-17T02:01:29.798Z] =================================================================================================================== 00:41:21.338 [2024-11-17T02:01:29.798Z] Total : 8492.03 66.34 0.00 0.00 15050.79 6092.42 27962.03 00:41:21.338 [2024-11-17 03:01:29.667406] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.338 [2024-11-17 03:01:29.667435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.338 [2024-11-17 03:01:29.675405] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.338 [2024-11-17 03:01:29.675436] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.338 [2024-11-17 03:01:29.683380] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.338 [2024-11-17 03:01:29.683427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.338 [2024-11-17 03:01:29.691400] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.338 [2024-11-17 03:01:29.691429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.338 [2024-11-17 03:01:29.699398] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.338 [2024-11-17 03:01:29.699428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.338 [2024-11-17 03:01:29.707366] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.338 [2024-11-17 03:01:29.707410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.338 [2024-11-17 03:01:29.715561] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.338 [2024-11-17 03:01:29.715625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.338 [2024-11-17 03:01:29.723520] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.338 [2024-11-17 03:01:29.723586] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.338 [2024-11-17 03:01:29.731482] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.338 [2024-11-17 03:01:29.731514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.338 [2024-11-17 03:01:29.739395] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.338 [2024-11-17 03:01:29.739423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:41:22.116 [2024-11-17 03:01:30.407401] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.116 [2024-11-17 03:01:30.407429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.116 [2024-11-17 03:01:30.415372] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.116 [2024-11-17 03:01:30.415414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.116 [2024-11-17 03:01:30.423378] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.116 [2024-11-17 03:01:30.423421] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.116 [2024-11-17 03:01:30.431372] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.116 [2024-11-17 03:01:30.431415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.116 [2024-11-17 03:01:30.439400] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.116 [2024-11-17 03:01:30.439428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.116 [2024-11-17 03:01:30.447408] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.116 [2024-11-17 03:01:30.447435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.116 [2024-11-17 03:01:30.455374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.116 [2024-11-17 03:01:30.455418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.116 [2024-11-17 03:01:30.463409] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.116 [2024-11-17 03:01:30.463463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.116 [2024-11-17 03:01:30.471399] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.116 [2024-11-17 03:01:30.471427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.116 [2024-11-17 03:01:30.479373] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.116 [2024-11-17 03:01:30.479416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.116 [2024-11-17 03:01:30.487430] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.117 [2024-11-17 03:01:30.487478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.117 [2024-11-17 03:01:30.495368] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.117 [2024-11-17 03:01:30.495397] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.117 [2024-11-17 03:01:30.503403] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.117 [2024-11-17 03:01:30.503431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.117 [2024-11-17 03:01:30.511398] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.117 [2024-11-17 03:01:30.511426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.117 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3182860) - No such process 00:41:22.117 03:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3182860 00:41:22.117 03:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:41:22.117 03:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:22.117 03:01:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:22.117 03:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:22.117 03:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:41:22.117 03:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:22.117 03:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:22.117 delay0 00:41:22.117 03:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:22.117 03:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:41:22.117 03:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:22.117 03:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:22.117 03:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:22.117 03:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:41:22.375 [2024-11-17 03:01:30.657977] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:41:30.485 Initializing NVMe Controllers 00:41:30.485 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:41:30.485 Associating TCP (addr:10.0.0.2 
subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:41:30.485 Initialization complete. Launching workers. 00:41:30.485 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 229, failed: 15557 00:41:30.486 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 15652, failed to submit 134 00:41:30.486 success 15593, unsuccessful 59, failed 0 00:41:30.486 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:41:30.486 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:41:30.486 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:30.486 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:41:30.486 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:30.486 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:41:30.486 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:30.486 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:30.486 rmmod nvme_tcp 00:41:30.486 rmmod nvme_fabrics 00:41:30.486 rmmod nvme_keyring 00:41:30.486 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:30.486 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:41:30.486 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:41:30.486 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 3181288 ']' 00:41:30.486 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 
3181288 00:41:30.486 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 3181288 ']' 00:41:30.486 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 3181288 00:41:30.486 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:41:30.486 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:30.486 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3181288 00:41:30.486 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:41:30.486 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:41:30.486 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3181288' 00:41:30.486 killing process with pid 3181288 00:41:30.486 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 3181288 00:41:30.486 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 3181288 00:41:30.744 03:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:30.744 03:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:30.744 03:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:30.744 03:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:41:30.744 03:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:41:30.744 03:01:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:30.744 03:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:41:30.744 03:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:30.744 03:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:30.744 03:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:30.744 03:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:30.744 03:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:33.276 00:41:33.276 real 0m32.783s 00:41:33.276 user 0m47.305s 00:41:33.276 sys 0m10.005s 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:33.276 ************************************ 00:41:33.276 END TEST nvmf_zcopy 00:41:33.276 ************************************ 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:33.276 03:01:41 
nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:33.276 ************************************ 00:41:33.276 START TEST nvmf_nmic 00:41:33.276 ************************************ 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:41:33.276 * Looking for test storage... 00:41:33.276 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
scripts/common.sh@338 -- # local 'op=<' 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
scripts/common.sh@366 -- # ver2[v]=2 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:41:33.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:33.276 --rc genhtml_branch_coverage=1 00:41:33.276 --rc genhtml_function_coverage=1 00:41:33.276 --rc genhtml_legend=1 00:41:33.276 --rc geninfo_all_blocks=1 00:41:33.276 --rc geninfo_unexecuted_blocks=1 00:41:33.276 00:41:33.276 ' 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:41:33.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:33.276 --rc genhtml_branch_coverage=1 00:41:33.276 --rc genhtml_function_coverage=1 00:41:33.276 --rc genhtml_legend=1 00:41:33.276 --rc geninfo_all_blocks=1 00:41:33.276 --rc geninfo_unexecuted_blocks=1 00:41:33.276 00:41:33.276 ' 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:41:33.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:33.276 --rc genhtml_branch_coverage=1 00:41:33.276 --rc genhtml_function_coverage=1 00:41:33.276 --rc genhtml_legend=1 00:41:33.276 --rc geninfo_all_blocks=1 00:41:33.276 --rc geninfo_unexecuted_blocks=1 00:41:33.276 00:41:33.276 ' 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@1707 -- # LCOV='lcov 00:41:33.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:33.276 --rc genhtml_branch_coverage=1 00:41:33.276 --rc genhtml_function_coverage=1 00:41:33.276 --rc genhtml_legend=1 00:41:33.276 --rc geninfo_all_blocks=1 00:41:33.276 --rc geninfo_unexecuted_blocks=1 00:41:33.276 00:41:33.276 ' 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:33.276 03:01:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 
00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:41:33.276 03:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:35.181 03:01:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:35.181 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:41:35.181 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:35.181 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:35.181 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:35.181 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:35.181 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:35.181 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:41:35.181 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:35.181 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:41:35.182 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:41:35.182 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:41:35.182 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:41:35.182 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:41:35.182 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:41:35.182 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:35.182 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:35.182 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:35.182 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:35.182 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:35.182 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:35.182 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:35.182 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:35.182 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:35.182 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:35.182 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:35.182 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:35.182 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:35.182 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:35.182 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:35.182 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:35.182 03:01:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:35.182 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:35.182 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:35.182 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:41:35.182 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:41:35.182 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:35.182 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:35.182 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:35.182 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:35.182 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:35.182 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:35.182 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:41:35.182 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:41:35.182 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:35.182 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:35.182 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:35.182 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:35.182 03:01:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:35.182 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:35.182 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:35.182 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:35.182 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:35.182 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:35.182 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:35.182 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:35.182 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:35.182 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:35.182 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:35.182 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:41:35.182 Found net devices under 0000:0a:00.0: cvl_0_0 00:41:35.182 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:35.182 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:35.182 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:35.182 03:01:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:35.182 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:35.182 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:35.182 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:35.182 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:35.182 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:41:35.182 Found net devices under 0000:0a:00.1: cvl_0_1 00:41:35.182 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:35.182 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:35.182 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:41:35.182 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:35.182 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:35.182 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:35.182 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:35.182 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:35.182 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:35.182 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:35.182 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:35.182 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:35.182 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:35.182 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:35.182 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:35.182 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:35.182 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:35.182 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:35.182 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:35.182 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:35.182 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:35.182 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:35.182 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:35.182 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:35.182 03:01:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:35.182 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:35.182 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:35.182 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:35.182 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:35.182 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:35.182 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.302 ms 00:41:35.182 00:41:35.182 --- 10.0.0.2 ping statistics --- 00:41:35.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:35.182 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:41:35.182 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:35.182 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:35.182 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:41:35.182 00:41:35.182 --- 10.0.0.1 ping statistics --- 00:41:35.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:35.182 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:41:35.182 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:35.182 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:41:35.182 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:35.182 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:35.183 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:35.183 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:35.183 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:35.183 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:35.183 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:35.442 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:41:35.442 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:35.442 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:35.442 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:35.442 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=3186513 
00:41:35.442 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:41:35.442 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 3186513 00:41:35.442 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 3186513 ']' 00:41:35.442 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:35.442 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:35.442 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:35.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:35.442 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:35.442 03:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:35.442 [2024-11-17 03:01:43.757948] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:35.442 [2024-11-17 03:01:43.761047] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:41:35.442 [2024-11-17 03:01:43.761178] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:35.700 [2024-11-17 03:01:43.933941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:35.700 [2024-11-17 03:01:44.076148] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:35.700 [2024-11-17 03:01:44.076228] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:35.700 [2024-11-17 03:01:44.076257] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:35.700 [2024-11-17 03:01:44.076279] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:35.700 [2024-11-17 03:01:44.076301] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:35.700 [2024-11-17 03:01:44.079047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:35.700 [2024-11-17 03:01:44.079129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:41:35.700 [2024-11-17 03:01:44.079222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:35.700 [2024-11-17 03:01:44.079247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:41:36.267 [2024-11-17 03:01:44.441479] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:41:36.267 [2024-11-17 03:01:44.454393] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:41:36.267 [2024-11-17 03:01:44.454534] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:41:36.267 [2024-11-17 03:01:44.455322] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:36.267 [2024-11-17 03:01:44.455650] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:41:36.526 03:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:36.526 03:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:41:36.526 03:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:36.526 03:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:36.526 03:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:36.526 03:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:36.526 03:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:36.526 03:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:36.526 03:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:36.526 [2024-11-17 03:01:44.784286] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:36.526 03:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:36.526 03:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:41:36.526 03:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:41:36.527 03:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:36.527 Malloc0 00:41:36.527 03:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:36.527 03:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:41:36.527 03:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:36.527 03:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:36.527 03:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:36.527 03:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:41:36.527 03:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:36.527 03:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:36.527 03:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:36.527 03:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:36.527 03:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:36.527 03:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:36.527 [2024-11-17 03:01:44.900527] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:36.527 03:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:36.527 03:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:41:36.527 test case1: single bdev can't be used in multiple subsystems 00:41:36.527 03:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:41:36.527 03:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:36.527 03:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:36.527 03:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:36.527 03:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:41:36.527 03:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:36.527 03:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:36.527 03:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:36.527 03:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:41:36.527 03:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:41:36.527 03:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:36.527 03:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:36.527 [2024-11-17 03:01:44.924143] bdev.c:8198:bdev_open: *ERROR*: bdev Malloc0 
already claimed: type exclusive_write by module NVMe-oF Target 00:41:36.527 [2024-11-17 03:01:44.924192] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:41:36.527 [2024-11-17 03:01:44.924223] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:36.527 request: 00:41:36.527 { 00:41:36.527 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:41:36.527 "namespace": { 00:41:36.527 "bdev_name": "Malloc0", 00:41:36.527 "no_auto_visible": false 00:41:36.527 }, 00:41:36.527 "method": "nvmf_subsystem_add_ns", 00:41:36.527 "req_id": 1 00:41:36.527 } 00:41:36.527 Got JSON-RPC error response 00:41:36.527 response: 00:41:36.527 { 00:41:36.527 "code": -32602, 00:41:36.527 "message": "Invalid parameters" 00:41:36.527 } 00:41:36.527 03:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:41:36.527 03:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:41:36.527 03:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:41:36.527 03:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:41:36.527 Adding namespace failed - expected result. 
00:41:36.527 03:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:41:36.527 test case2: host connect to nvmf target in multiple paths 00:41:36.527 03:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:41:36.527 03:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:36.527 03:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:36.527 [2024-11-17 03:01:44.932254] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:41:36.527 03:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:36.527 03:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:41:36.785 03:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:41:37.043 03:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:41:37.043 03:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:41:37.043 03:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:41:37.043 03:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:41:37.043 03:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:41:38.941 03:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:41:38.941 03:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:41:38.942 03:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:41:38.942 03:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:41:38.942 03:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:41:38.942 03:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:41:38.942 03:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:41:38.942 [global] 00:41:38.942 thread=1 00:41:38.942 invalidate=1 00:41:38.942 rw=write 00:41:38.942 time_based=1 00:41:38.942 runtime=1 00:41:38.942 ioengine=libaio 00:41:38.942 direct=1 00:41:38.942 bs=4096 00:41:38.942 iodepth=1 00:41:38.942 norandommap=0 00:41:38.942 numjobs=1 00:41:38.942 00:41:38.942 verify_dump=1 00:41:38.942 verify_backlog=512 00:41:38.942 verify_state_save=0 00:41:38.942 do_verify=1 00:41:38.942 verify=crc32c-intel 00:41:38.942 [job0] 00:41:38.942 filename=/dev/nvme0n1 00:41:38.942 Could not set queue depth (nvme0n1) 00:41:39.200 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:39.200 fio-3.35 00:41:39.200 Starting 1 thread 00:41:40.573 00:41:40.573 job0: (groupid=0, jobs=1): err= 0: pid=3187021: Sun Nov 17 
03:01:48 2024 00:41:40.573 read: IOPS=21, BW=85.2KiB/s (87.2kB/s)(88.0KiB/1033msec) 00:41:40.573 slat (nsec): min=9252, max=37313, avg=26604.05, stdev=8659.55 00:41:40.573 clat (usec): min=40520, max=42020, avg=41304.73, stdev=525.32 00:41:40.573 lat (usec): min=40529, max=42037, avg=41331.33, stdev=526.39 00:41:40.573 clat percentiles (usec): 00:41:40.573 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:41:40.573 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:41:40.573 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:41:40.573 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:41:40.573 | 99.99th=[42206] 00:41:40.573 write: IOPS=495, BW=1983KiB/s (2030kB/s)(2048KiB/1033msec); 0 zone resets 00:41:40.573 slat (nsec): min=7416, max=50976, avg=13228.45, stdev=6619.23 00:41:40.573 clat (usec): min=184, max=459, avg=224.51, stdev=28.14 00:41:40.573 lat (usec): min=192, max=489, avg=237.74, stdev=31.34 00:41:40.573 clat percentiles (usec): 00:41:40.573 | 1.00th=[ 186], 5.00th=[ 190], 10.00th=[ 192], 20.00th=[ 198], 00:41:40.573 | 30.00th=[ 204], 40.00th=[ 212], 50.00th=[ 225], 60.00th=[ 233], 00:41:40.573 | 70.00th=[ 239], 80.00th=[ 249], 90.00th=[ 262], 95.00th=[ 269], 00:41:40.573 | 99.00th=[ 281], 99.50th=[ 297], 99.90th=[ 461], 99.95th=[ 461], 00:41:40.573 | 99.99th=[ 461] 00:41:40.573 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:41:40.573 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:41:40.573 lat (usec) : 250=78.46%, 500=17.42% 00:41:40.573 lat (msec) : 50=4.12% 00:41:40.573 cpu : usr=0.58%, sys=0.78%, ctx=534, majf=0, minf=1 00:41:40.573 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:40.573 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:40.573 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:40.573 issued rwts: 
total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:40.573 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:40.573 00:41:40.573 Run status group 0 (all jobs): 00:41:40.573 READ: bw=85.2KiB/s (87.2kB/s), 85.2KiB/s-85.2KiB/s (87.2kB/s-87.2kB/s), io=88.0KiB (90.1kB), run=1033-1033msec 00:41:40.573 WRITE: bw=1983KiB/s (2030kB/s), 1983KiB/s-1983KiB/s (2030kB/s-2030kB/s), io=2048KiB (2097kB), run=1033-1033msec 00:41:40.573 00:41:40.573 Disk stats (read/write): 00:41:40.573 nvme0n1: ios=68/512, merge=0/0, ticks=772/105, in_queue=877, util=91.78% 00:41:40.573 03:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:41:40.832 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:41:40.832 03:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:41:40.832 03:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:41:40.832 03:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:41:40.832 03:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:40.832 03:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:41:40.832 03:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:40.832 03:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:41:40.832 03:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:41:40.832 03:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:41:40.832 03:01:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:40.832 03:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:41:40.832 03:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:40.832 03:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:41:40.832 03:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:40.832 03:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:40.832 rmmod nvme_tcp 00:41:40.832 rmmod nvme_fabrics 00:41:40.832 rmmod nvme_keyring 00:41:40.832 03:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:40.832 03:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:41:40.832 03:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:41:40.832 03:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 3186513 ']' 00:41:40.832 03:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 3186513 00:41:40.832 03:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 3186513 ']' 00:41:40.832 03:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 3186513 00:41:40.832 03:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:41:40.832 03:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:40.832 03:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3186513 
00:41:40.832 03:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:40.832 03:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:40.832 03:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3186513' 00:41:40.832 killing process with pid 3186513 00:41:40.832 03:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 3186513 00:41:40.832 03:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 3186513 00:41:42.206 03:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:42.206 03:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:42.206 03:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:42.206 03:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:41:42.206 03:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:41:42.206 03:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:42.206 03:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:41:42.206 03:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:42.206 03:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:42.206 03:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:42.206 03:01:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:42.206 03:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:44.739 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:44.739 00:41:44.739 real 0m11.356s 00:41:44.739 user 0m19.531s 00:41:44.739 sys 0m3.672s 00:41:44.739 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:44.739 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:44.739 ************************************ 00:41:44.739 END TEST nvmf_nmic 00:41:44.739 ************************************ 00:41:44.739 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:41:44.739 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:41:44.739 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:44.739 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:44.739 ************************************ 00:41:44.739 START TEST nvmf_fio_target 00:41:44.739 ************************************ 00:41:44.739 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:41:44.739 * Looking for test storage... 
00:41:44.739 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:44.739 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:41:44.739 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:41:44.739 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:41:44.739 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:41:44.739 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:44.739 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:44.739 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:44.739 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:41:44.739 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:41:44.739 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:41:44.739 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:41:44.739 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:41:44.739 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:41:44.739 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:41:44.739 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:41:44.739 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:41:44.739 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:41:44.739 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:44.739 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:41:44.739 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:41:44.739 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:41:44.739 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:44.739 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:41:44.739 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:41:44.739 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:41:44.739 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:41:44.739 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:44.739 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:41:44.740 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:41:44.740 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:44.740 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:44.740 
03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:41:44.740 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:44.740 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:41:44.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:44.740 --rc genhtml_branch_coverage=1 00:41:44.740 --rc genhtml_function_coverage=1 00:41:44.740 --rc genhtml_legend=1 00:41:44.740 --rc geninfo_all_blocks=1 00:41:44.740 --rc geninfo_unexecuted_blocks=1 00:41:44.740 00:41:44.740 ' 00:41:44.740 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:41:44.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:44.740 --rc genhtml_branch_coverage=1 00:41:44.740 --rc genhtml_function_coverage=1 00:41:44.740 --rc genhtml_legend=1 00:41:44.740 --rc geninfo_all_blocks=1 00:41:44.740 --rc geninfo_unexecuted_blocks=1 00:41:44.740 00:41:44.740 ' 00:41:44.740 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:41:44.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:44.740 --rc genhtml_branch_coverage=1 00:41:44.740 --rc genhtml_function_coverage=1 00:41:44.740 --rc genhtml_legend=1 00:41:44.740 --rc geninfo_all_blocks=1 00:41:44.740 --rc geninfo_unexecuted_blocks=1 00:41:44.740 00:41:44.740 ' 00:41:44.740 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:41:44.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:44.740 --rc genhtml_branch_coverage=1 00:41:44.740 --rc genhtml_function_coverage=1 00:41:44.740 --rc genhtml_legend=1 00:41:44.740 --rc geninfo_all_blocks=1 
00:41:44.740 --rc geninfo_unexecuted_blocks=1 00:41:44.740 00:41:44.740 ' 00:41:44.740 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:44.740 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:41:44.740 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:44.740 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:44.740 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:44.740 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:44.740 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:44.740 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:44.740 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:44.740 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:44.740 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:44.740 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:44.740 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:41:44.740 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:41:44.740 
03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:44.740 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:44.740 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:44.740 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:44.740 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:44.740 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:41:44.740 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:44.740 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:44.740 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:44.740 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:44.740 03:01:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:44.740 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:44.740 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:41:44.740 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:44.740 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:41:44.740 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:44.740 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:44.740 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:44.740 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:44.740 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:44.740 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:44.740 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:44.740 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:44.740 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:44.740 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:44.740 
03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:41:44.740 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:41:44.740 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:44.740 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:41:44.740 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:44.740 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:44.740 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:44.740 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:44.740 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:44.740 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:44.740 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:44.740 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:44.740 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:44.740 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:44.740 03:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:41:44.740 03:01:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:41:46.642 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:46.642 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:41:46.642 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:46.642 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:46.642 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:46.642 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:46.642 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:46.642 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:41:46.642 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:46.642 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:41:46.642 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:41:46.642 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:41:46.642 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:41:46.642 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:41:46.642 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:41:46.642 03:01:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:46.642 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:46.642 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:46.642 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:46.642 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:46.642 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:46.642 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:46.642 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:46.642 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:46.642 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:46.642 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:46.642 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:46.642 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:46.642 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:46.642 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:46.642 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:46.642 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:46.642 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:46.642 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:46.642 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:41:46.642 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:41:46.642 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:46.642 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:46.642 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:46.642 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:46.642 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:46.642 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:46.642 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:41:46.642 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:41:46.642 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:46.642 
03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:46.642 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:46.642 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:46.642 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:46.642 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:46.642 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:46.642 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:46.642 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:46.642 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:46.642 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:46.642 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:46.642 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:46.642 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:46.642 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:46.642 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:41:46.642 Found net 
devices under 0000:0a:00.0: cvl_0_0 00:41:46.642 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:46.642 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:46.642 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:46.642 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:46.642 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:46.642 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:46.642 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:46.642 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:46.642 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:41:46.642 Found net devices under 0000:0a:00.1: cvl_0_1 00:41:46.642 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:46.642 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:46.642 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:41:46.642 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:46.642 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:46.642 03:01:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:46.642 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:46.642 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:46.642 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:46.642 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:46.642 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:46.642 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:46.642 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:46.642 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:46.642 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:46.642 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:46.643 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:46.643 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:46.643 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:46.643 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:41:46.643 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:46.643 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:46.643 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:46.643 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:46.643 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:46.643 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:46.643 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:46.643 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:46.643 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:46.643 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:46.643 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.312 ms 00:41:46.643 00:41:46.643 --- 10.0.0.2 ping statistics --- 00:41:46.643 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:46.643 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:41:46.643 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:46.643 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:46.643 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:41:46.643 00:41:46.643 --- 10.0.0.1 ping statistics --- 00:41:46.643 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:46.643 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:41:46.643 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:46.643 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:41:46.643 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:46.643 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:46.643 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:46.643 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:46.643 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:46.643 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:46.643 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:46.643 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:41:46.643 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:46.643 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:46.643 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:41:46.643 03:01:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=3189350 00:41:46.643 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:41:46.643 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 3189350 00:41:46.643 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 3189350 ']' 00:41:46.643 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:46.643 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:46.643 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:46.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:46.643 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:46.643 03:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:41:46.643 [2024-11-17 03:01:54.933715] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:46.643 [2024-11-17 03:01:54.936366] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:41:46.643 [2024-11-17 03:01:54.936477] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:46.643 [2024-11-17 03:01:55.089719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:46.902 [2024-11-17 03:01:55.236082] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:46.902 [2024-11-17 03:01:55.236187] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:46.902 [2024-11-17 03:01:55.236216] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:46.902 [2024-11-17 03:01:55.236237] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:46.902 [2024-11-17 03:01:55.236260] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:46.902 [2024-11-17 03:01:55.239152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:46.902 [2024-11-17 03:01:55.239192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:41:46.902 [2024-11-17 03:01:55.239234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:46.902 [2024-11-17 03:01:55.239245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:41:47.161 [2024-11-17 03:01:55.616441] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:41:47.419 [2024-11-17 03:01:55.636403] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:41:47.419 [2024-11-17 03:01:55.636557] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:41:47.419 [2024-11-17 03:01:55.637355] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:47.419 [2024-11-17 03:01:55.637663] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:41:47.677 03:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:47.677 03:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:41:47.677 03:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:47.677 03:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:47.677 03:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:41:47.677 03:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:47.677 03:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:41:47.935 [2024-11-17 03:01:56.168355] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:47.935 03:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:48.193 03:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:41:48.193 03:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 
00:41:48.451 03:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:41:48.451 03:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:49.018 03:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:41:49.018 03:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:49.276 03:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:41:49.276 03:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:41:49.534 03:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:50.101 03:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:41:50.101 03:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:50.360 03:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:41:50.360 03:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:50.925 03:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
00:41:50.926 03:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
00:41:51.183 03:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:41:51.440 03:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs
00:41:51.440 03:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:41:51.698 03:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs
00:41:51.698 03:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:41:51.955 03:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:41:52.254 [2024-11-17 03:02:00.656655] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:41:52.254 03:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
00:41:52.537 03:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
00:41:52.794 03:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:41:53.053 03:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4
00:41:53.053 03:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0
00:41:53.053 03:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:41:53.053 03:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]]
00:41:53.053 03:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4
00:41:53.053 03:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2
00:41:55.579 03:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:41:55.579 03:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:41:55.579 03:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:41:55.579 03:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4
00:41:55.579 03:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:41:55.579 03:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0
00:41:55.579 03:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:41:55.579 [global]
00:41:55.579 thread=1
00:41:55.579 invalidate=1
00:41:55.579 rw=write
00:41:55.579 time_based=1
00:41:55.579 runtime=1
00:41:55.579 ioengine=libaio
00:41:55.579 direct=1
00:41:55.579 bs=4096
00:41:55.579 iodepth=1
00:41:55.579 norandommap=0
00:41:55.579 numjobs=1
00:41:55.579
00:41:55.579 verify_dump=1
00:41:55.579 verify_backlog=512
00:41:55.579 verify_state_save=0
00:41:55.579 do_verify=1
00:41:55.579 verify=crc32c-intel
00:41:55.579 [job0]
00:41:55.579 filename=/dev/nvme0n1
00:41:55.579 [job1]
00:41:55.579 filename=/dev/nvme0n2
00:41:55.579 [job2]
00:41:55.579 filename=/dev/nvme0n3
00:41:55.579 [job3]
00:41:55.579 filename=/dev/nvme0n4
00:41:55.579 Could not set queue depth (nvme0n1)
00:41:55.579 Could not set queue depth (nvme0n2)
00:41:55.579 Could not set queue depth (nvme0n3)
00:41:55.579 Could not set queue depth (nvme0n4)
00:41:55.579 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:41:55.579 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:41:55.579 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:41:55.580 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:41:55.580 fio-3.35
00:41:55.580 Starting 4 threads
00:41:56.515
00:41:56.515 job0: (groupid=0, jobs=1): err= 0: pid=3190448: Sun Nov 17 03:02:04 2024
00:41:56.515 read: IOPS=20, BW=81.5KiB/s (83.4kB/s)(84.0KiB/1031msec)
00:41:56.515 slat (nsec): min=6635, max=34183, avg=30266.52, stdev=7849.08
00:41:56.515 clat (usec): min=40863, max=41134, avg=40965.73, stdev=51.66
00:41:56.515 lat (usec): min=40897, max=41141, avg=40996.00, stdev=45.86
00:41:56.515 clat percentiles (usec):
00:41:56.515 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157],
00:41:56.515 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:41:56.515 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:41:56.515 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157],
00:41:56.515 | 99.99th=[41157]
00:41:56.515 write: IOPS=496, BW=1986KiB/s (2034kB/s)(2048KiB/1031msec); 0 zone resets
00:41:56.515 slat (nsec): min=6570, max=50826, avg=10844.30, stdev=6279.29
00:41:56.515 clat (usec): min=188, max=791, avg=311.65, stdev=66.90
00:41:56.515 lat (usec): min=196, max=806, avg=322.49, stdev=65.58
00:41:56.515 clat percentiles (usec):
00:41:56.515 | 1.00th=[ 202], 5.00th=[ 215], 10.00th=[ 223], 20.00th=[ 265],
00:41:56.515 | 30.00th=[ 277], 40.00th=[ 285], 50.00th=[ 289], 60.00th=[ 314],
00:41:56.515 | 70.00th=[ 367], 80.00th=[ 379], 90.00th=[ 396], 95.00th=[ 408],
00:41:56.515 | 99.00th=[ 429], 99.50th=[ 453], 99.90th=[ 791], 99.95th=[ 791],
00:41:56.515 | 99.99th=[ 791]
00:41:56.515 bw ( KiB/s): min= 4096, max= 4096, per=31.73%, avg=4096.00, stdev= 0.00, samples=1
00:41:56.515 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:41:56.515 lat (usec) : 250=16.70%, 500=79.17%, 1000=0.19%
00:41:56.515 lat (msec) : 50=3.94%
00:41:56.515 cpu : usr=0.10%, sys=0.78%, ctx=534, majf=0, minf=1
00:41:56.515 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:41:56.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:56.515 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:56.515 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:41:56.515 latency : target=0, window=0, percentile=100.00%, depth=1
00:41:56.515 job1: (groupid=0, jobs=1): err= 0: pid=3190464: Sun Nov 17 03:02:04 2024
00:41:56.515 read: IOPS=18, BW=75.2KiB/s (77.1kB/s)(76.0KiB/1010msec)
00:41:56.515 slat (nsec): min=15702, max=35513, avg=31596.84, stdev=7253.71
00:41:56.515 clat (usec): min=40879, max=42270, avg=41340.12, stdev=524.11
00:41:56.515 lat (usec): min=40914, max=42286, avg=41371.72, stdev=522.21
00:41:56.515 clat percentiles (usec):
00:41:56.515 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157],
00:41:56.515 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:41:56.515 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206],
00:41:56.515 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:41:56.515 | 99.99th=[42206]
00:41:56.515 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets
00:41:56.515 slat (usec): min=6, max=37430, avg=143.84, stdev=1985.67
00:41:56.515 clat (usec): min=175, max=526, avg=288.46, stdev=93.51
00:41:56.515 lat (usec): min=192, max=37867, avg=432.31, stdev=1998.45
00:41:56.515 clat percentiles (usec):
00:41:56.515 | 1.00th=[ 178], 5.00th=[ 186], 10.00th=[ 192], 20.00th=[ 196],
00:41:56.515 | 30.00th=[ 202], 40.00th=[ 212], 50.00th=[ 265], 60.00th=[ 318],
00:41:56.515 | 70.00th=[ 379], 80.00th=[ 388], 90.00th=[ 408], 95.00th=[ 437],
00:41:56.515 | 99.00th=[ 486], 99.50th=[ 494], 99.90th=[ 529], 99.95th=[ 529],
00:41:56.515 | 99.99th=[ 529]
00:41:56.515 bw ( KiB/s): min= 4096, max= 4096, per=31.73%, avg=4096.00, stdev= 0.00, samples=1
00:41:56.515 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:41:56.515 lat (usec) : 250=44.63%, 500=51.41%, 750=0.38%
00:41:56.515 lat (msec) : 50=3.58%
00:41:56.515 cpu : usr=0.50%, sys=0.99%, ctx=536, majf=0, minf=1
00:41:56.515 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:41:56.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:56.515 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:56.515 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:41:56.515 latency : target=0, window=0, percentile=100.00%, depth=1
00:41:56.515 job2: (groupid=0, jobs=1): err= 0: pid=3190499: Sun Nov 17 03:02:04 2024
00:41:56.515 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec)
00:41:56.515 slat (nsec): min=4868, max=69079, avg=17810.34, stdev=6538.42
00:41:56.515 clat (usec): min=260, max=42054, avg=336.48, stdev=1065.87
00:41:56.515 lat (usec): min=278, max=42087, avg=354.29, stdev=1066.31
00:41:56.515 clat percentiles (usec):
00:41:56.515 | 1.00th=[ 273], 5.00th=[ 277], 10.00th=[ 281], 20.00th=[ 285],
00:41:56.515 | 30.00th=[ 285], 40.00th=[ 293], 50.00th=[ 302], 60.00th=[ 310],
00:41:56.515 | 70.00th=[ 314], 80.00th=[ 322], 90.00th=[ 355], 95.00th=[ 367],
00:41:56.515 | 99.00th=[ 494], 99.50th=[ 545], 99.90th=[ 750], 99.95th=[42206],
00:41:56.515 | 99.99th=[42206]
00:41:56.515 write: IOPS=1809, BW=7237KiB/s (7410kB/s)(7244KiB/1001msec); 0 zone resets
00:41:56.515 slat (nsec): min=6510, max=70111, avg=15510.74, stdev=7360.64
00:41:56.515 clat (usec): min=193, max=652, avg=225.85, stdev=35.81
00:41:56.515 lat (usec): min=205, max=660, avg=241.36, stdev=37.74
00:41:56.515 clat percentiles (usec):
00:41:56.515 | 1.00th=[ 200], 5.00th=[ 204], 10.00th=[ 206], 20.00th=[ 210],
00:41:56.515 | 30.00th=[ 212], 40.00th=[ 212], 50.00th=[ 217], 60.00th=[ 221],
00:41:56.515 | 70.00th=[ 227], 80.00th=[ 235], 90.00th=[ 247], 95.00th=[ 277],
00:41:56.515 | 99.00th=[ 379], 99.50th=[ 445], 99.90th=[ 644], 99.95th=[ 652],
00:41:56.515 | 99.99th=[ 652]
00:41:56.515 bw ( KiB/s): min= 8192, max= 8192, per=63.45%, avg=8192.00, stdev= 0.00, samples=1
00:41:56.515 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1
00:41:56.515 lat (usec) : 250=49.36%, 500=50.04%, 750=0.57%
00:41:56.515 lat (msec) : 50=0.03%
00:41:56.515 cpu : usr=2.90%, sys=6.40%, ctx=3348, majf=0, minf=1
00:41:56.515 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:41:56.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:56.515 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:56.515 issued rwts: total=1536,1811,0,0 short=0,0,0,0 dropped=0,0,0,0
00:41:56.515 latency : target=0, window=0, percentile=100.00%, depth=1
00:41:56.515 job3: (groupid=0, jobs=1): err= 0: pid=3190504: Sun Nov 17 03:02:04 2024
00:41:56.515 read: IOPS=20, BW=81.0KiB/s (82.9kB/s)(84.0KiB/1037msec)
00:41:56.515 slat (nsec): min=16050, max=49124, avg=33220.00, stdev=7191.70
00:41:56.515 clat (usec): min=40896, max=41289, avg=40965.96, stdev=77.86
00:41:56.515 lat (usec): min=40932, max=41305, avg=40999.18, stdev=73.51
00:41:56.515 clat percentiles (usec):
00:41:56.515 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157],
00:41:56.515 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:41:56.515 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:41:56.515 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157],
00:41:56.515 | 99.99th=[41157]
00:41:56.515 write: IOPS=493, BW=1975KiB/s (2022kB/s)(2048KiB/1037msec); 0 zone resets
00:41:56.515 slat (nsec): min=6607, max=43186, avg=10989.42, stdev=6171.44
00:41:56.515 clat (usec): min=187, max=440, avg=323.01, stdev=53.82
00:41:56.515 lat (usec): min=199, max=448, avg=334.00, stdev=53.02
00:41:56.515 clat percentiles (usec):
00:41:56.515 | 1.00th=[ 223], 5.00th=[ 255], 10.00th=[ 265], 20.00th=[ 277],
00:41:56.515 | 30.00th=[ 281], 40.00th=[ 289], 50.00th=[ 302], 60.00th=[ 347],
00:41:56.515 | 70.00th=[ 371], 80.00th=[ 383], 90.00th=[ 396], 95.00th=[ 404],
00:41:56.515 | 99.00th=[ 420], 99.50th=[ 433], 99.90th=[ 441], 99.95th=[ 441],
00:41:56.515 | 99.99th=[ 441]
00:41:56.515 bw ( KiB/s): min= 4096, max= 4096, per=31.73%, avg=4096.00, stdev= 0.00, samples=1
00:41:56.515 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:41:56.515 lat (usec) : 250=2.81%, 500=93.25%
00:41:56.515 lat (msec) : 50=3.94%
00:41:56.515 cpu : usr=0.29%, sys=0.58%, ctx=536, majf=0, minf=1
00:41:56.515 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:41:56.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:56.515 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:56.515 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:41:56.515 latency : target=0, window=0, percentile=100.00%, depth=1
00:41:56.515
00:41:56.515 Run status group 0 (all jobs):
00:41:56.515 READ: bw=6160KiB/s (6308kB/s), 75.2KiB/s-6138KiB/s (77.1kB/s-6285kB/s), io=6388KiB (6541kB), run=1001-1037msec
00:41:56.515 WRITE: bw=12.6MiB/s (13.2MB/s), 1975KiB/s-7237KiB/s (2022kB/s-7410kB/s), io=13.1MiB (13.7MB), run=1001-1037msec
00:41:56.515
00:41:56.515 Disk stats (read/write):
00:41:56.515 nvme0n1: ios=44/512, merge=0/0, ticks=1628/159, in_queue=1787, util=97.29%
00:41:56.515 nvme0n2: ios=40/512, merge=0/0, ticks=919/133, in_queue=1052, util=98.78%
00:41:56.515 nvme0n3: ios=1332/1536, merge=0/0, ticks=801/333, in_queue=1134, util=97.69%
00:41:56.515 nvme0n4: ios=73/512, merge=0/0, ticks=1044/161, in_queue=1205, util=97.36%
00:41:56.515 03:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v
00:41:56.515 [global]
00:41:56.515 thread=1
00:41:56.515 invalidate=1
00:41:56.515 rw=randwrite
00:41:56.515 time_based=1
00:41:56.516 runtime=1
00:41:56.516 ioengine=libaio
00:41:56.516 direct=1
00:41:56.516 bs=4096
00:41:56.516 iodepth=1
00:41:56.516 norandommap=0
00:41:56.516 numjobs=1
00:41:56.516
00:41:56.516 verify_dump=1
00:41:56.516 verify_backlog=512
00:41:56.516 verify_state_save=0
00:41:56.516 do_verify=1
00:41:56.516 verify=crc32c-intel
00:41:56.516 [job0]
00:41:56.516 filename=/dev/nvme0n1
00:41:56.516 [job1]
00:41:56.516 filename=/dev/nvme0n2
00:41:56.516 [job2]
00:41:56.516 filename=/dev/nvme0n3
00:41:56.516 [job3]
00:41:56.516 filename=/dev/nvme0n4
00:41:56.773 Could not set queue depth (nvme0n1)
00:41:56.773 Could not set queue depth (nvme0n2)
00:41:56.773 Could not set queue depth (nvme0n3)
00:41:56.773 Could not set queue depth (nvme0n4)
00:41:56.773 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:41:56.773 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:41:56.773 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:41:56.773 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:41:56.773 fio-3.35
00:41:56.773 Starting 4 threads
00:41:58.148
00:41:58.148 job0: (groupid=0, jobs=1): err= 0: pid=3190780: Sun Nov 17 03:02:06 2024
00:41:58.148 read: IOPS=853, BW=3415KiB/s (3497kB/s)(3480KiB/1019msec)
00:41:58.148 slat (nsec): min=4993, max=26284, avg=7802.94, stdev=3258.02
00:41:58.148 clat (usec): min=204, max=41008, avg=870.09, stdev=4733.30
00:41:58.148 lat (usec): min=209, max=41021, avg=877.89, stdev=4733.93
00:41:58.148 clat percentiles (usec):
00:41:58.148 | 1.00th=[ 221], 5.00th=[ 245], 10.00th=[ 251], 20.00th=[ 262],
00:41:58.148 | 30.00th=[ 273], 40.00th=[ 281], 50.00th=[ 289], 60.00th=[ 302],
00:41:58.148 | 70.00th=[ 318], 80.00th=[ 351], 90.00th=[ 416], 95.00th=[ 502],
00:41:58.148 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157],
00:41:58.148 | 99.99th=[41157]
00:41:58.148 write: IOPS=1004, BW=4020KiB/s (4116kB/s)(4096KiB/1019msec); 0 zone resets
00:41:58.148 slat (nsec): min=6034, max=36850, avg=8253.10, stdev=2324.26
00:41:58.148 clat (usec): min=160, max=481, avg=235.79, stdev=70.63
00:41:58.148 lat (usec): min=167, max=518, avg=244.04, stdev=71.28
00:41:58.148 clat percentiles (usec):
00:41:58.148 | 1.00th=[ 163], 5.00th=[ 169], 10.00th=[ 174], 20.00th=[ 178],
00:41:58.148 | 30.00th=[ 182], 40.00th=[ 192], 50.00th=[ 210], 60.00th=[ 229],
00:41:58.148 | 70.00th=[ 245], 80.00th=[ 306], 90.00th=[ 355], 95.00th=[ 388],
00:41:58.148 | 99.00th=[ 404], 99.50th=[ 420], 99.90th=[ 465], 99.95th=[ 482],
00:41:58.148 | 99.99th=[ 482]
00:41:58.148 bw ( KiB/s): min= 2936, max= 5256, per=25.98%, avg=4096.00, stdev=1640.49, samples=2
00:41:58.148 iops : min= 734, max= 1314, avg=1024.00, stdev=410.12, samples=2
00:41:58.148 lat (usec) : 250=42.56%, 500=55.12%, 750=1.69%
00:41:58.148 lat (msec) : 50=0.63%
00:41:58.148 cpu : usr=1.08%, sys=1.28%, ctx=1895, majf=0, minf=1
00:41:58.148 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:41:58.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:58.148 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:58.148 issued rwts: total=870,1024,0,0 short=0,0,0,0 dropped=0,0,0,0
00:41:58.148 latency : target=0, window=0, percentile=100.00%, depth=1
00:41:58.148 job1: (groupid=0, jobs=1): err= 0: pid=3190781: Sun Nov 17 03:02:06 2024
00:41:58.148 read: IOPS=1321, BW=5284KiB/s (5411kB/s)(5464KiB/1034msec)
00:41:58.148 slat (nsec): min=4378, max=35247, avg=7376.15, stdev=3170.46
00:41:58.148 clat (usec): min=202, max=41119, avg=461.69, stdev=2449.88
00:41:58.148 lat (usec): min=207, max=41132, avg=469.06, stdev=2450.08
00:41:58.148 clat percentiles (usec):
00:41:58.148 | 1.00th=[ 251], 5.00th=[ 255], 10.00th=[ 260], 20.00th=[ 269],
00:41:58.148 | 30.00th=[ 277], 40.00th=[ 285], 50.00th=[ 293], 60.00th=[ 302],
00:41:58.148 | 70.00th=[ 314], 80.00th=[ 363], 90.00th=[ 383], 95.00th=[ 478],
00:41:58.148 | 99.00th=[ 529], 99.50th=[ 537], 99.90th=[41157], 99.95th=[41157],
00:41:58.148 | 99.99th=[41157]
00:41:58.148 write: IOPS=1485, BW=5942KiB/s (6085kB/s)(6144KiB/1034msec); 0 zone resets
00:41:58.148 slat (nsec): min=5265, max=35116, avg=8213.64, stdev=2893.18
00:41:58.148 clat (usec): min=171, max=649, avg=242.83, stdev=69.69
00:41:58.148 lat (usec): min=178, max=657, avg=251.04, stdev=70.14
00:41:58.148 clat percentiles (usec):
00:41:58.148 | 1.00th=[ 176], 5.00th=[ 178], 10.00th=[ 180], 20.00th=[ 184],
00:41:58.148 | 30.00th=[ 190], 40.00th=[ 198], 50.00th=[ 210], 60.00th=[ 237],
00:41:58.148 | 70.00th=[ 262], 80.00th=[ 334], 90.00th=[ 359], 95.00th=[ 371],
00:41:58.148 | 99.00th=[ 404], 99.50th=[ 424], 99.90th=[ 449], 99.95th=[ 652],
00:41:58.148 | 99.99th=[ 652]
00:41:58.148 bw ( KiB/s): min= 4096, max= 8192, per=38.96%, avg=6144.00, stdev=2896.31, samples=2
00:41:58.148 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2
00:41:58.148 lat (usec) : 250=35.35%, 500=62.72%, 750=1.72%, 1000=0.03%
00:41:58.148 lat (msec) : 50=0.17%
00:41:58.148 cpu : usr=1.74%, sys=2.71%, ctx=2902, majf=0, minf=1
00:41:58.148 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:41:58.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:58.148 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:58.148 issued rwts: total=1366,1536,0,0 short=0,0,0,0 dropped=0,0,0,0
00:41:58.148 latency : target=0, window=0, percentile=100.00%, depth=1
00:41:58.148 job2: (groupid=0, jobs=1): err= 0: pid=3190782: Sun Nov 17 03:02:06 2024
00:41:58.148 read: IOPS=196, BW=787KiB/s (806kB/s)(788KiB/1001msec)
00:41:58.148 slat (nsec): min=4584, max=21114, avg=9223.97, stdev=3759.26
00:41:58.148 clat (usec): min=242, max=41945, avg=4239.99, stdev=12051.71
00:41:58.148 lat (usec): min=248, max=41960, avg=4249.21, stdev=12052.77
00:41:58.148 clat percentiles (usec):
00:41:58.148 | 1.00th=[ 243], 5.00th=[ 249], 10.00th=[ 251], 20.00th=[ 255],
00:41:58.148 | 30.00th=[ 260], 40.00th=[ 269], 50.00th=[ 293], 60.00th=[ 338],
00:41:58.148 | 70.00th=[ 375], 80.00th=[ 392], 90.00th=[ 611], 95.00th=[41157],
00:41:58.148 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:41:58.148 | 99.99th=[42206]
00:41:58.148 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets
00:41:58.148 slat (nsec): min=7925, max=30979, avg=9736.97, stdev=3417.81
00:41:58.148 clat (usec): min=205, max=469, avg=305.03, stdev=62.28
00:41:58.148 lat (usec): min=214, max=478, avg=314.77, stdev=61.96
00:41:58.148 clat percentiles (usec):
00:41:58.148 | 1.00th=[ 208], 5.00th=[ 212], 10.00th=[ 219], 20.00th=[ 227],
00:41:58.148 | 30.00th=[ 243], 40.00th=[ 297], 50.00th=[ 334], 60.00th=[ 347],
00:41:58.148 | 70.00th=[ 355], 80.00th=[ 359], 90.00th=[ 367], 95.00th=[ 379],
00:41:58.148 | 99.00th=[ 404], 99.50th=[ 420], 99.90th=[ 469], 99.95th=[ 469],
00:41:58.148 | 99.99th=[ 469]
00:41:58.148 bw ( KiB/s): min= 4096, max= 4096, per=25.98%, avg=4096.00, stdev= 0.00, samples=1
00:41:58.148 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:41:58.148 lat (usec) : 250=25.95%, 500=70.66%, 750=0.71%
00:41:58.148 lat (msec) : 50=2.68%
00:41:58.148 cpu : usr=0.20%, sys=1.00%, ctx=710, majf=0, minf=1
00:41:58.148 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:41:58.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:58.148 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:58.148 issued rwts: total=197,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:41:58.148 latency : target=0, window=0, percentile=100.00%, depth=1
00:41:58.148 job3: (groupid=0, jobs=1): err= 0: pid=3190783: Sun Nov 17 03:02:06 2024
00:41:58.148 read: IOPS=508, BW=2033KiB/s (2082kB/s)(2112KiB/1039msec)
00:41:58.148 slat (nsec): min=5934, max=41108, avg=8964.79, stdev=4242.53
00:41:58.148 clat (usec): min=277, max=41451, avg=1462.39, stdev=6272.53
00:41:58.148 lat (usec): min=284, max=41463, avg=1471.35, stdev=6273.14
00:41:58.148 clat percentiles (usec):
00:41:58.148 | 1.00th=[ 285], 5.00th=[ 306], 10.00th=[ 338], 20.00th=[ 388],
00:41:58.148 | 30.00th=[ 416], 40.00th=[ 461], 50.00th=[ 478], 60.00th=[ 498],
00:41:58.148 | 70.00th=[ 515], 80.00th=[ 545], 90.00th=[ 603], 95.00th=[ 619],
00:41:58.148 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681],
00:41:58.148 | 99.99th=[41681]
00:41:58.148 write: IOPS=985, BW=3942KiB/s (4037kB/s)(4096KiB/1039msec); 0 zone resets
00:41:58.148 slat (nsec): min=7358, max=31446, avg=9158.50, stdev=2823.50
00:41:58.148 clat (usec): min=191, max=793, avg=242.21, stdev=41.96
00:41:58.148 lat (usec): min=199, max=802, avg=251.36, stdev=42.50
00:41:58.148 clat percentiles (usec):
00:41:58.148 | 1.00th=[ 198], 5.00th=[ 204], 10.00th=[ 210], 20.00th=[ 217],
00:41:58.148 | 30.00th=[ 221], 40.00th=[ 225], 50.00th=[ 231], 60.00th=[ 239],
00:41:58.148 | 70.00th=[ 247], 80.00th=[ 262], 90.00th=[ 281], 95.00th=[ 314],
00:41:58.148 | 99.00th=[ 396], 99.50th=[ 400], 99.90th=[ 424], 99.95th=[ 791],
00:41:58.148 | 99.99th=[ 791]
00:41:58.148 bw ( KiB/s): min= 4096, max= 4096, per=25.98%, avg=4096.00, stdev= 0.00, samples=2
00:41:58.148 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2
00:41:58.148 lat (usec) : 250=47.49%, 500=38.92%, 750=12.63%, 1000=0.13%
00:41:58.148 lat (msec) : 50=0.84%
00:41:58.148 cpu : usr=0.87%, sys=1.93%, ctx=1553, majf=0, minf=1
00:41:58.148 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:41:58.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:58.148 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:58.148 issued rwts: total=528,1024,0,0 short=0,0,0,0 dropped=0,0,0,0
00:41:58.148 latency : target=0, window=0, percentile=100.00%, depth=1
00:41:58.148
00:41:58.148 Run status group 0 (all jobs):
00:41:58.148 READ: bw=11.1MiB/s (11.7MB/s), 787KiB/s-5284KiB/s (806kB/s-5411kB/s), io=11.6MiB (12.1MB), run=1001-1039msec
00:41:58.148 WRITE: bw=15.4MiB/s (16.1MB/s), 2046KiB/s-5942KiB/s (2095kB/s-6085kB/s), io=16.0MiB (16.8MB), run=1001-1039msec
00:41:58.148
00:41:58.148 Disk stats (read/write):
00:41:58.148 nvme0n1: ios=916/1024, merge=0/0, ticks=619/237, in_queue=856, util=87.37%
00:41:58.148 nvme0n2: ios=1105/1536, merge=0/0, ticks=517/359, in_queue=876, util=91.37%
00:41:58.148 nvme0n3: ios=60/512, merge=0/0, ticks=1612/154, in_queue=1766, util=97.92%
00:41:58.148 nvme0n4: ios=572/1024, merge=0/0, ticks=1304/245, in_queue=1549, util=97.70%
00:41:58.148 03:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v
00:41:58.148 [global]
00:41:58.148 thread=1
00:41:58.148 invalidate=1
00:41:58.148 rw=write
00:41:58.148 time_based=1
00:41:58.148 runtime=1
00:41:58.148 ioengine=libaio
00:41:58.148 direct=1
00:41:58.148 bs=4096
00:41:58.148 iodepth=128
00:41:58.148 norandommap=0
00:41:58.148 numjobs=1
00:41:58.148
00:41:58.148 verify_dump=1
00:41:58.148 verify_backlog=512
00:41:58.148 verify_state_save=0
00:41:58.148 do_verify=1
00:41:58.148 verify=crc32c-intel
00:41:58.148 [job0]
00:41:58.148 filename=/dev/nvme0n1
00:41:58.148 [job1]
00:41:58.148 filename=/dev/nvme0n2
00:41:58.148 [job2]
00:41:58.148 filename=/dev/nvme0n3
00:41:58.148 [job3]
00:41:58.148 filename=/dev/nvme0n4
00:41:58.407 Could not set queue depth (nvme0n1)
00:41:58.407 Could not set queue depth (nvme0n2)
00:41:58.407 Could not set queue depth (nvme0n3)
00:41:58.407 Could not set queue depth (nvme0n4)
00:41:58.407 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:41:58.407 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:41:58.407 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:41:58.407 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:41:58.407 fio-3.35
00:41:58.407 Starting 4 threads
00:41:59.802
00:41:59.802 job0: (groupid=0, jobs=1): err= 0: pid=3191005: Sun Nov 17 03:02:07 2024
00:41:59.802 read: IOPS=3029, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1014msec)
00:41:59.802 slat (usec): min=2, max=12900, avg=143.13, stdev=1019.90
00:41:59.802 clat (usec): min=5730, max=39149, avg=17534.84, stdev=6842.25
00:41:59.802 lat (usec): min=5738, max=39153, avg=17677.97, stdev=6897.51
00:41:59.802 clat percentiles (usec):
00:41:59.802 | 1.00th=[ 6587], 5.00th=[10421], 10.00th=[11863], 20.00th=[13566],
00:41:59.802 | 30.00th=[13829], 40.00th=[13829], 50.00th=[14222], 60.00th=[15664],
00:41:59.802 | 70.00th=[18744], 80.00th=[22938], 90.00th=[27395], 95.00th=[33424],
00:41:59.802 | 99.00th=[38536], 99.50th=[39060], 99.90th=[39060], 99.95th=[39060],
00:41:59.802 | 99.99th=[39060]
00:41:59.802 write: IOPS=3485, BW=13.6MiB/s (14.3MB/s)(13.8MiB/1014msec); 0 zone resets
00:41:59.802 slat (usec): min=2, max=12944, avg=151.58, stdev=712.19
00:41:59.802 clat (usec): min=1613, max=51869, avg=20805.75, stdev=9039.90
00:41:59.802 lat (usec): min=1619, max=51877, avg=20957.33, stdev=9085.96
00:41:59.802 clat percentiles (usec):
00:41:59.802 | 1.00th=[ 4490], 5.00th=[ 7832], 10.00th=[10290], 20.00th=[13435],
00:41:59.802 | 30.00th=[14484], 40.00th=[16188], 50.00th=[19006], 60.00th=[23987],
00:41:59.802 | 70.00th=[26084], 80.00th=[28443], 90.00th=[33817], 95.00th=[37487],
00:41:59.802 | 99.00th=[41681], 99.50th=[46924], 99.90th=[51643], 99.95th=[51643],
00:41:59.802 | 99.99th=[51643]
00:41:59.802 bw ( KiB/s): min=10960, max=16328, per=25.88%, avg=13644.00, stdev=3795.75, samples=2
00:41:59.802 iops : min= 2740, max= 4082, avg=3411.00, stdev=948.94, samples=2
00:41:59.802 lat (msec) : 2=0.17%, 4=0.24%, 10=6.65%, 20=54.18%, 50=38.56%
00:41:59.802 lat (msec) : 100=0.21%
00:41:59.802 cpu : usr=2.47%, sys=4.84%, ctx=377, majf=0, minf=1
00:41:59.802 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0%
00:41:59.802 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:59.802 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:41:59.802 issued rwts: total=3072,3534,0,0 short=0,0,0,0 dropped=0,0,0,0
00:41:59.802 latency : target=0, window=0, percentile=100.00%, depth=128
00:41:59.802 job1: (groupid=0, jobs=1): err= 0: pid=3191007: Sun Nov 17 03:02:07 2024
00:41:59.802 read: IOPS=1870, BW=7483KiB/s (7662kB/s)(7520KiB/1005msec)
00:41:59.802 slat (usec): min=3, max=20324, avg=186.06, stdev=1059.57
00:41:59.802 clat (usec): min=2130, max=52371, avg=21782.95, stdev=8891.36
00:41:59.802 lat (usec): min=6713, max=52432, avg=21969.01, stdev=8915.23
00:41:59.802 clat percentiles (usec):
00:41:59.802 | 1.00th=[ 6849], 5.00th=[13960], 10.00th=[14877], 20.00th=[15795],
00:41:59.802 | 30.00th=[18744], 40.00th=[19006], 50.00th=[19530], 60.00th=[19530],
00:41:59.802 | 70.00th=[21365], 80.00th=[23462], 90.00th=[31589], 95.00th=[46400],
00:41:59.802 | 99.00th=[52167], 99.50th=[52167], 99.90th=[52167], 99.95th=[52167],
00:41:59.802 | 99.99th=[52167]
00:41:59.802 write: IOPS=2037, BW=8151KiB/s (8347kB/s)(8192KiB/1005msec); 0 zone resets
00:41:59.802 slat (usec): min=5, max=56180, avg=308.41, stdev=2305.26
00:41:59.802 clat (msec): min=11, max=219, avg=33.13, stdev=22.22
00:41:59.802 lat (msec): min=12, max=219, avg=33.44, stdev=22.60
00:41:59.802 clat percentiles (msec):
00:41:59.802 | 1.00th=[ 13], 5.00th=[ 16], 10.00th=[ 16], 20.00th=[ 21],
00:41:59.802 | 30.00th=[ 25], 40.00th=[ 26], 50.00th=[ 27], 60.00th=[ 28],
00:41:59.802 | 70.00th=[ 30], 80.00th=[ 37], 90.00th=[ 66], 95.00th=[ 70],
00:41:59.802 | 99.00th=[ 122], 99.50th=[ 153], 99.90th=[ 220], 99.95th=[ 220],
00:41:59.803 | 99.99th=[ 220]
00:41:59.803 bw ( KiB/s): min= 8192, max= 8192, per=15.54%, avg=8192.00, stdev= 0.00, samples=2
00:41:59.803 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2
00:41:59.803 lat (msec) : 4=0.03%, 10=0.81%, 20=37.50%, 50=52.62%, 100=8.22%
00:41:59.803 lat (msec) : 250=0.81%
00:41:59.803 cpu : usr=2.79%, sys=3.98%, ctx=253, majf=0, minf=1
00:41:59.803 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4%
00:41:59.803 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:59.803 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:41:59.803 issued rwts: total=1880,2048,0,0 short=0,0,0,0 dropped=0,0,0,0
00:41:59.803 latency : target=0, window=0, percentile=100.00%, depth=128
00:41:59.803 job2: (groupid=0, jobs=1): err= 0: pid=3191013: Sun Nov 17 03:02:07 2024
00:41:59.803 read: IOPS=3656, BW=14.3MiB/s (15.0MB/s)(14.9MiB/1045msec)
00:41:59.803 slat (usec): min=3, max=13478, avg=119.17, stdev=841.41
00:41:59.803 clat (usec): min=621, max=62015, avg=16439.92, stdev=7703.31
00:41:59.803 lat (usec): min=655, max=62020, avg=16559.09, stdev=7736.95
00:41:59.803 clat percentiles (usec):
00:41:59.803 | 1.00th=[ 2638], 5.00th=[10814], 10.00th=[12125], 20.00th=[13042],
00:41:59.803 | 30.00th=[13304], 40.00th=[13829], 50.00th=[14615], 60.00th=[15139],
00:41:59.803 | 70.00th=[16909], 80.00th=[18744], 90.00th=[20841], 95.00th=[25560],
00:41:59.803 | 99.00th=[54789], 99.50th=[55313], 99.90th=[62129], 99.95th=[62129],
00:41:59.803 | 99.99th=[62129]
00:41:59.803 write: IOPS=3919, BW=15.3MiB/s (16.1MB/s)(16.0MiB/1045msec); 0 zone resets
00:41:59.803 slat (usec): min=3, max=41075, avg=119.55, stdev=927.21
00:41:59.803 clat (usec): min=1135, max=42215, avg=15044.52, stdev=2574.64
00:41:59.803 lat (usec): min=1163, max=64534, avg=15164.07, stdev=2708.82
00:41:59.803 clat percentiles (usec):
00:41:59.803 | 1.00th=[ 7898], 5.00th=[10028], 10.00th=[12125], 20.00th=[13960],
00:41:59.803 | 30.00th=[14615], 40.00th=[15008], 50.00th=[15270], 60.00th=[15401],
00:41:59.803 | 70.00th=[15664], 80.00th=[15926], 90.00th=[16909], 95.00th=[19792],
00:41:59.803 | 99.00th=[21890], 99.50th=[23462], 99.90th=[25297], 99.95th=[28443],
00:41:59.803 | 99.99th=[42206]
00:41:59.803 bw ( KiB/s): min=16384, max=16384, per=31.08%, avg=16384.00, stdev= 0.00, samples=2
00:41:59.803 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2
00:41:59.803 lat (usec) : 750=0.01%
00:41:59.803 lat (msec) : 2=0.03%, 4=0.96%, 10=3.02%, 20=87.31%, 50=7.78%
00:41:59.803 lat (msec) : 100=0.90%
00:41:59.803 cpu : usr=4.79%, sys=8.52%, ctx=439, majf=0, minf=1
00:41:59.803 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2%
00:41:59.803 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:59.803 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:41:59.803 issued rwts: total=3821,4096,0,0 short=0,0,0,0 dropped=0,0,0,0
00:41:59.803 latency : target=0, window=0, percentile=100.00%, depth=128
00:41:59.803 job3: (groupid=0, jobs=1): err= 0: pid=3191014: Sun Nov 17 03:02:07 2024
00:41:59.803 read: IOPS=3910, BW=15.3MiB/s (16.0MB/s)(15.5MiB/1014msec)
00:41:59.803 slat (usec): min=2, max=16106, avg=125.05, stdev=1015.20
00:41:59.803 clat (usec): min=4795, max=47265, avg=16498.10, stdev=4211.72
00:41:59.803 lat (usec): min=4815, max=47274, avg=16623.15, stdev=4288.54
00:41:59.803 clat percentiles (usec):
00:41:59.803 | 1.00th=[ 7898], 5.00th=[10421], 10.00th=[12256], 20.00th=[13698],
00:41:59.803 | 30.00th=[14484], 40.00th=[15008], 50.00th=[15926], 60.00th=[17171],
00:41:59.803 | 70.00th=[17957], 80.00th=[18744], 90.00th=[20841], 95.00th=[26084],
00:41:59.803 | 99.00th=[29754], 99.50th=[32375], 99.90th=[33162], 99.95th=[33162],
00:41:59.803 | 99.99th=[47449]
00:41:59.803 write: IOPS=4039, BW=15.8MiB/s (16.5MB/s)(16.0MiB/1014msec); 0 zone resets
00:41:59.803 slat (usec): min=3, max=16331, avg=109.91, stdev=923.47
00:41:59.803 clat (usec): min=334, max=31986, avg=15429.69, stdev=4279.64
00:41:59.803 lat (usec): min=919, max=32004, avg=15539.60, stdev=4332.63
00:41:59.803 clat percentiles (usec):
00:41:59.803 | 1.00th=[ 6390], 5.00th=[ 9110], 10.00th=[ 9765], 20.00th=[11731],
00:41:59.803 | 30.00th=[13829], 40.00th=[14746], 50.00th=[15533], 60.00th=[15926],
00:41:59.803 | 70.00th=[16581], 80.00th=[17957], 90.00th=[20055], 95.00th=[23200],
00:41:59.803 | 99.00th=[28181], 99.50th=[28967], 99.90th=[30802], 99.95th=[31327], 00:41:59.803 | 99.99th=[32113] 00:41:59.803 bw ( KiB/s): min=16384, max=16384, per=31.08%, avg=16384.00, stdev= 0.00, samples=2 00:41:59.803 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:41:59.803 lat (usec) : 500=0.01%, 1000=0.02% 00:41:59.803 lat (msec) : 2=0.12%, 4=0.20%, 10=7.20%, 20=79.95%, 50=12.49% 00:41:59.803 cpu : usr=2.96%, sys=7.50%, ctx=250, majf=0, minf=1 00:41:59.803 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:41:59.803 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:59.803 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:59.803 issued rwts: total=3965,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:59.803 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:59.803 00:41:59.803 Run status group 0 (all jobs): 00:41:59.803 READ: bw=47.6MiB/s (49.9MB/s), 7483KiB/s-15.3MiB/s (7662kB/s-16.0MB/s), io=49.8MiB (52.2MB), run=1005-1045msec 00:41:59.803 WRITE: bw=51.5MiB/s (54.0MB/s), 8151KiB/s-15.8MiB/s (8347kB/s-16.5MB/s), io=53.8MiB (56.4MB), run=1005-1045msec 00:41:59.803 00:41:59.803 Disk stats (read/write): 00:41:59.803 nvme0n1: ios=2610/3055, merge=0/0, ticks=37290/50521, in_queue=87811, util=90.88% 00:41:59.803 nvme0n2: ios=1570/1791, merge=0/0, ticks=9942/10014, in_queue=19956, util=97.06% 00:41:59.803 nvme0n3: ios=3132/3567, merge=0/0, ticks=33937/35492, in_queue=69429, util=95.52% 00:41:59.803 nvme0n4: ios=3129/3584, merge=0/0, ticks=47767/53548, in_queue=101315, util=96.01% 00:41:59.803 03:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:41:59.803 [global] 00:41:59.803 thread=1 00:41:59.803 invalidate=1 00:41:59.803 rw=randwrite 00:41:59.803 time_based=1 00:41:59.803 runtime=1 00:41:59.803 
ioengine=libaio 00:41:59.803 direct=1 00:41:59.803 bs=4096 00:41:59.803 iodepth=128 00:41:59.803 norandommap=0 00:41:59.803 numjobs=1 00:41:59.803 00:41:59.803 verify_dump=1 00:41:59.803 verify_backlog=512 00:41:59.803 verify_state_save=0 00:41:59.803 do_verify=1 00:41:59.803 verify=crc32c-intel 00:41:59.803 [job0] 00:41:59.803 filename=/dev/nvme0n1 00:41:59.803 [job1] 00:41:59.803 filename=/dev/nvme0n2 00:41:59.803 [job2] 00:41:59.803 filename=/dev/nvme0n3 00:41:59.803 [job3] 00:41:59.803 filename=/dev/nvme0n4 00:41:59.803 Could not set queue depth (nvme0n1) 00:41:59.803 Could not set queue depth (nvme0n2) 00:41:59.803 Could not set queue depth (nvme0n3) 00:41:59.803 Could not set queue depth (nvme0n4) 00:41:59.803 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:59.803 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:59.803 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:59.803 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:59.803 fio-3.35 00:41:59.803 Starting 4 threads 00:42:01.179 00:42:01.179 job0: (groupid=0, jobs=1): err= 0: pid=3191237: Sun Nov 17 03:02:09 2024 00:42:01.179 read: IOPS=4473, BW=17.5MiB/s (18.3MB/s)(17.8MiB/1018msec) 00:42:01.179 slat (usec): min=2, max=10825, avg=106.58, stdev=659.15 00:42:01.179 clat (usec): min=7502, max=53489, avg=14334.50, stdev=4384.57 00:42:01.179 lat (usec): min=7511, max=60213, avg=14441.08, stdev=4418.20 00:42:01.179 clat percentiles (usec): 00:42:01.179 | 1.00th=[ 8717], 5.00th=[10028], 10.00th=[10814], 20.00th=[11863], 00:42:01.179 | 30.00th=[12125], 40.00th=[12387], 50.00th=[12911], 60.00th=[13566], 00:42:01.179 | 70.00th=[14877], 80.00th=[16319], 90.00th=[19530], 95.00th=[21627], 00:42:01.179 | 99.00th=[28443], 99.50th=[28705], 99.90th=[53216], 
99.95th=[53216], 00:42:01.179 | 99.99th=[53740] 00:42:01.179 write: IOPS=5029, BW=19.6MiB/s (20.6MB/s)(20.0MiB/1018msec); 0 zone resets 00:42:01.179 slat (usec): min=3, max=7491, avg=92.88, stdev=498.44 00:42:01.179 clat (usec): min=6886, max=34862, avg=12409.19, stdev=2369.31 00:42:01.179 lat (usec): min=6892, max=34867, avg=12502.08, stdev=2402.49 00:42:01.179 clat percentiles (usec): 00:42:01.179 | 1.00th=[ 7046], 5.00th=[ 8848], 10.00th=[ 9634], 20.00th=[11469], 00:42:01.179 | 30.00th=[11731], 40.00th=[11863], 50.00th=[12125], 60.00th=[12518], 00:42:01.179 | 70.00th=[12911], 80.00th=[13566], 90.00th=[14615], 95.00th=[16057], 00:42:01.179 | 99.00th=[19530], 99.50th=[24249], 99.90th=[31065], 99.95th=[31065], 00:42:01.179 | 99.99th=[34866] 00:42:01.179 bw ( KiB/s): min=20480, max=20480, per=44.27%, avg=20480.00, stdev= 0.00, samples=2 00:42:01.179 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:42:01.179 lat (msec) : 10=7.40%, 20=87.72%, 50=4.70%, 100=0.18% 00:42:01.179 cpu : usr=4.92%, sys=7.77%, ctx=434, majf=0, minf=2 00:42:01.179 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:42:01.179 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.179 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:42:01.179 issued rwts: total=4554,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:01.179 latency : target=0, window=0, percentile=100.00%, depth=128 00:42:01.179 job1: (groupid=0, jobs=1): err= 0: pid=3191238: Sun Nov 17 03:02:09 2024 00:42:01.179 read: IOPS=1941, BW=7766KiB/s (7953kB/s)(7844KiB/1010msec) 00:42:01.179 slat (usec): min=3, max=18168, avg=173.28, stdev=1248.85 00:42:01.179 clat (usec): min=4793, max=49667, avg=21741.64, stdev=8147.68 00:42:01.179 lat (usec): min=4808, max=49673, avg=21914.92, stdev=8210.18 00:42:01.179 clat percentiles (usec): 00:42:01.179 | 1.00th=[ 4817], 5.00th=[13435], 10.00th=[14222], 20.00th=[15139], 00:42:01.179 | 30.00th=[16909], 
40.00th=[17695], 50.00th=[19530], 60.00th=[21627], 00:42:01.179 | 70.00th=[24249], 80.00th=[27919], 90.00th=[35914], 95.00th=[39584], 00:42:01.179 | 99.00th=[44827], 99.50th=[46924], 99.90th=[49546], 99.95th=[49546], 00:42:01.179 | 99.99th=[49546] 00:42:01.179 write: IOPS=2027, BW=8111KiB/s (8306kB/s)(8192KiB/1010msec); 0 zone resets 00:42:01.179 slat (usec): min=4, max=22876, avg=308.52, stdev=1557.83 00:42:01.179 clat (msec): min=5, max=110, avg=41.68, stdev=25.87 00:42:01.179 lat (msec): min=5, max=110, avg=41.99, stdev=26.04 00:42:01.179 clat percentiles (msec): 00:42:01.179 | 1.00th=[ 14], 5.00th=[ 17], 10.00th=[ 18], 20.00th=[ 19], 00:42:01.179 | 30.00th=[ 22], 40.00th=[ 25], 50.00th=[ 32], 60.00th=[ 40], 00:42:01.179 | 70.00th=[ 57], 80.00th=[ 65], 90.00th=[ 83], 95.00th=[ 94], 00:42:01.179 | 99.00th=[ 107], 99.50th=[ 108], 99.90th=[ 111], 99.95th=[ 111], 00:42:01.179 | 99.99th=[ 111] 00:42:01.179 bw ( KiB/s): min= 8175, max= 8192, per=17.69%, avg=8183.50, stdev=12.02, samples=2 00:42:01.179 iops : min= 2043, max= 2048, avg=2045.50, stdev= 3.54, samples=2 00:42:01.179 lat (msec) : 10=0.90%, 20=36.32%, 50=44.95%, 100=16.39%, 250=1.45% 00:42:01.179 cpu : usr=2.68%, sys=4.26%, ctx=187, majf=0, minf=1 00:42:01.179 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:42:01.179 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.179 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:42:01.179 issued rwts: total=1961,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:01.179 latency : target=0, window=0, percentile=100.00%, depth=128 00:42:01.179 job2: (groupid=0, jobs=1): err= 0: pid=3191239: Sun Nov 17 03:02:09 2024 00:42:01.179 read: IOPS=2539, BW=9.92MiB/s (10.4MB/s)(10.0MiB/1008msec) 00:42:01.179 slat (usec): min=2, max=25983, avg=152.71, stdev=1052.70 00:42:01.179 clat (usec): min=5190, max=79042, avg=21316.71, stdev=10318.76 00:42:01.179 lat (usec): min=5194, max=79051, avg=21469.43, 
stdev=10391.70 00:42:01.179 clat percentiles (usec): 00:42:01.179 | 1.00th=[ 7767], 5.00th=[10159], 10.00th=[11863], 20.00th=[14222], 00:42:01.179 | 30.00th=[15533], 40.00th=[17433], 50.00th=[19268], 60.00th=[19530], 00:42:01.179 | 70.00th=[23200], 80.00th=[28443], 90.00th=[29230], 95.00th=[46400], 00:42:01.179 | 99.00th=[64750], 99.50th=[66847], 99.90th=[66847], 99.95th=[66847], 00:42:01.179 | 99.99th=[79168] 00:42:01.179 write: IOPS=3044, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1008msec); 0 zone resets 00:42:01.179 slat (usec): min=3, max=25035, avg=172.99, stdev=1506.17 00:42:01.179 clat (usec): min=774, max=69285, avg=23398.59, stdev=14272.97 00:42:01.179 lat (usec): min=1246, max=69303, avg=23571.57, stdev=14407.35 00:42:01.179 clat percentiles (usec): 00:42:01.179 | 1.00th=[ 6259], 5.00th=[ 8029], 10.00th=[ 8356], 20.00th=[12649], 00:42:01.179 | 30.00th=[13829], 40.00th=[15401], 50.00th=[17695], 60.00th=[21890], 00:42:01.179 | 70.00th=[29492], 80.00th=[37487], 90.00th=[44303], 95.00th=[50594], 00:42:01.179 | 99.00th=[62653], 99.50th=[65274], 99.90th=[66323], 99.95th=[68682], 00:42:01.179 | 99.99th=[69731] 00:42:01.179 bw ( KiB/s): min=10576, max=12952, per=25.43%, avg=11764.00, stdev=1680.09, samples=2 00:42:01.179 iops : min= 2644, max= 3238, avg=2941.00, stdev=420.02, samples=2 00:42:01.179 lat (usec) : 1000=0.02% 00:42:01.179 lat (msec) : 2=0.07%, 4=0.12%, 10=10.53%, 20=47.36%, 50=37.11% 00:42:01.179 lat (msec) : 100=4.78% 00:42:01.179 cpu : usr=2.18%, sys=3.28%, ctx=166, majf=0, minf=2 00:42:01.179 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:42:01.179 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.179 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:42:01.179 issued rwts: total=2560,3069,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:01.179 latency : target=0, window=0, percentile=100.00%, depth=128 00:42:01.179 job3: (groupid=0, jobs=1): err= 0: pid=3191240: Sun Nov 17 
03:02:09 2024 00:42:01.179 read: IOPS=1260, BW=5043KiB/s (5164kB/s)(5088KiB/1009msec) 00:42:01.179 slat (usec): min=2, max=27580, avg=314.63, stdev=1930.87 00:42:01.179 clat (usec): min=2081, max=72182, avg=35017.97, stdev=10742.48 00:42:01.179 lat (usec): min=17495, max=72187, avg=35332.60, stdev=10824.21 00:42:01.179 clat percentiles (usec): 00:42:01.179 | 1.00th=[18744], 5.00th=[19530], 10.00th=[24511], 20.00th=[28705], 00:42:01.180 | 30.00th=[28705], 40.00th=[28967], 50.00th=[32113], 60.00th=[35390], 00:42:01.180 | 70.00th=[36963], 80.00th=[44303], 90.00th=[49546], 95.00th=[62129], 00:42:01.180 | 99.00th=[64750], 99.50th=[64750], 99.90th=[71828], 99.95th=[71828], 00:42:01.180 | 99.99th=[71828] 00:42:01.180 write: IOPS=1522, BW=6089KiB/s (6235kB/s)(6144KiB/1009msec); 0 zone resets 00:42:01.180 slat (usec): min=3, max=31347, avg=386.90, stdev=2054.56 00:42:01.180 clat (msec): min=16, max=118, avg=53.21, stdev=19.63 00:42:01.180 lat (msec): min=16, max=118, avg=53.60, stdev=19.76 00:42:01.180 clat percentiles (msec): 00:42:01.180 | 1.00th=[ 17], 5.00th=[ 22], 10.00th=[ 26], 20.00th=[ 40], 00:42:01.180 | 30.00th=[ 45], 40.00th=[ 47], 50.00th=[ 52], 60.00th=[ 55], 00:42:01.180 | 70.00th=[ 61], 80.00th=[ 68], 90.00th=[ 79], 95.00th=[ 92], 00:42:01.180 | 99.00th=[ 107], 99.50th=[ 109], 99.90th=[ 120], 99.95th=[ 120], 00:42:01.180 | 99.99th=[ 120] 00:42:01.180 bw ( KiB/s): min= 5320, max= 6954, per=13.27%, avg=6137.00, stdev=1155.41, samples=2 00:42:01.180 iops : min= 1330, max= 1738, avg=1534.00, stdev=288.50, samples=2 00:42:01.180 lat (msec) : 4=0.04%, 20=4.88%, 50=60.11%, 100=33.33%, 250=1.64% 00:42:01.180 cpu : usr=0.89%, sys=2.58%, ctx=173, majf=0, minf=1 00:42:01.180 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8% 00:42:01.180 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.180 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:42:01.180 issued rwts: total=1272,1536,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:42:01.180 latency : target=0, window=0, percentile=100.00%, depth=128 00:42:01.180 00:42:01.180 Run status group 0 (all jobs): 00:42:01.180 READ: bw=39.7MiB/s (41.6MB/s), 5043KiB/s-17.5MiB/s (5164kB/s-18.3MB/s), io=40.4MiB (42.4MB), run=1008-1018msec 00:42:01.180 WRITE: bw=45.2MiB/s (47.4MB/s), 6089KiB/s-19.6MiB/s (6235kB/s-20.6MB/s), io=46.0MiB (48.2MB), run=1008-1018msec 00:42:01.180 00:42:01.180 Disk stats (read/write): 00:42:01.180 nvme0n1: ios=4095/4096, merge=0/0, ticks=22810/19399, in_queue=42209, util=96.79% 00:42:01.180 nvme0n2: ios=1564/1650, merge=0/0, ticks=30705/75566, in_queue=106271, util=96.75% 00:42:01.180 nvme0n3: ios=2462/2560, merge=0/0, ticks=29590/34840, in_queue=64430, util=91.04% 00:42:01.180 nvme0n4: ios=1048/1327, merge=0/0, ticks=17807/32441, in_queue=50248, util=96.54% 00:42:01.180 03:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:42:01.180 03:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3191372 00:42:01.180 03:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:42:01.180 03:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:42:01.180 [global] 00:42:01.180 thread=1 00:42:01.180 invalidate=1 00:42:01.180 rw=read 00:42:01.180 time_based=1 00:42:01.180 runtime=10 00:42:01.180 ioengine=libaio 00:42:01.180 direct=1 00:42:01.180 bs=4096 00:42:01.180 iodepth=1 00:42:01.180 norandommap=1 00:42:01.180 numjobs=1 00:42:01.180 00:42:01.180 [job0] 00:42:01.180 filename=/dev/nvme0n1 00:42:01.180 [job1] 00:42:01.180 filename=/dev/nvme0n2 00:42:01.180 [job2] 00:42:01.180 filename=/dev/nvme0n3 00:42:01.180 [job3] 00:42:01.180 filename=/dev/nvme0n4 00:42:01.180 Could not set queue depth (nvme0n1) 00:42:01.180 Could not set queue depth 
(nvme0n2) 00:42:01.180 Could not set queue depth (nvme0n3) 00:42:01.180 Could not set queue depth (nvme0n4) 00:42:01.180 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:01.180 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:01.180 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:01.180 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:01.180 fio-3.35 00:42:01.180 Starting 4 threads 00:42:04.463 03:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:42:04.463 03:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:42:04.463 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=5292032, buflen=4096 00:42:04.463 fio: pid=3191587, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:42:04.721 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=7507968, buflen=4096 00:42:04.721 fio: pid=3191586, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:42:04.721 03:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:42:04.721 03:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:42:04.979 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=11014144, buflen=4096 00:42:04.979 fio: pid=3191584, err=95/file:io_u.c:1889, func=io_u error, 
error=Operation not supported 00:42:04.979 03:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:42:04.979 03:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:42:05.244 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=48472064, buflen=4096 00:42:05.244 fio: pid=3191585, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:42:05.244 00:42:05.244 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3191584: Sun Nov 17 03:02:13 2024 00:42:05.244 read: IOPS=759, BW=3035KiB/s (3108kB/s)(10.5MiB/3544msec) 00:42:05.244 slat (usec): min=4, max=29721, avg=24.98, stdev=624.50 00:42:05.244 clat (usec): min=249, max=48529, avg=1281.85, stdev=6175.58 00:42:05.244 lat (usec): min=255, max=48542, avg=1306.84, stdev=6206.54 00:42:05.244 clat percentiles (usec): 00:42:05.244 | 1.00th=[ 255], 5.00th=[ 262], 10.00th=[ 265], 20.00th=[ 273], 00:42:05.244 | 30.00th=[ 281], 40.00th=[ 285], 50.00th=[ 297], 60.00th=[ 310], 00:42:05.244 | 70.00th=[ 322], 80.00th=[ 338], 90.00th=[ 396], 95.00th=[ 478], 00:42:05.244 | 99.00th=[40633], 99.50th=[40633], 99.90th=[41681], 99.95th=[41681], 00:42:05.244 | 99.99th=[48497] 00:42:05.244 bw ( KiB/s): min= 192, max=11784, per=19.34%, avg=3549.33, stdev=5268.63, samples=6 00:42:05.244 iops : min= 48, max= 2946, avg=887.33, stdev=1317.16, samples=6 00:42:05.244 lat (usec) : 250=0.04%, 500=96.10%, 750=1.26% 00:42:05.244 lat (msec) : 2=0.11%, 10=0.04%, 20=0.04%, 50=2.38% 00:42:05.244 cpu : usr=0.34%, sys=1.02%, ctx=2694, majf=0, minf=1 00:42:05.244 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:05.244 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:42:05.244 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:05.244 issued rwts: total=2690,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:05.244 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:05.244 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3191585: Sun Nov 17 03:02:13 2024 00:42:05.244 read: IOPS=3077, BW=12.0MiB/s (12.6MB/s)(46.2MiB/3846msec) 00:42:05.244 slat (usec): min=4, max=14708, avg=13.27, stdev=233.42 00:42:05.244 clat (usec): min=202, max=56711, avg=307.31, stdev=523.67 00:42:05.244 lat (usec): min=207, max=56723, avg=320.58, stdev=574.09 00:42:05.244 clat percentiles (usec): 00:42:05.244 | 1.00th=[ 241], 5.00th=[ 249], 10.00th=[ 258], 20.00th=[ 269], 00:42:05.244 | 30.00th=[ 273], 40.00th=[ 281], 50.00th=[ 285], 60.00th=[ 293], 00:42:05.244 | 70.00th=[ 306], 80.00th=[ 318], 90.00th=[ 334], 95.00th=[ 424], 00:42:05.244 | 99.00th=[ 594], 99.50th=[ 644], 99.90th=[ 1020], 99.95th=[ 1172], 00:42:05.244 | 99.99th=[ 3425] 00:42:05.244 bw ( KiB/s): min= 9584, max=14056, per=66.64%, avg=12232.43, stdev=1429.04, samples=7 00:42:05.244 iops : min= 2396, max= 3514, avg=3058.00, stdev=357.33, samples=7 00:42:05.244 lat (usec) : 250=5.62%, 500=91.47%, 750=2.72%, 1000=0.08% 00:42:05.244 lat (msec) : 2=0.08%, 4=0.01%, 100=0.01% 00:42:05.244 cpu : usr=2.34%, sys=3.85%, ctx=11841, majf=0, minf=2 00:42:05.244 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:05.244 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:05.244 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:05.244 issued rwts: total=11835,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:05.244 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:05.244 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3191586: Sun Nov 17 03:02:13 2024 00:42:05.244 read: 
IOPS=572, BW=2291KiB/s (2346kB/s)(7332KiB/3201msec) 00:42:05.244 slat (usec): min=4, max=14630, avg=30.90, stdev=435.41 00:42:05.244 clat (usec): min=233, max=42218, avg=1705.99, stdev=7212.45 00:42:05.244 lat (usec): min=253, max=42248, avg=1736.90, stdev=7223.22 00:42:05.244 clat percentiles (usec): 00:42:05.244 | 1.00th=[ 273], 5.00th=[ 293], 10.00th=[ 302], 20.00th=[ 322], 00:42:05.244 | 30.00th=[ 343], 40.00th=[ 379], 50.00th=[ 392], 60.00th=[ 429], 00:42:05.244 | 70.00th=[ 461], 80.00th=[ 490], 90.00th=[ 523], 95.00th=[ 578], 00:42:05.244 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:42:05.244 | 99.99th=[42206] 00:42:05.244 bw ( KiB/s): min= 96, max= 4912, per=9.83%, avg=1804.00, stdev=1887.03, samples=6 00:42:05.244 iops : min= 24, max= 1228, avg=451.00, stdev=471.76, samples=6 00:42:05.244 lat (usec) : 250=0.11%, 500=84.19%, 750=12.38%, 1000=0.11% 00:42:05.244 lat (msec) : 50=3.16% 00:42:05.244 cpu : usr=0.31%, sys=1.16%, ctx=1837, majf=0, minf=2 00:42:05.244 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:05.244 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:05.244 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:05.244 issued rwts: total=1834,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:05.244 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:05.244 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3191587: Sun Nov 17 03:02:13 2024 00:42:05.244 read: IOPS=440, BW=1761KiB/s (1803kB/s)(5168KiB/2935msec) 00:42:05.244 slat (nsec): min=4764, max=56799, avg=18540.07, stdev=10393.45 00:42:05.244 clat (usec): min=269, max=42530, avg=2230.53, stdev=8343.20 00:42:05.244 lat (usec): min=283, max=42547, avg=2249.08, stdev=8342.55 00:42:05.244 clat percentiles (usec): 00:42:05.244 | 1.00th=[ 285], 5.00th=[ 322], 10.00th=[ 379], 20.00th=[ 416], 00:42:05.244 | 30.00th=[ 441], 40.00th=[ 
457], 50.00th=[ 474], 60.00th=[ 494], 00:42:05.244 | 70.00th=[ 510], 80.00th=[ 545], 90.00th=[ 603], 95.00th=[ 709], 00:42:05.244 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:42:05.244 | 99.99th=[42730] 00:42:05.244 bw ( KiB/s): min= 112, max= 5104, per=9.87%, avg=1812.80, stdev=2047.35, samples=5 00:42:05.244 iops : min= 28, max= 1276, avg=453.20, stdev=511.84, samples=5 00:42:05.244 lat (usec) : 500=64.66%, 750=30.55%, 1000=0.39% 00:42:05.244 lat (msec) : 2=0.08%, 50=4.25% 00:42:05.244 cpu : usr=0.27%, sys=1.02%, ctx=1297, majf=0, minf=1 00:42:05.244 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:05.244 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:05.244 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:05.244 issued rwts: total=1293,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:05.244 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:05.244 00:42:05.244 Run status group 0 (all jobs): 00:42:05.244 READ: bw=17.9MiB/s (18.8MB/s), 1761KiB/s-12.0MiB/s (1803kB/s-12.6MB/s), io=68.9MiB (72.3MB), run=2935-3846msec 00:42:05.244 00:42:05.244 Disk stats (read/write): 00:42:05.244 nvme0n1: ios=2720/0, merge=0/0, ticks=4432/0, in_queue=4432, util=98.60% 00:42:05.244 nvme0n2: ios=10992/0, merge=0/0, ticks=3328/0, in_queue=3328, util=95.23% 00:42:05.244 nvme0n3: ios=1598/0, merge=0/0, ticks=3018/0, in_queue=3018, util=96.01% 00:42:05.244 nvme0n4: ios=1329/0, merge=0/0, ticks=3225/0, in_queue=3225, util=99.73% 00:42:05.244 03:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:42:05.244 03:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:42:05.502 03:02:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:42:05.502 03:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:42:06.068 03:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:42:06.068 03:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:42:06.326 03:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:42:06.326 03:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:42:06.584 03:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:42:06.584 03:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:42:06.842 03:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:42:06.842 03:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 3191372 00:42:06.842 03:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:42:06.842 03:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:42:07.777 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:42:07.777 03:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:42:07.777 03:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:42:07.777 03:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:42:07.777 03:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:42:07.777 03:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:42:07.777 03:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:42:07.777 03:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:42:07.777 03:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:42:07.777 03:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:42:07.777 nvmf hotplug test: fio failed as expected 00:42:07.777 03:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:08.034 03:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:42:08.034 03:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:42:08.034 03:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f 
./local-job2-2-verify.state 00:42:08.034 03:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:42:08.035 03:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:42:08.035 03:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:08.035 03:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:42:08.035 03:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:08.035 03:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:42:08.035 03:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:08.035 03:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:08.035 rmmod nvme_tcp 00:42:08.035 rmmod nvme_fabrics 00:42:08.035 rmmod nvme_keyring 00:42:08.035 03:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:08.035 03:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:42:08.035 03:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:42:08.035 03:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 3189350 ']' 00:42:08.035 03:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 3189350 00:42:08.035 03:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 3189350 ']' 00:42:08.035 03:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 3189350 00:42:08.035 03:02:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:42:08.035 03:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:08.035 03:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3189350 00:42:08.293 03:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:08.293 03:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:08.293 03:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3189350' 00:42:08.293 killing process with pid 3189350 00:42:08.293 03:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 3189350 00:42:08.293 03:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 3189350 00:42:09.669 03:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:42:09.669 03:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:09.669 03:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:09.669 03:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:42:09.669 03:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:42:09.669 03:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:09.669 03:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:42:09.669 
03:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:09.669 03:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:09.669 03:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:09.669 03:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:09.669 03:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:11.575 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:11.576 00:42:11.576 real 0m27.095s 00:42:11.576 user 1m13.429s 00:42:11.576 sys 0m10.606s 00:42:11.576 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:11.576 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:42:11.576 ************************************ 00:42:11.576 END TEST nvmf_fio_target 00:42:11.576 ************************************ 00:42:11.576 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:42:11.576 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:42:11.576 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:11.576 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:42:11.576 ************************************ 00:42:11.576 START TEST nvmf_bdevio 00:42:11.576 
************************************ 00:42:11.576 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:42:11.576 * Looking for test storage... 00:42:11.576 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:11.576 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:42:11.576 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:42:11.576 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:42:11.576 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:42:11.576 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:11.576 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:11.576 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:11.576 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:42:11.576 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:42:11.576 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:42:11.576 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:42:11.576 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:42:11.576 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 
00:42:11.576 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:42:11.576 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:11.576 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:42:11.576 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:42:11.576 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:11.576 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:42:11.576 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:42:11.576 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:42:11.576 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:11.576 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:42:11.576 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:42:11.576 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:42:11.576 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:42:11.576 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:11.576 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:42:11.576 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:42:11.576 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:11.576 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:11.576 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:42:11.576 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:11.576 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:42:11.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:11.576 --rc genhtml_branch_coverage=1 00:42:11.576 --rc genhtml_function_coverage=1 00:42:11.576 --rc genhtml_legend=1 00:42:11.576 --rc geninfo_all_blocks=1 00:42:11.576 --rc geninfo_unexecuted_blocks=1 00:42:11.576 00:42:11.576 ' 00:42:11.576 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:42:11.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:11.576 --rc genhtml_branch_coverage=1 00:42:11.576 --rc genhtml_function_coverage=1 00:42:11.576 --rc genhtml_legend=1 00:42:11.576 --rc geninfo_all_blocks=1 00:42:11.576 --rc geninfo_unexecuted_blocks=1 00:42:11.576 00:42:11.576 ' 00:42:11.576 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:42:11.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:11.576 --rc genhtml_branch_coverage=1 00:42:11.576 --rc genhtml_function_coverage=1 00:42:11.576 --rc genhtml_legend=1 00:42:11.576 --rc geninfo_all_blocks=1 00:42:11.576 --rc geninfo_unexecuted_blocks=1 00:42:11.576 00:42:11.576 ' 00:42:11.576 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:42:11.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:42:11.576 --rc genhtml_branch_coverage=1 00:42:11.576 --rc genhtml_function_coverage=1 00:42:11.576 --rc genhtml_legend=1 00:42:11.576 --rc geninfo_all_blocks=1 00:42:11.576 --rc geninfo_unexecuted_blocks=1 00:42:11.576 00:42:11.576 ' 00:42:11.576 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:11.576 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:42:11.576 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:11.576 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:11.576 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:11.576 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:11.576 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:11.576 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:11.576 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:11.576 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:11.576 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:11.576 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:11.576 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:42:11.576 03:02:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:42:11.576 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:11.577 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:11.577 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:11.577 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:11.577 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:11.577 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:42:11.577 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:11.577 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:11.577 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:11.577 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:11.577 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:11.577 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:11.577 03:02:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:42:11.577 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:11.577 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:42:11.577 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:11.577 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:11.577 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:11.577 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:11.577 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:11.577 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:42:11.577 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:42:11.577 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:11.577 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 
-- # '[' 0 -eq 1 ']' 00:42:11.577 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:11.577 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:42:11.577 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:42:11.577 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:42:11.577 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:42:11.577 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:11.577 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:42:11.577 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:11.577 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:11.577 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:11.577 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:11.577 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:11.577 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:42:11.577 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:42:11.577 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:42:11.577 03:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@10 -- # set +x 00:42:14.109 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:14.109 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:42:14.109 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:14.109 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:14.109 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:14.109 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:14.109 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:14.109 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:42:14.109 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:14.109 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:42:14.109 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:42:14.109 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:42:14.109 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:42:14.109 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:42:14.109 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:42:14.109 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:14.109 03:02:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:14.110 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:14.110 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:14.110 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:14.110 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:14.110 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:14.110 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:14.110 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:14.110 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:14.110 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:14.110 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:14.110 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:14.110 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:14.110 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:14.110 03:02:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:14.110 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:14.110 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:42:14.110 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:14.110 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:42:14.110 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:42:14.110 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:14.110 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:14.110 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:14.110 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:14.110 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:14.110 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:14.110 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:42:14.110 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:42:14.110 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:14.110 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:14.110 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:42:14.110 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:14.110 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:14.110 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:14.110 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:14.110 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:14.110 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:14.110 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:14.110 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:14.110 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:14.110 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:14.110 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:14.110 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:14.110 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:42:14.110 Found net devices under 0000:0a:00.0: cvl_0_0 00:42:14.110 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:14.110 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:42:14.110 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:14.110 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:14.110 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:14.110 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:14.110 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:14.110 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:14.110 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:42:14.110 Found net devices under 0000:0a:00.1: cvl_0_1 00:42:14.110 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:14.110 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:42:14.110 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:42:14.110 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:42:14.110 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:42:14.110 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:42:14.110 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:14.110 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:14.110 
03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:14.110 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:14.110 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:14.110 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:14.110 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:14.110 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:14.110 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:14.110 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:14.110 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:14.110 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:14.110 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:14.110 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:14.110 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:14.110 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:14.110 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:42:14.110 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:14.110 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:14.110 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:14.110 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:14.110 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:14.110 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:14.110 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:14.110 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.314 ms 00:42:14.110 00:42:14.110 --- 10.0.0.2 ping statistics --- 00:42:14.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:14.110 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:42:14.110 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:14.110 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:42:14.110 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms 00:42:14.110 00:42:14.110 --- 10.0.0.1 ping statistics --- 00:42:14.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:14.110 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:42:14.110 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:14.110 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:42:14.110 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:42:14.110 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:14.110 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:14.110 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:14.110 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:14.110 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:14.110 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:14.110 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:42:14.111 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:42:14.111 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:14.111 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:14.111 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@509 -- # nvmfpid=3194472 00:42:14.111 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:42:14.111 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 3194472 00:42:14.111 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 3194472 ']' 00:42:14.111 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:14.111 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:14.111 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:14.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:14.111 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:14.111 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:14.111 [2024-11-17 03:02:22.332584] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:42:14.111 [2024-11-17 03:02:22.335167] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:42:14.111 [2024-11-17 03:02:22.335281] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:14.111 [2024-11-17 03:02:22.487339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:42:14.370 [2024-11-17 03:02:22.633267] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:14.370 [2024-11-17 03:02:22.633344] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:14.370 [2024-11-17 03:02:22.633374] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:14.370 [2024-11-17 03:02:22.633397] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:14.370 [2024-11-17 03:02:22.633419] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:14.370 [2024-11-17 03:02:22.636404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:42:14.370 [2024-11-17 03:02:22.636466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:42:14.370 [2024-11-17 03:02:22.636566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:42:14.370 [2024-11-17 03:02:22.636594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:42:14.628 [2024-11-17 03:02:23.004261] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:42:14.628 [2024-11-17 03:02:23.015457] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:42:14.628 [2024-11-17 03:02:23.015701] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:42:14.628 [2024-11-17 03:02:23.016530] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:42:14.628 [2024-11-17 03:02:23.016890] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:42:14.887 03:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:14.887 03:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:42:14.887 03:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:42:14.887 03:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:14.888 03:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:14.888 03:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:14.888 03:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:42:14.888 03:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:14.888 03:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:14.888 [2024-11-17 03:02:23.313680] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:14.888 03:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:14.888 03:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:42:14.888 03:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:42:14.888 03:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:15.146 Malloc0 00:42:15.146 03:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:15.146 03:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:42:15.146 03:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:15.146 03:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:15.146 03:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:15.146 03:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:42:15.146 03:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:15.146 03:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:15.146 03:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:15.146 03:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:15.146 03:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:15.146 03:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:15.146 [2024-11-17 03:02:23.437933] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:42:15.146 03:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:15.146 03:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:42:15.146 03:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:42:15.146 03:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:42:15.146 03:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:42:15.146 03:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:15.146 03:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:15.146 { 00:42:15.146 "params": { 00:42:15.146 "name": "Nvme$subsystem", 00:42:15.146 "trtype": "$TEST_TRANSPORT", 00:42:15.146 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:15.146 "adrfam": "ipv4", 00:42:15.146 "trsvcid": "$NVMF_PORT", 00:42:15.146 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:15.146 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:15.146 "hdgst": ${hdgst:-false}, 00:42:15.146 "ddgst": ${ddgst:-false} 00:42:15.146 }, 00:42:15.146 "method": "bdev_nvme_attach_controller" 00:42:15.146 } 00:42:15.146 EOF 00:42:15.146 )") 00:42:15.146 03:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:42:15.146 03:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:42:15.146 03:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:42:15.146 03:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:15.146 "params": { 00:42:15.146 "name": "Nvme1", 00:42:15.146 "trtype": "tcp", 00:42:15.146 "traddr": "10.0.0.2", 00:42:15.146 "adrfam": "ipv4", 00:42:15.146 "trsvcid": "4420", 00:42:15.146 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:15.146 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:15.146 "hdgst": false, 00:42:15.146 "ddgst": false 00:42:15.146 }, 00:42:15.146 "method": "bdev_nvme_attach_controller" 00:42:15.146 }' 00:42:15.146 [2024-11-17 03:02:23.520668] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:42:15.146 [2024-11-17 03:02:23.520799] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3194624 ] 00:42:15.405 [2024-11-17 03:02:23.658488] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:42:15.405 [2024-11-17 03:02:23.793954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:15.405 [2024-11-17 03:02:23.794007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:42:15.405 [2024-11-17 03:02:23.794002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:15.971 I/O targets: 00:42:15.971 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:42:15.971 00:42:15.971 00:42:15.971 CUnit - A unit testing framework for C - Version 2.1-3 00:42:15.971 http://cunit.sourceforge.net/ 00:42:15.971 00:42:15.971 00:42:15.971 Suite: bdevio tests on: Nvme1n1 00:42:15.971 Test: blockdev write read block ...passed 00:42:15.971 Test: blockdev write zeroes read block ...passed 00:42:16.229 Test: blockdev write zeroes read no split ...passed 00:42:16.229 Test: blockdev 
write zeroes read split ...passed 00:42:16.229 Test: blockdev write zeroes read split partial ...passed 00:42:16.229 Test: blockdev reset ...[2024-11-17 03:02:24.535586] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:42:16.229 [2024-11-17 03:02:24.535757] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2f00 (9): Bad file descriptor 00:42:16.229 [2024-11-17 03:02:24.543831] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:42:16.229 passed 00:42:16.229 Test: blockdev write read 8 blocks ...passed 00:42:16.229 Test: blockdev write read size > 128k ...passed 00:42:16.229 Test: blockdev write read invalid size ...passed 00:42:16.229 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:42:16.229 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:42:16.229 Test: blockdev write read max offset ...passed 00:42:16.229 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:42:16.229 Test: blockdev writev readv 8 blocks ...passed 00:42:16.229 Test: blockdev writev readv 30 x 1block ...passed 00:42:16.488 Test: blockdev writev readv block ...passed 00:42:16.488 Test: blockdev writev readv size > 128k ...passed 00:42:16.488 Test: blockdev writev readv size > 128k in two iovs ...passed 00:42:16.488 Test: blockdev comparev and writev ...[2024-11-17 03:02:24.758862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:16.488 [2024-11-17 03:02:24.758915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:16.488 [2024-11-17 03:02:24.758960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 
00:42:16.488 [2024-11-17 03:02:24.758996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:16.488 [2024-11-17 03:02:24.759545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:16.488 [2024-11-17 03:02:24.759579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:42:16.488 [2024-11-17 03:02:24.759614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:16.488 [2024-11-17 03:02:24.759644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:42:16.488 [2024-11-17 03:02:24.760177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:16.488 [2024-11-17 03:02:24.760211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:42:16.488 [2024-11-17 03:02:24.760245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:16.488 [2024-11-17 03:02:24.760270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:42:16.488 [2024-11-17 03:02:24.760822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:16.488 [2024-11-17 03:02:24.760855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:42:16.488 [2024-11-17 03:02:24.760888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:16.488 [2024-11-17 03:02:24.760913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:42:16.488 passed 00:42:16.488 Test: blockdev nvme passthru rw ...passed 00:42:16.488 Test: blockdev nvme passthru vendor specific ...[2024-11-17 03:02:24.843476] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:42:16.488 [2024-11-17 03:02:24.843516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:42:16.488 [2024-11-17 03:02:24.843780] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:42:16.488 [2024-11-17 03:02:24.843814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:42:16.488 [2024-11-17 03:02:24.844028] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:42:16.488 [2024-11-17 03:02:24.844060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:42:16.488 [2024-11-17 03:02:24.844282] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:42:16.488 [2024-11-17 03:02:24.844315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:42:16.488 passed 00:42:16.488 Test: blockdev nvme admin passthru ...passed 00:42:16.488 Test: blockdev copy ...passed 00:42:16.488 00:42:16.488 Run Summary: Type Total Ran Passed Failed Inactive 00:42:16.488 suites 1 1 n/a 0 0 00:42:16.488 tests 23 23 23 0 0 00:42:16.488 asserts 152 152 152 0 n/a 00:42:16.488 00:42:16.488 Elapsed time = 
1.162 seconds 00:42:17.422 03:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:17.422 03:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:17.422 03:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:17.422 03:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:17.422 03:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:42:17.422 03:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:42:17.422 03:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:17.422 03:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:42:17.422 03:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:17.422 03:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:42:17.422 03:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:17.422 03:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:17.422 rmmod nvme_tcp 00:42:17.422 rmmod nvme_fabrics 00:42:17.422 rmmod nvme_keyring 00:42:17.422 03:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:17.422 03:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:42:17.422 03:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:42:17.422 03:02:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 3194472 ']' 00:42:17.422 03:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 3194472 00:42:17.422 03:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 3194472 ']' 00:42:17.422 03:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 3194472 00:42:17.422 03:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:42:17.422 03:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:17.422 03:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3194472 00:42:17.422 03:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:42:17.422 03:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:42:17.422 03:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3194472' 00:42:17.422 killing process with pid 3194472 00:42:17.422 03:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 3194472 00:42:17.422 03:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 3194472 00:42:18.797 03:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:42:18.798 03:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:18.798 03:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:18.798 03:02:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:42:18.798 03:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:42:18.798 03:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:18.798 03:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:42:18.798 03:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:18.798 03:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:18.798 03:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:18.798 03:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:18.798 03:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:21.333 03:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:21.333 00:42:21.333 real 0m9.437s 00:42:21.333 user 0m17.224s 00:42:21.333 sys 0m3.099s 00:42:21.333 03:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:21.333 03:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:21.333 ************************************ 00:42:21.333 END TEST nvmf_bdevio 00:42:21.333 ************************************ 00:42:21.333 03:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:42:21.333 00:42:21.333 real 4m29.958s 00:42:21.333 user 9m54.033s 00:42:21.333 sys 1m28.200s 00:42:21.333 03:02:29 
nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:21.333 03:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:42:21.333 ************************************ 00:42:21.333 END TEST nvmf_target_core_interrupt_mode 00:42:21.333 ************************************ 00:42:21.333 03:02:29 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:42:21.333 03:02:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:42:21.333 03:02:29 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:21.333 03:02:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:21.333 ************************************ 00:42:21.333 START TEST nvmf_interrupt 00:42:21.333 ************************************ 00:42:21.333 03:02:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:42:21.333 * Looking for test storage... 
00:42:21.333 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:21.333 03:02:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:42:21.333 03:02:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:42:21.333 03:02:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:42:21.333 03:02:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:42:21.333 03:02:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:21.333 03:02:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:21.333 03:02:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:21.333 03:02:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:42:21.333 03:02:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:42:21.333 03:02:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:42:21.333 03:02:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:42:21.333 03:02:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:42:21.333 03:02:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:42:21.333 03:02:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:42:21.333 03:02:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:21.333 03:02:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:42:21.333 03:02:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:42:21.333 03:02:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:21.333 03:02:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:21.333 03:02:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:42:21.333 03:02:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:42:21.333 03:02:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:21.333 03:02:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:42:21.333 03:02:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:42:21.333 03:02:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:42:21.333 03:02:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:42:21.333 03:02:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:21.333 03:02:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:42:21.333 03:02:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:42:21.333 03:02:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:21.333 03:02:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:21.333 03:02:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:42:21.333 03:02:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:21.333 03:02:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:42:21.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:21.333 --rc genhtml_branch_coverage=1 00:42:21.333 --rc genhtml_function_coverage=1 00:42:21.333 --rc genhtml_legend=1 00:42:21.333 --rc geninfo_all_blocks=1 00:42:21.333 --rc geninfo_unexecuted_blocks=1 00:42:21.333 00:42:21.333 ' 00:42:21.333 03:02:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:42:21.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:21.333 --rc genhtml_branch_coverage=1 00:42:21.333 --rc 
genhtml_function_coverage=1 00:42:21.333 --rc genhtml_legend=1 00:42:21.333 --rc geninfo_all_blocks=1 00:42:21.333 --rc geninfo_unexecuted_blocks=1 00:42:21.333 00:42:21.333 ' 00:42:21.333 03:02:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:42:21.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:21.333 --rc genhtml_branch_coverage=1 00:42:21.333 --rc genhtml_function_coverage=1 00:42:21.333 --rc genhtml_legend=1 00:42:21.333 --rc geninfo_all_blocks=1 00:42:21.333 --rc geninfo_unexecuted_blocks=1 00:42:21.333 00:42:21.333 ' 00:42:21.333 03:02:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:42:21.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:21.333 --rc genhtml_branch_coverage=1 00:42:21.333 --rc genhtml_function_coverage=1 00:42:21.333 --rc genhtml_legend=1 00:42:21.333 --rc geninfo_all_blocks=1 00:42:21.333 --rc geninfo_unexecuted_blocks=1 00:42:21.333 00:42:21.333 ' 00:42:21.333 03:02:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:21.333 03:02:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:42:21.333 03:02:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:21.333 03:02:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:21.333 03:02:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:21.333 03:02:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:21.333 03:02:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:21.333 03:02:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:21.333 03:02:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:21.333 03:02:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:21.333 
03:02:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:21.333 03:02:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:21.333 03:02:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:42:21.333 03:02:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:42:21.333 03:02:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:21.333 03:02:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:21.333 03:02:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:21.333 03:02:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:21.333 03:02:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:21.333 03:02:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:42:21.333 03:02:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:21.333 03:02:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:21.333 03:02:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:21.334 03:02:29 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:21.334 
03:02:29 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:21.334 03:02:29 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:21.334 03:02:29 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:42:21.334 03:02:29 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:21.334 03:02:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:42:21.334 03:02:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:21.334 03:02:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:21.334 03:02:29 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:21.334 03:02:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:21.334 03:02:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:21.334 03:02:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:42:21.334 03:02:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:42:21.334 03:02:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:21.334 03:02:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:21.334 03:02:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:21.334 03:02:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:42:21.334 03:02:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:42:21.334 03:02:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:42:21.334 03:02:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:42:21.334 03:02:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:21.334 03:02:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:42:21.334 03:02:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:21.334 03:02:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:21.334 03:02:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:21.334 03:02:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:42:21.334 03:02:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:21.334 03:02:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:42:21.334 
03:02:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:42:21.334 03:02:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:42:21.334 03:02:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:23.295 03:02:31 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:42:23.295 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:42:23.295 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:23.295 03:02:31 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:42:23.295 Found net devices under 0000:0a:00.0: cvl_0_0 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:42:23.295 Found net devices under 0000:0a:00.1: cvl_0_1 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:23.295 03:02:31 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:23.295 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:23.295 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:23.295 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.351 ms 00:42:23.296 00:42:23.296 --- 10.0.0.2 ping statistics --- 00:42:23.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:23.296 rtt min/avg/max/mdev = 0.351/0.351/0.351/0.000 ms 00:42:23.296 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:23.296 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:42:23.296 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:42:23.296 00:42:23.296 --- 10.0.0.1 ping statistics --- 00:42:23.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:23.296 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:42:23.296 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:23.296 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:42:23.296 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:42:23.296 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:23.296 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:23.296 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:23.296 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:23.296 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:23.296 03:02:31 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:23.296 03:02:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:42:23.296 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:42:23.296 03:02:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:23.296 03:02:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:23.296 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=3196974 00:42:23.296 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:42:23.296 03:02:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 3196974 00:42:23.296 03:02:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 3196974 ']' 00:42:23.296 03:02:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:23.296 03:02:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:23.296 03:02:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:23.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:23.296 03:02:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:23.296 03:02:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:23.296 [2024-11-17 03:02:31.744644] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:42:23.296 [2024-11-17 03:02:31.747395] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:42:23.296 [2024-11-17 03:02:31.747514] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:23.554 [2024-11-17 03:02:31.895618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:42:23.812 [2024-11-17 03:02:32.031112] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:23.812 [2024-11-17 03:02:32.031193] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:23.812 [2024-11-17 03:02:32.031221] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:23.812 [2024-11-17 03:02:32.031251] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:23.812 [2024-11-17 03:02:32.031282] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:23.812 [2024-11-17 03:02:32.033886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:23.812 [2024-11-17 03:02:32.033895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:24.070 [2024-11-17 03:02:32.373013] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:42:24.071 [2024-11-17 03:02:32.373622] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:42:24.071 [2024-11-17 03:02:32.373897] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
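The `waitforlisten 3196974` step above blocks until the freshly launched `nvmf_tgt` process is accepting RPCs on the UNIX domain socket `/var/tmp/spdk.sock`. As a rough illustration of what that wait amounts to (a hypothetical helper, not SPDK's actual `waitforlisten` implementation), one can poll until a connection to the socket path succeeds:

```python
import os
import socket
import time

def wait_for_rpc_socket(path, timeout=10.0, interval=0.1):
    """Poll until a UNIX-domain socket at `path` accepts connections.

    Sketch of the idea behind the waitforlisten step in the log: the
    test pauses until the app is listening on its RPC socket.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if os.path.exists(path):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            try:
                s.connect(path)
                return True  # something is listening; RPCs can proceed
            except OSError:
                pass  # socket file exists but nothing is accepting yet
            finally:
                s.close()
        time.sleep(interval)
    return False
```

The real helper additionally verifies the PID is still alive between polls, so a crashed target fails fast instead of burning the whole timeout.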
00:42:24.329 03:02:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:24.329 03:02:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:42:24.329 03:02:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:42:24.329 03:02:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:24.329 03:02:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:24.329 03:02:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:24.329 03:02:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:42:24.329 03:02:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:42:24.329 03:02:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:42:24.329 03:02:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:42:24.588 5000+0 records in 00:42:24.588 5000+0 records out 00:42:24.588 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0148733 s, 688 MB/s 00:42:24.588 03:02:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:42:24.588 03:02:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:24.588 03:02:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:24.588 AIO0 00:42:24.588 03:02:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:24.588 03:02:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:42:24.588 03:02:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:24.588 03:02:32 
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:24.588 [2024-11-17 03:02:32.843003] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:24.588 03:02:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:24.588 03:02:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:42:24.588 03:02:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:24.588 03:02:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:24.588 03:02:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:24.588 03:02:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:42:24.588 03:02:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:24.588 03:02:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:24.588 03:02:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:24.588 03:02:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:24.588 03:02:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:24.588 03:02:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:24.588 [2024-11-17 03:02:32.871280] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:24.588 03:02:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:24.588 03:02:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:42:24.588 03:02:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3196974 0 00:42:24.588 03:02:32 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3196974 0 idle 00:42:24.588 03:02:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3196974 00:42:24.588 03:02:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:42:24.588 03:02:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:42:24.588 03:02:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:42:24.588 03:02:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:24.588 03:02:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:42:24.588 03:02:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:42:24.588 03:02:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:24.588 03:02:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:24.588 03:02:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:24.588 03:02:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3196974 -w 256 00:42:24.588 03:02:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:42:24.588 03:02:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3196974 root 20 0 20.1t 195456 100224 S 0.0 0.3 0:00.72 reactor_0' 00:42:24.588 03:02:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3196974 root 20 0 20.1t 195456 100224 S 0.0 0.3 0:00.72 reactor_0 00:42:24.588 03:02:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:24.588 03:02:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:24.588 03:02:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:42:24.588 03:02:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:42:24.588 03:02:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:42:24.588 
03:02:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:42:24.588 03:02:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:42:24.588 03:02:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:24.588 03:02:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:42:24.588 03:02:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3196974 1 00:42:24.588 03:02:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3196974 1 idle 00:42:24.588 03:02:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3196974 00:42:24.588 03:02:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:42:24.588 03:02:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:42:24.588 03:02:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:42:24.588 03:02:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:24.588 03:02:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:42:24.588 03:02:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:42:24.588 03:02:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:24.588 03:02:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:24.588 03:02:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:24.588 03:02:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3196974 -w 256 00:42:24.849 03:02:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:42:24.849 03:02:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3196978 root 20 0 20.1t 195456 100224 S 0.0 0.3 0:00.00 reactor_1' 00:42:24.849 03:02:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3196978 root 20 0 20.1t 
195456 100224 S 0.0 0.3 0:00.00 reactor_1 00:42:24.849 03:02:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:24.849 03:02:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:24.849 03:02:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:42:24.849 03:02:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:42:24.849 03:02:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:42:24.849 03:02:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:42:24.849 03:02:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:42:24.849 03:02:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:24.849 03:02:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:42:24.849 03:02:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=3197149 00:42:24.849 03:02:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:42:24.849 03:02:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:42:24.849 03:02:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:42:24.849 03:02:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3196974 0 00:42:24.849 03:02:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3196974 0 busy 00:42:24.849 03:02:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3196974 00:42:24.849 03:02:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:42:24.849 03:02:33 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@12 -- # local state=busy 00:42:24.849 03:02:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:42:24.849 03:02:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:24.849 03:02:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:42:24.849 03:02:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:24.849 03:02:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:24.849 03:02:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:24.849 03:02:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3196974 -w 256 00:42:24.849 03:02:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:42:25.108 03:02:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3196974 root 20 0 20.1t 196608 100992 S 0.0 0.3 0:00.73 reactor_0' 00:42:25.108 03:02:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3196974 root 20 0 20.1t 196608 100992 S 0.0 0.3 0:00.73 reactor_0 00:42:25.108 03:02:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:25.108 03:02:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:25.108 03:02:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:42:25.108 03:02:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:42:25.108 03:02:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:42:25.108 03:02:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:42:25.108 03:02:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:42:26.043 03:02:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:42:26.043 03:02:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:26.043 03:02:34 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@26 -- # top -bHn 1 -p 3196974 -w 256 00:42:26.043 03:02:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:42:26.302 03:02:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3196974 root 20 0 20.1t 208896 100992 R 99.9 0.3 0:02.76 reactor_0' 00:42:26.302 03:02:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3196974 root 20 0 20.1t 208896 100992 R 99.9 0.3 0:02.76 reactor_0 00:42:26.302 03:02:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:26.302 03:02:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:26.302 03:02:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:42:26.302 03:02:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:42:26.302 03:02:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:42:26.302 03:02:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:42:26.302 03:02:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:42:26.302 03:02:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:26.302 03:02:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:42:26.302 03:02:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:42:26.302 03:02:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3196974 1 00:42:26.302 03:02:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3196974 1 busy 00:42:26.302 03:02:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3196974 00:42:26.302 03:02:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:42:26.302 03:02:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:42:26.302 03:02:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local 
busy_threshold=30 00:42:26.302 03:02:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:26.302 03:02:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:42:26.302 03:02:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:26.302 03:02:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:26.302 03:02:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:26.302 03:02:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3196974 -w 256 00:42:26.302 03:02:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:42:26.302 03:02:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3196978 root 20 0 20.1t 208896 100992 R 93.3 0.3 0:01.11 reactor_1' 00:42:26.302 03:02:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3196978 root 20 0 20.1t 208896 100992 R 93.3 0.3 0:01.11 reactor_1 00:42:26.302 03:02:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:26.302 03:02:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:26.302 03:02:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=93.3 00:42:26.302 03:02:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=93 00:42:26.302 03:02:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:42:26.302 03:02:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:42:26.302 03:02:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:42:26.302 03:02:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:26.302 03:02:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 3197149 00:42:36.277 Initializing NVMe Controllers 00:42:36.277 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:42:36.277 
Controller IO queue size 256, less than required. 00:42:36.277 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:42:36.277 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:42:36.277 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:42:36.277 Initialization complete. Launching workers. 00:42:36.277 ======================================================== 00:42:36.277 Latency(us) 00:42:36.277 Device Information : IOPS MiB/s Average min max 00:42:36.277 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 9538.99 37.26 26864.44 6285.35 32341.06 00:42:36.277 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 10850.69 42.39 23611.12 6827.28 28946.46 00:42:36.277 ======================================================== 00:42:36.277 Total : 20389.67 79.65 25133.14 6285.35 32341.06 00:42:36.277 00:42:36.277 03:02:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:42:36.277 03:02:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3196974 0 00:42:36.277 03:02:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3196974 0 idle 00:42:36.277 03:02:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3196974 00:42:36.277 03:02:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:42:36.277 03:02:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:42:36.277 03:02:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:42:36.277 03:02:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:36.277 03:02:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:42:36.277 03:02:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:42:36.277 03:02:43 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:36.277 03:02:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:36.277 03:02:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:36.277 03:02:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3196974 -w 256 00:42:36.277 03:02:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:42:36.277 03:02:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3196974 root 20 0 20.1t 208896 100992 S 0.0 0.3 0:19.52 reactor_0' 00:42:36.277 03:02:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3196974 root 20 0 20.1t 208896 100992 S 0.0 0.3 0:19.52 reactor_0 00:42:36.277 03:02:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:36.277 03:02:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:36.277 03:02:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:42:36.277 03:02:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:42:36.277 03:02:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:42:36.277 03:02:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:42:36.277 03:02:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:42:36.277 03:02:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:36.277 03:02:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:42:36.277 03:02:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3196974 1 00:42:36.277 03:02:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3196974 1 idle 00:42:36.277 03:02:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3196974 00:42:36.277 03:02:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local 
idx=1 00:42:36.277 03:02:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:42:36.277 03:02:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:42:36.277 03:02:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:36.277 03:02:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:42:36.277 03:02:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:42:36.277 03:02:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:36.277 03:02:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:36.277 03:02:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:36.277 03:02:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3196974 -w 256 00:42:36.277 03:02:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:42:36.277 03:02:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3196978 root 20 0 20.1t 208896 100992 S 0.0 0.3 0:08.82 reactor_1' 00:42:36.277 03:02:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3196978 root 20 0 20.1t 208896 100992 S 0.0 0.3 0:08.82 reactor_1 00:42:36.277 03:02:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:36.277 03:02:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:36.277 03:02:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:42:36.277 03:02:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:42:36.277 03:02:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:42:36.277 03:02:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:42:36.277 03:02:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:42:36.277 03:02:43 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@35 -- # return 0 00:42:36.277 03:02:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:42:36.277 03:02:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:42:36.277 03:02:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:42:36.277 03:02:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:42:36.277 03:02:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:42:36.277 03:02:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:42:38.180 03:02:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:42:38.180 03:02:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:42:38.180 03:02:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:42:38.180 03:02:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:42:38.180 03:02:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:42:38.180 03:02:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:42:38.180 03:02:46 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:42:38.180 03:02:46 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3196974 0 00:42:38.180 03:02:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3196974 0 idle 00:42:38.180 03:02:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3196974 00:42:38.180 03:02:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:42:38.180 03:02:46 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:42:38.180 03:02:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:42:38.180 03:02:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:38.180 03:02:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:42:38.180 03:02:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:42:38.180 03:02:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:38.180 03:02:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:38.180 03:02:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:38.180 03:02:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3196974 -w 256 00:42:38.180 03:02:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:42:38.180 03:02:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3196974 root 20 0 20.1t 236544 110592 S 0.0 0.4 0:19.69 reactor_0' 00:42:38.180 03:02:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3196974 root 20 0 20.1t 236544 110592 S 0.0 0.4 0:19.69 reactor_0 00:42:38.180 03:02:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:38.180 03:02:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:38.180 03:02:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:42:38.180 03:02:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:42:38.180 03:02:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:42:38.180 03:02:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:42:38.180 03:02:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:42:38.180 03:02:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # 
return 0 00:42:38.180 03:02:46 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:42:38.180 03:02:46 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3196974 1 00:42:38.180 03:02:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3196974 1 idle 00:42:38.180 03:02:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3196974 00:42:38.180 03:02:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:42:38.180 03:02:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:42:38.180 03:02:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:42:38.180 03:02:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:38.180 03:02:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:42:38.180 03:02:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:42:38.180 03:02:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:38.180 03:02:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:38.180 03:02:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:38.180 03:02:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3196974 -w 256 00:42:38.180 03:02:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:42:38.180 03:02:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3196978 root 20 0 20.1t 236544 110592 S 0.0 0.4 0:08.88 reactor_1' 00:42:38.180 03:02:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3196978 root 20 0 20.1t 236544 110592 S 0.0 0.4 0:08.88 reactor_1 00:42:38.180 03:02:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:38.180 03:02:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:38.180 03:02:46 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:42:38.180 03:02:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:42:38.180 03:02:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:42:38.180 03:02:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:42:38.180 03:02:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:42:38.180 03:02:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:38.180 03:02:46 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:42:38.747 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:42:38.747 03:02:46 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:42:38.747 03:02:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:42:38.747 03:02:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:42:38.747 03:02:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:42:38.747 03:02:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:42:38.747 03:02:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:42:38.747 03:02:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:42:38.747 03:02:46 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:42:38.747 03:02:46 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:42:38.747 03:02:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:38.747 03:02:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:42:38.747 03:02:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:38.747 03:02:46 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:42:38.747 03:02:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:38.748 03:02:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:38.748 rmmod nvme_tcp 00:42:38.748 rmmod nvme_fabrics 00:42:38.748 rmmod nvme_keyring 00:42:38.748 03:02:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:38.748 03:02:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:42:38.748 03:02:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:42:38.748 03:02:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 3196974 ']' 00:42:38.748 03:02:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 3196974 00:42:38.748 03:02:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 3196974 ']' 00:42:38.748 03:02:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 3196974 00:42:38.748 03:02:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:42:38.748 03:02:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:38.748 03:02:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3196974 00:42:38.748 03:02:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:38.748 03:02:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:38.748 03:02:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3196974' 00:42:38.748 killing process with pid 3196974 00:42:38.748 03:02:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 3196974 00:42:38.748 03:02:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 3196974 00:42:40.124 03:02:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:42:40.124 03:02:48 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:40.124 03:02:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:40.124 03:02:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:42:40.124 03:02:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:42:40.124 03:02:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:40.124 03:02:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:42:40.124 03:02:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:40.124 03:02:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:40.124 03:02:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:40.124 03:02:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:42:40.124 03:02:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:42.027 03:02:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:42.027 00:42:42.027 real 0m20.958s 00:42:42.027 user 0m37.067s 00:42:42.027 sys 0m7.616s 00:42:42.027 03:02:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:42.027 03:02:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:42.027 ************************************ 00:42:42.027 END TEST nvmf_interrupt 00:42:42.027 ************************************ 00:42:42.027 00:42:42.027 real 35m39.078s 00:42:42.027 user 93m37.046s 00:42:42.027 sys 7m51.807s 00:42:42.027 03:02:50 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:42.027 03:02:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:42.027 ************************************ 00:42:42.027 END TEST nvmf_tcp 00:42:42.027 ************************************ 00:42:42.027 03:02:50 -- 
spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:42:42.027 03:02:50 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:42:42.027 03:02:50 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:42:42.027 03:02:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:42.027 03:02:50 -- common/autotest_common.sh@10 -- # set +x 00:42:42.027 ************************************ 00:42:42.027 START TEST spdkcli_nvmf_tcp 00:42:42.027 ************************************ 00:42:42.027 03:02:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:42:42.027 * Looking for test storage... 00:42:42.027 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:42:42.027 03:02:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:42:42.027 03:02:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:42:42.027 03:02:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:42:42.027 03:02:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:42:42.027 03:02:50 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:42.027 03:02:50 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:42.027 03:02:50 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:42.027 03:02:50 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:42:42.027 03:02:50 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:42:42.027 03:02:50 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:42:42.027 03:02:50 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:42:42.027 03:02:50 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:42:42.027 03:02:50 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:42:42.027 
03:02:50 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:42:42.027 03:02:50 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:42.027 03:02:50 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:42:42.027 03:02:50 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:42:42.027 03:02:50 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:42.027 03:02:50 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:42:42.027 03:02:50 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:42:42.027 03:02:50 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:42:42.027 03:02:50 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:42.027 03:02:50 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:42:42.027 03:02:50 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:42:42.027 03:02:50 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:42:42.027 03:02:50 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:42:42.027 03:02:50 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:42.027 03:02:50 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:42:42.027 03:02:50 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:42:42.027 03:02:50 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:42.027 03:02:50 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:42.027 03:02:50 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:42:42.027 03:02:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:42.027 03:02:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:42:42.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:42.027 --rc genhtml_branch_coverage=1 00:42:42.027 --rc genhtml_function_coverage=1 00:42:42.027 
--rc genhtml_legend=1 00:42:42.027 --rc geninfo_all_blocks=1 00:42:42.027 --rc geninfo_unexecuted_blocks=1 00:42:42.027 00:42:42.027 ' 00:42:42.027 03:02:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:42:42.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:42.027 --rc genhtml_branch_coverage=1 00:42:42.027 --rc genhtml_function_coverage=1 00:42:42.027 --rc genhtml_legend=1 00:42:42.027 --rc geninfo_all_blocks=1 00:42:42.027 --rc geninfo_unexecuted_blocks=1 00:42:42.027 00:42:42.027 ' 00:42:42.027 03:02:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:42:42.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:42.027 --rc genhtml_branch_coverage=1 00:42:42.027 --rc genhtml_function_coverage=1 00:42:42.027 --rc genhtml_legend=1 00:42:42.027 --rc geninfo_all_blocks=1 00:42:42.027 --rc geninfo_unexecuted_blocks=1 00:42:42.027 00:42:42.027 ' 00:42:42.027 03:02:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:42:42.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:42.027 --rc genhtml_branch_coverage=1 00:42:42.027 --rc genhtml_function_coverage=1 00:42:42.027 --rc genhtml_legend=1 00:42:42.027 --rc geninfo_all_blocks=1 00:42:42.027 --rc geninfo_unexecuted_blocks=1 00:42:42.027 00:42:42.027 ' 00:42:42.027 03:02:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:42:42.027 03:02:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:42:42.027 03:02:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:42:42.027 03:02:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:42.027 03:02:50 spdkcli_nvmf_tcp -- 
nvmf/common.sh@7 -- # uname -s 00:42:42.027 03:02:50 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:42.027 03:02:50 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:42.027 03:02:50 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:42.027 03:02:50 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:42.027 03:02:50 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:42.027 03:02:50 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:42.027 03:02:50 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:42.027 03:02:50 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:42.027 03:02:50 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:42.027 03:02:50 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:42.027 03:02:50 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:42:42.027 03:02:50 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:42:42.027 03:02:50 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:42.027 03:02:50 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:42.027 03:02:50 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:42.028 03:02:50 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:42.028 03:02:50 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:42.028 03:02:50 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:42:42.286 03:02:50 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:42.286 03:02:50 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:42.286 03:02:50 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:42.286 03:02:50 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:42.286 03:02:50 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:42.286 03:02:50 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:42.286 03:02:50 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:42:42.286 03:02:50 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:42.286 03:02:50 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:42:42.286 03:02:50 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:42.286 03:02:50 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:42.286 03:02:50 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:42.286 03:02:50 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:42.287 03:02:50 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:42.287 03:02:50 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:42.287 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:42.287 03:02:50 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:42.287 03:02:50 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:42.287 03:02:50 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:42.287 03:02:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:42:42.287 03:02:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:42:42.287 03:02:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:42:42.287 03:02:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:42:42.287 03:02:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:42.287 03:02:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:42.287 03:02:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 
00:42:42.287 03:02:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3199286 00:42:42.287 03:02:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:42:42.287 03:02:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3199286 00:42:42.287 03:02:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 3199286 ']' 00:42:42.287 03:02:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:42.287 03:02:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:42.287 03:02:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:42.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:42.287 03:02:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:42.287 03:02:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:42.287 [2024-11-17 03:02:50.591106] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:42:42.287 [2024-11-17 03:02:50.591251] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3199286 ] 00:42:42.287 [2024-11-17 03:02:50.729890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:42:42.545 [2024-11-17 03:02:50.863729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:42.545 [2024-11-17 03:02:50.863732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:43.478 03:02:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:43.478 03:02:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:42:43.478 03:02:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:42:43.478 03:02:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:43.478 03:02:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:43.478 03:02:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:42:43.478 03:02:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:42:43.478 03:02:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:42:43.478 03:02:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:43.478 03:02:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:43.478 03:02:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:42:43.478 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:42:43.478 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:42:43.478 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:42:43.478 '\''/bdevs/malloc create 32 
512 Malloc5'\'' '\''Malloc5'\'' True 00:42:43.478 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:42:43.478 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:42:43.478 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:42:43.478 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:42:43.478 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:42:43.478 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:42:43.478 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:42:43.479 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:42:43.479 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:42:43.479 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:42:43.479 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:42:43.479 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:42:43.479 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:42:43.479 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:42:43.479 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' 
'\''nqn.2014-08.org.spdk:cnode2'\'' True 00:42:43.479 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:42:43.479 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:42:43.479 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:42:43.479 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:42:43.479 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:42:43.479 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:42:43.479 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:42:43.479 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:42:43.479 ' 00:42:46.008 [2024-11-17 03:02:54.398090] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:47.382 [2024-11-17 03:02:55.675723] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:42:49.911 [2024-11-17 03:02:58.031345] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:42:51.884 [2024-11-17 03:03:00.070122] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:42:53.258 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:42:53.258 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:42:53.258 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:42:53.258 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:42:53.258 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:42:53.258 Executing command: 
['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:42:53.258 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:42:53.258 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:42:53.258 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:42:53.258 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:42:53.258 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:42:53.258 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:42:53.258 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:42:53.258 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:42:53.258 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:42:53.258 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:42:53.258 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:42:53.258 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:42:53.258 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:42:53.258 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:42:53.258 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:42:53.258 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:42:53.258 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:42:53.258 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:42:53.258 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:42:53.258 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:42:53.258 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:42:53.258 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:42:53.517 03:03:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:42:53.517 03:03:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:53.517 03:03:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:53.517 03:03:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:42:53.517 03:03:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:53.517 03:03:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:53.517 03:03:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:42:53.517 03:03:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:42:53.775 03:03:02 spdkcli_nvmf_tcp -- 
spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:42:53.775 03:03:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:42:53.775 03:03:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:42:53.775 03:03:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:53.775 03:03:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:54.032 03:03:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:42:54.032 03:03:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:54.032 03:03:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:54.032 03:03:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:42:54.032 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:42:54.032 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:42:54.032 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:42:54.032 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:42:54.032 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:42:54.032 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:42:54.032 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:42:54.032 '\''/bdevs/malloc delete 
Malloc6'\'' '\''Malloc6'\'' 00:42:54.032 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:42:54.032 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:42:54.032 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:42:54.033 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:42:54.033 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:42:54.033 ' 00:43:00.589 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:43:00.589 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:43:00.589 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:43:00.589 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:43:00.589 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:43:00.589 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:43:00.589 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:43:00.589 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:43:00.589 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:43:00.590 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:43:00.590 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:43:00.590 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:43:00.590 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:43:00.590 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:43:00.590 03:03:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit 
spdkcli_clear_nvmf_config 00:43:00.590 03:03:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:00.590 03:03:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:43:00.590 03:03:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3199286 00:43:00.590 03:03:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 3199286 ']' 00:43:00.590 03:03:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 3199286 00:43:00.590 03:03:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:43:00.590 03:03:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:00.590 03:03:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3199286 00:43:00.590 03:03:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:00.590 03:03:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:00.590 03:03:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3199286' 00:43:00.590 killing process with pid 3199286 00:43:00.590 03:03:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 3199286 00:43:00.590 03:03:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 3199286 00:43:00.848 03:03:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:43:00.848 03:03:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:43:00.848 03:03:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3199286 ']' 00:43:00.848 03:03:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3199286 00:43:00.848 03:03:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 3199286 ']' 00:43:00.848 03:03:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 3199286 00:43:00.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3199286) - No such process 00:43:00.848 03:03:09 
spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 3199286 is not found' 00:43:00.848 Process with pid 3199286 is not found 00:43:00.848 03:03:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:43:00.848 03:03:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:43:00.848 03:03:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:43:00.848 00:43:00.848 real 0m18.969s 00:43:00.848 user 0m39.698s 00:43:00.848 sys 0m1.019s 00:43:00.848 03:03:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:00.848 03:03:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:43:00.848 ************************************ 00:43:00.848 END TEST spdkcli_nvmf_tcp 00:43:00.848 ************************************ 00:43:01.107 03:03:09 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:43:01.107 03:03:09 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:43:01.107 03:03:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:01.107 03:03:09 -- common/autotest_common.sh@10 -- # set +x 00:43:01.107 ************************************ 00:43:01.107 START TEST nvmf_identify_passthru 00:43:01.107 ************************************ 00:43:01.107 03:03:09 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:43:01.107 * Looking for test storage... 
00:43:01.107 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:43:01.107 03:03:09 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:43:01.107 03:03:09 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:43:01.107 03:03:09 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:43:01.107 03:03:09 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:43:01.107 03:03:09 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:01.107 03:03:09 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:01.107 03:03:09 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:01.107 03:03:09 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:43:01.107 03:03:09 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:43:01.107 03:03:09 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:43:01.107 03:03:09 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:43:01.107 03:03:09 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:43:01.107 03:03:09 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:43:01.107 03:03:09 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:43:01.107 03:03:09 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:01.107 03:03:09 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:43:01.107 03:03:09 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:43:01.107 03:03:09 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:01.107 03:03:09 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:01.107 03:03:09 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:43:01.107 03:03:09 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:43:01.107 03:03:09 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:01.107 03:03:09 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:43:01.107 03:03:09 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:43:01.107 03:03:09 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:43:01.107 03:03:09 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:43:01.107 03:03:09 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:01.107 03:03:09 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:43:01.107 03:03:09 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:43:01.107 03:03:09 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:01.107 03:03:09 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:01.107 03:03:09 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:43:01.107 03:03:09 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:01.107 03:03:09 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:43:01.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:01.107 --rc genhtml_branch_coverage=1 00:43:01.107 --rc genhtml_function_coverage=1 00:43:01.107 --rc genhtml_legend=1 00:43:01.107 --rc geninfo_all_blocks=1 00:43:01.107 --rc geninfo_unexecuted_blocks=1 00:43:01.107 00:43:01.107 ' 00:43:01.107 03:03:09 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:43:01.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:01.107 --rc genhtml_branch_coverage=1 00:43:01.107 --rc genhtml_function_coverage=1 
00:43:01.107 --rc genhtml_legend=1 00:43:01.107 --rc geninfo_all_blocks=1 00:43:01.107 --rc geninfo_unexecuted_blocks=1 00:43:01.107 00:43:01.107 ' 00:43:01.107 03:03:09 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:43:01.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:01.107 --rc genhtml_branch_coverage=1 00:43:01.107 --rc genhtml_function_coverage=1 00:43:01.107 --rc genhtml_legend=1 00:43:01.107 --rc geninfo_all_blocks=1 00:43:01.107 --rc geninfo_unexecuted_blocks=1 00:43:01.107 00:43:01.107 ' 00:43:01.107 03:03:09 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:43:01.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:01.107 --rc genhtml_branch_coverage=1 00:43:01.107 --rc genhtml_function_coverage=1 00:43:01.107 --rc genhtml_legend=1 00:43:01.107 --rc geninfo_all_blocks=1 00:43:01.107 --rc geninfo_unexecuted_blocks=1 00:43:01.107 00:43:01.107 ' 00:43:01.107 03:03:09 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:01.107 03:03:09 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:43:01.107 03:03:09 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:01.107 03:03:09 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:01.107 03:03:09 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:01.107 03:03:09 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:01.107 03:03:09 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:01.107 03:03:09 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:01.107 03:03:09 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:01.107 03:03:09 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:01.107 03:03:09 nvmf_identify_passthru -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:01.107 03:03:09 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:01.107 03:03:09 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:43:01.107 03:03:09 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:43:01.107 03:03:09 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:01.107 03:03:09 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:01.107 03:03:09 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:01.107 03:03:09 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:01.107 03:03:09 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:01.107 03:03:09 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:43:01.107 03:03:09 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:01.107 03:03:09 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:01.107 03:03:09 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:01.107 03:03:09 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:01.107 03:03:09 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:01.107 03:03:09 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:01.107 03:03:09 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:43:01.108 03:03:09 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:01.108 03:03:09 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:43:01.108 03:03:09 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:01.108 03:03:09 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:01.108 03:03:09 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:01.108 03:03:09 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:01.108 03:03:09 nvmf_identify_passthru -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:43:01.108 03:03:09 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:43:01.108 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:43:01.108 03:03:09 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:01.108 03:03:09 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:01.108 03:03:09 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:01.108 03:03:09 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:01.108 03:03:09 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:43:01.108 03:03:09 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:01.108 03:03:09 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:01.108 03:03:09 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:01.108 03:03:09 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:01.108 03:03:09 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:01.108 03:03:09 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:01.108 03:03:09 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:43:01.108 03:03:09 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:01.108 03:03:09 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:43:01.108 03:03:09 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:43:01.108 03:03:09 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:01.108 03:03:09 nvmf_identify_passthru -- nvmf/common.sh@476 -- 
# prepare_net_devs 00:43:01.108 03:03:09 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:43:01.108 03:03:09 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:43:01.108 03:03:09 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:01.108 03:03:09 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:43:01.108 03:03:09 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:01.108 03:03:09 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:43:01.108 03:03:09 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:43:01.108 03:03:09 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:43:01.108 03:03:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:03.665 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:03.665 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:43:03.665 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:03.665 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:43:03.665 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:03.665 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:03.665 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:03.665 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:43:03.665 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:43:03.665 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:43:03.665 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:43:03.665 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:43:03.665 03:03:11 
nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:43:03.665 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:43:03.665 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:43:03.665 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:03.665 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:03.665 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:03.665 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:03.665 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:03.665 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:03.665 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:03.665 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:43:03.665 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:03.665 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:03.665 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:03.665 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:03.665 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:43:03.665 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:43:03.665 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:43:03.665 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:43:03.665 
03:03:11 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:43:03.665 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:43:03.665 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:03.665 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:43:03.665 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:43:03.665 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:03.665 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:03.665 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:03.665 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:03.665 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:03.665 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:03.665 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:43:03.665 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:43:03.665 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:03.665 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:03.665 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:03.665 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:03.665 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:03.665 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:43:03.665 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:43:03.665 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:43:03.665 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:43:03.665 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:03.665 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:03.665 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:03.665 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:03.665 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:03.665 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:03.665 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:43:03.665 Found net devices under 0000:0a:00.0: cvl_0_0 00:43:03.665 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:03.665 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:03.665 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:03.665 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:03.665 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:03.665 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:03.665 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:03.665 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:03.665 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:43:03.665 Found net devices under 0000:0a:00.1: cvl_0_1 00:43:03.665 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:03.665 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:43:03.665 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:43:03.665 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:43:03.665 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:43:03.665 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:43:03.665 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:03.665 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:03.665 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:03.665 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:03.665 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:03.665 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:03.665 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:03.665 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:03.665 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:03.665 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:03.665 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:03.665 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:43:03.665 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:03.665 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:03.665 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:03.665 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev 
cvl_0_1 00:43:03.666 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:03.666 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:43:03.666 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:03.666 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:03.666 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:03.666 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:03.666 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:43:03.666 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:03.666 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.288 ms 00:43:03.666 00:43:03.666 --- 10.0.0.2 ping statistics --- 00:43:03.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:03.666 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:43:03.666 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:03.666 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:43:03.666 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:43:03.666 00:43:03.666 --- 10.0.0.1 ping statistics --- 00:43:03.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:03.666 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:43:03.666 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:03.666 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:43:03.666 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:43:03.666 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:03.666 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:43:03.666 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:43:03.666 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:03.666 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:43:03.666 03:03:11 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:43:03.666 03:03:11 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:43:03.666 03:03:11 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:03.666 03:03:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:03.666 03:03:11 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:43:03.666 03:03:11 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:43:03.666 03:03:11 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:43:03.666 03:03:11 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:43:03.666 03:03:11 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:43:03.666 03:03:11 nvmf_identify_passthru -- 
common/autotest_common.sh@1498 -- # bdfs=() 00:43:03.666 03:03:11 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:43:03.666 03:03:11 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:43:03.666 03:03:11 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:43:03.666 03:03:11 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:43:03.666 03:03:11 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:43:03.666 03:03:11 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:43:03.666 03:03:11 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:88:00.0 00:43:03.666 03:03:11 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:43:03.666 03:03:11 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:43:03.666 03:03:11 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:43:03.666 03:03:11 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:43:03.666 03:03:11 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:43:07.850 03:03:16 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:43:07.850 03:03:16 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:43:07.850 03:03:16 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:43:07.850 03:03:16 nvmf_identify_passthru -- 
target/identify_passthru.sh@24 -- # awk '{print $3}' 00:43:13.117 03:03:20 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:43:13.117 03:03:20 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:43:13.117 03:03:20 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:13.117 03:03:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:13.117 03:03:20 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:43:13.117 03:03:20 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:13.117 03:03:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:13.117 03:03:20 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3204859 00:43:13.117 03:03:20 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:43:13.117 03:03:20 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:43:13.117 03:03:20 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3204859 00:43:13.117 03:03:20 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 3204859 ']' 00:43:13.117 03:03:20 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:13.117 03:03:20 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:13.117 03:03:20 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:13.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:43:13.117 03:03:20 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:13.117 03:03:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:13.117 [2024-11-17 03:03:20.677637] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:43:13.117 [2024-11-17 03:03:20.677800] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:13.117 [2024-11-17 03:03:20.831947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:43:13.117 [2024-11-17 03:03:20.979306] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:13.117 [2024-11-17 03:03:20.979390] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:13.117 [2024-11-17 03:03:20.979416] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:13.117 [2024-11-17 03:03:20.979440] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:13.117 [2024-11-17 03:03:20.979460] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:43:13.117 [2024-11-17 03:03:20.982280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:43:13.117 [2024-11-17 03:03:20.982340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:43:13.117 [2024-11-17 03:03:20.982392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:13.117 [2024-11-17 03:03:20.982398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:43:13.375 03:03:21 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:13.375 03:03:21 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:43:13.375 03:03:21 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:43:13.375 03:03:21 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:13.375 03:03:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:13.375 INFO: Log level set to 20 00:43:13.375 INFO: Requests: 00:43:13.375 { 00:43:13.375 "jsonrpc": "2.0", 00:43:13.375 "method": "nvmf_set_config", 00:43:13.375 "id": 1, 00:43:13.375 "params": { 00:43:13.375 "admin_cmd_passthru": { 00:43:13.375 "identify_ctrlr": true 00:43:13.375 } 00:43:13.375 } 00:43:13.375 } 00:43:13.375 00:43:13.375 INFO: response: 00:43:13.375 { 00:43:13.375 "jsonrpc": "2.0", 00:43:13.375 "id": 1, 00:43:13.375 "result": true 00:43:13.375 } 00:43:13.375 00:43:13.375 03:03:21 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:13.375 03:03:21 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:43:13.375 03:03:21 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:13.375 03:03:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:13.375 INFO: Setting log level to 20 00:43:13.375 INFO: Setting log level to 20 00:43:13.375 INFO: Log level set to 20 00:43:13.375 INFO: Log level set to 20 00:43:13.375 
INFO: Requests: 00:43:13.375 { 00:43:13.375 "jsonrpc": "2.0", 00:43:13.375 "method": "framework_start_init", 00:43:13.375 "id": 1 00:43:13.375 } 00:43:13.375 00:43:13.375 INFO: Requests: 00:43:13.375 { 00:43:13.375 "jsonrpc": "2.0", 00:43:13.375 "method": "framework_start_init", 00:43:13.375 "id": 1 00:43:13.375 } 00:43:13.375 00:43:13.633 [2024-11-17 03:03:21.960018] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:43:13.633 INFO: response: 00:43:13.633 { 00:43:13.633 "jsonrpc": "2.0", 00:43:13.633 "id": 1, 00:43:13.633 "result": true 00:43:13.633 } 00:43:13.633 00:43:13.633 INFO: response: 00:43:13.633 { 00:43:13.633 "jsonrpc": "2.0", 00:43:13.633 "id": 1, 00:43:13.633 "result": true 00:43:13.633 } 00:43:13.633 00:43:13.633 03:03:21 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:13.633 03:03:21 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:43:13.633 03:03:21 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:13.633 03:03:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:13.633 INFO: Setting log level to 40 00:43:13.633 INFO: Setting log level to 40 00:43:13.633 INFO: Setting log level to 40 00:43:13.634 [2024-11-17 03:03:21.973005] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:13.634 03:03:21 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:13.634 03:03:21 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:43:13.634 03:03:21 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:13.634 03:03:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:13.634 03:03:22 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:43:13.634 03:03:22 
nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:13.634 03:03:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:16.913 Nvme0n1 00:43:16.913 03:03:24 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:16.913 03:03:24 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:43:16.913 03:03:24 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:16.913 03:03:24 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:16.913 03:03:24 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:16.913 03:03:24 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:43:16.913 03:03:24 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:16.913 03:03:24 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:16.913 03:03:24 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:16.913 03:03:24 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:16.913 03:03:24 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:16.913 03:03:24 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:16.913 [2024-11-17 03:03:24.928940] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:16.913 03:03:24 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:16.913 03:03:24 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:43:16.913 03:03:24 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:16.913 03:03:24 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:16.913 [ 00:43:16.913 { 00:43:16.913 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:43:16.913 "subtype": "Discovery", 00:43:16.913 "listen_addresses": [], 00:43:16.913 "allow_any_host": true, 00:43:16.913 "hosts": [] 00:43:16.913 }, 00:43:16.913 { 00:43:16.913 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:43:16.913 "subtype": "NVMe", 00:43:16.913 "listen_addresses": [ 00:43:16.913 { 00:43:16.913 "trtype": "TCP", 00:43:16.913 "adrfam": "IPv4", 00:43:16.913 "traddr": "10.0.0.2", 00:43:16.913 "trsvcid": "4420" 00:43:16.913 } 00:43:16.913 ], 00:43:16.913 "allow_any_host": true, 00:43:16.913 "hosts": [], 00:43:16.913 "serial_number": "SPDK00000000000001", 00:43:16.913 "model_number": "SPDK bdev Controller", 00:43:16.913 "max_namespaces": 1, 00:43:16.913 "min_cntlid": 1, 00:43:16.913 "max_cntlid": 65519, 00:43:16.913 "namespaces": [ 00:43:16.913 { 00:43:16.913 "nsid": 1, 00:43:16.913 "bdev_name": "Nvme0n1", 00:43:16.913 "name": "Nvme0n1", 00:43:16.913 "nguid": "AB34904BAFD04F73B5AF25BDE3691B63", 00:43:16.913 "uuid": "ab34904b-afd0-4f73-b5af-25bde3691b63" 00:43:16.913 } 00:43:16.913 ] 00:43:16.913 } 00:43:16.913 ] 00:43:16.913 03:03:24 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:16.913 03:03:24 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:43:16.913 03:03:24 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:43:16.913 03:03:24 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:43:16.913 03:03:25 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:43:16.913 03:03:25 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:43:16.913 03:03:25 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:43:16.913 03:03:25 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:43:17.171 03:03:25 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:43:17.171 03:03:25 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:43:17.171 03:03:25 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:43:17.171 03:03:25 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:17.171 03:03:25 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:17.171 03:03:25 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:17.429 03:03:25 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:17.429 03:03:25 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:43:17.429 03:03:25 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:43:17.429 03:03:25 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:43:17.429 03:03:25 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:43:17.429 03:03:25 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:17.429 03:03:25 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:43:17.429 03:03:25 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:17.429 03:03:25 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:17.429 rmmod nvme_tcp 00:43:17.429 rmmod nvme_fabrics 00:43:17.429 rmmod nvme_keyring 00:43:17.429 03:03:25 
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:43:17.429 03:03:25 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:43:17.429 03:03:25 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:43:17.429 03:03:25 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 3204859 ']' 00:43:17.429 03:03:25 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 3204859 00:43:17.429 03:03:25 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 3204859 ']' 00:43:17.429 03:03:25 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 3204859 00:43:17.429 03:03:25 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:43:17.429 03:03:25 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:17.429 03:03:25 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3204859 00:43:17.429 03:03:25 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:17.429 03:03:25 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:17.429 03:03:25 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3204859' 00:43:17.429 killing process with pid 3204859 00:43:17.429 03:03:25 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 3204859 00:43:17.429 03:03:25 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 3204859 00:43:19.957 03:03:28 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:43:19.957 03:03:28 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:43:19.957 03:03:28 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:43:19.957 03:03:28 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:43:19.957 03:03:28 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:43:19.957 03:03:28 nvmf_identify_passthru -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:43:19.957 03:03:28 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:43:19.957 03:03:28 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:19.957 03:03:28 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:19.957 03:03:28 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:19.957 03:03:28 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:43:19.957 03:03:28 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:21.869 03:03:30 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:21.869 00:43:21.869 real 0m20.935s 00:43:21.869 user 0m33.980s 00:43:21.869 sys 0m3.605s 00:43:21.869 03:03:30 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:21.869 03:03:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:21.869 ************************************ 00:43:21.869 END TEST nvmf_identify_passthru 00:43:21.869 ************************************ 00:43:21.869 03:03:30 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:43:21.869 03:03:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:21.869 03:03:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:21.869 03:03:30 -- common/autotest_common.sh@10 -- # set +x 00:43:21.869 ************************************ 00:43:21.869 START TEST nvmf_dif 00:43:21.869 ************************************ 00:43:21.869 03:03:30 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:43:22.129 * Looking for test storage... 
00:43:22.129 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:43:22.129 03:03:30 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:43:22.129 03:03:30 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:43:22.129 03:03:30 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:43:22.129 03:03:30 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:43:22.129 03:03:30 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:22.129 03:03:30 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:22.129 03:03:30 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:22.129 03:03:30 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:43:22.129 03:03:30 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:43:22.129 03:03:30 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:43:22.129 03:03:30 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:43:22.129 03:03:30 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:43:22.129 03:03:30 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:43:22.129 03:03:30 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:43:22.129 03:03:30 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:22.129 03:03:30 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:43:22.129 03:03:30 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:43:22.129 03:03:30 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:22.129 03:03:30 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:22.129 03:03:30 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:43:22.129 03:03:30 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:43:22.129 03:03:30 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:22.129 03:03:30 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:43:22.129 03:03:30 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:43:22.129 03:03:30 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:43:22.129 03:03:30 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:43:22.129 03:03:30 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:22.129 03:03:30 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:43:22.129 03:03:30 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:43:22.129 03:03:30 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:22.129 03:03:30 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:22.129 03:03:30 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:43:22.129 03:03:30 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:22.129 03:03:30 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:43:22.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:22.129 --rc genhtml_branch_coverage=1 00:43:22.129 --rc genhtml_function_coverage=1 00:43:22.129 --rc genhtml_legend=1 00:43:22.129 --rc geninfo_all_blocks=1 00:43:22.129 --rc geninfo_unexecuted_blocks=1 00:43:22.129 00:43:22.129 ' 00:43:22.129 03:03:30 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:43:22.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:22.129 --rc genhtml_branch_coverage=1 00:43:22.129 --rc genhtml_function_coverage=1 00:43:22.129 --rc genhtml_legend=1 00:43:22.129 --rc geninfo_all_blocks=1 00:43:22.129 --rc geninfo_unexecuted_blocks=1 00:43:22.129 00:43:22.129 ' 00:43:22.129 03:03:30 nvmf_dif -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:43:22.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:22.129 --rc genhtml_branch_coverage=1 00:43:22.129 --rc genhtml_function_coverage=1 00:43:22.129 --rc genhtml_legend=1 00:43:22.129 --rc geninfo_all_blocks=1 00:43:22.129 --rc geninfo_unexecuted_blocks=1 00:43:22.129 00:43:22.129 ' 00:43:22.129 03:03:30 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:43:22.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:22.129 --rc genhtml_branch_coverage=1 00:43:22.129 --rc genhtml_function_coverage=1 00:43:22.129 --rc genhtml_legend=1 00:43:22.129 --rc geninfo_all_blocks=1 00:43:22.129 --rc geninfo_unexecuted_blocks=1 00:43:22.129 00:43:22.129 ' 00:43:22.129 03:03:30 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:22.129 03:03:30 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:43:22.129 03:03:30 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:22.129 03:03:30 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:22.129 03:03:30 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:22.129 03:03:30 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:22.129 03:03:30 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:22.129 03:03:30 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:22.129 03:03:30 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:22.129 03:03:30 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:22.129 03:03:30 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:22.129 03:03:30 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:22.129 03:03:30 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:43:22.129 03:03:30 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:43:22.129 03:03:30 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:22.129 03:03:30 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:22.129 03:03:30 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:22.129 03:03:30 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:22.129 03:03:30 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:22.129 03:03:30 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:43:22.129 03:03:30 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:22.129 03:03:30 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:22.129 03:03:30 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:22.129 03:03:30 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:22.130 03:03:30 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:22.130 03:03:30 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:22.130 03:03:30 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:43:22.130 03:03:30 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:22.130 03:03:30 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:43:22.130 03:03:30 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:22.130 03:03:30 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:22.130 03:03:30 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:22.130 03:03:30 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:22.130 03:03:30 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:22.130 03:03:30 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:43:22.130 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:43:22.130 03:03:30 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:22.130 03:03:30 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:22.130 03:03:30 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:22.130 03:03:30 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:43:22.130 03:03:30 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
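[Editor's annotation] The `[: : integer expression expected` message just above is `test(1)` being handed an empty string where it expects an integer — `common.sh` line 33 expands to `'[' '' -eq 1 ']'` because the variable it tests is unset. A hedged reproduction (variable name `flag` is illustrative), with the usual defensive default:

```shell
#!/usr/bin/env bash
flag=""
# This is the failing shape from the trace: '[' '' -eq 1 ']' errors with
# "integer expression expected" and test returns non-zero.
[ "$flag" -eq 1 ] 2>/dev/null || echo "empty operand: errors (or is false)"
# Defaulting the expansion avoids the error entirely:
[ "${flag:-0}" -eq 1 ] || echo "defaulted to 0, comparison is simply false"
```

The harness tolerates the error here because the branch is non-fatal, which is why the run continues past it.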
00:43:22.130 03:03:30 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:43:22.130 03:03:30 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:43:22.130 03:03:30 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:43:22.130 03:03:30 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:43:22.130 03:03:30 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:22.130 03:03:30 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:43:22.130 03:03:30 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:43:22.130 03:03:30 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:43:22.130 03:03:30 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:22.130 03:03:30 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:43:22.130 03:03:30 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:22.130 03:03:30 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:43:22.130 03:03:30 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:43:22.130 03:03:30 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:43:22.130 03:03:30 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:24.035 03:03:32 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:24.035 03:03:32 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:43:24.035 03:03:32 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:24.035 03:03:32 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:43:24.035 03:03:32 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:24.035 03:03:32 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:24.035 03:03:32 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:24.035 03:03:32 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:43:24.035 03:03:32 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:43:24.035 03:03:32 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:43:24.035 03:03:32 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:43:24.035 03:03:32 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:43:24.035 03:03:32 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:43:24.035 03:03:32 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:43:24.035 03:03:32 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:43:24.035 03:03:32 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:24.035 03:03:32 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:24.035 03:03:32 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:24.035 03:03:32 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:24.035 03:03:32 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:24.035 03:03:32 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:24.035 03:03:32 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:24.035 03:03:32 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:43:24.035 03:03:32 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:24.035 03:03:32 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:24.035 03:03:32 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:24.035 03:03:32 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:24.035 03:03:32 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:43:24.035 03:03:32 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:43:24.035 03:03:32 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:43:24.035 03:03:32 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:43:24.035 03:03:32 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:43:24.035 03:03:32 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:43:24.035 03:03:32 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:24.035 03:03:32 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:43:24.035 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:43:24.035 03:03:32 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:24.035 03:03:32 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:24.035 03:03:32 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:24.035 03:03:32 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:24.035 03:03:32 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:24.035 03:03:32 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:24.035 03:03:32 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:43:24.035 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:43:24.035 03:03:32 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:24.035 03:03:32 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:24.035 03:03:32 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:24.035 03:03:32 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:24.035 03:03:32 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:24.035 03:03:32 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:43:24.035 03:03:32 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:43:24.035 03:03:32 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:43:24.035 03:03:32 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:24.035 03:03:32 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:24.035 03:03:32 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:24.035 03:03:32 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:24.035 03:03:32 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:24.035 03:03:32 nvmf_dif -- 
nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:24.035 03:03:32 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:24.035 03:03:32 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:43:24.035 Found net devices under 0000:0a:00.0: cvl_0_0 00:43:24.035 03:03:32 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:24.294 03:03:32 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:24.294 03:03:32 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:24.294 03:03:32 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:24.294 03:03:32 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:24.294 03:03:32 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:24.294 03:03:32 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:24.294 03:03:32 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:24.294 03:03:32 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:43:24.294 Found net devices under 0000:0a:00.1: cvl_0_1 00:43:24.294 03:03:32 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:24.294 03:03:32 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:43:24.294 03:03:32 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:43:24.294 03:03:32 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:43:24.294 03:03:32 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:43:24.294 03:03:32 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:43:24.294 03:03:32 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:24.294 03:03:32 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:24.294 03:03:32 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:24.294 03:03:32 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:24.294 
03:03:32 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:24.294 03:03:32 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:24.294 03:03:32 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:24.294 03:03:32 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:24.294 03:03:32 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:24.294 03:03:32 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:24.294 03:03:32 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:24.294 03:03:32 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:43:24.294 03:03:32 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:24.294 03:03:32 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:24.294 03:03:32 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:24.294 03:03:32 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:24.294 03:03:32 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:24.294 03:03:32 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:43:24.294 03:03:32 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:24.294 03:03:32 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:24.294 03:03:32 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:24.294 03:03:32 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:24.294 03:03:32 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:43:24.294 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:43:24.294 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.337 ms 00:43:24.294 00:43:24.294 --- 10.0.0.2 ping statistics --- 00:43:24.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:24.294 rtt min/avg/max/mdev = 0.337/0.337/0.337/0.000 ms 00:43:24.294 03:03:32 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:24.294 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:43:24.294 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:43:24.294 00:43:24.294 --- 10.0.0.1 ping statistics --- 00:43:24.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:24.294 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:43:24.294 03:03:32 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:24.294 03:03:32 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:43:24.294 03:03:32 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:43:24.294 03:03:32 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:43:25.671 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:43:25.671 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:43:25.671 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:43:25.671 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:43:25.671 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:43:25.671 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:43:25.671 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:43:25.671 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:43:25.671 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:43:25.671 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:43:25.671 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:43:25.671 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:43:25.671 0000:80:04.4 (8086 0e24): Already 
using the vfio-pci driver 00:43:25.671 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:43:25.671 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:43:25.671 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:43:25.671 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:43:25.671 03:03:33 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:25.671 03:03:33 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:43:25.671 03:03:33 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:43:25.671 03:03:33 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:25.671 03:03:33 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:43:25.671 03:03:33 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:43:25.671 03:03:33 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:43:25.671 03:03:33 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:43:25.671 03:03:33 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:43:25.671 03:03:33 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:25.671 03:03:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:25.671 03:03:33 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=3208333 00:43:25.671 03:03:33 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:43:25.671 03:03:33 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 3208333 00:43:25.671 03:03:33 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 3208333 ']' 00:43:25.671 03:03:33 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:25.671 03:03:33 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:25.671 03:03:33 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:43:25.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:25.671 03:03:33 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:25.671 03:03:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:25.671 [2024-11-17 03:03:34.063089] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:43:25.671 [2024-11-17 03:03:34.063243] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:25.929 [2024-11-17 03:03:34.202392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:25.930 [2024-11-17 03:03:34.321645] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:25.930 [2024-11-17 03:03:34.321722] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:25.930 [2024-11-17 03:03:34.321743] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:25.930 [2024-11-17 03:03:34.321763] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:25.930 [2024-11-17 03:03:34.321779] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
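[Editor's annotation] The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." step above is `waitforlisten` polling until the `nvmf_tgt` RPC socket comes up, with a bounded retry count (`max_retries=100` in the trace). A simplified sketch of that idea — the name `waitforpath` and the plain path-existence check are illustrative; the real helper also verifies the PID and that the socket answers RPCs:

```shell
#!/usr/bin/env bash
# Poll until a path appears, up to a bounded number of 0.1s retries.
waitforpath() {
  local path=$1 retries=${2:-100}
  while (( retries-- > 0 )); do
    [ -e "$path" ] && return 0
    sleep 0.1
  done
  return 1
}
```

On success the harness proceeds to issue RPCs (`nvmf_create_transport`, `bdev_null_create`, ...) against the socket, as seen in the lines that follow.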
00:43:25.930 [2024-11-17 03:03:34.323231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:26.865 03:03:35 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:26.865 03:03:35 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:43:26.866 03:03:35 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:43:26.866 03:03:35 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:26.866 03:03:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:26.866 03:03:35 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:26.866 03:03:35 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:43:26.866 03:03:35 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:43:26.866 03:03:35 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:26.866 03:03:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:26.866 [2024-11-17 03:03:35.071090] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:26.866 03:03:35 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:26.866 03:03:35 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:43:26.866 03:03:35 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:26.866 03:03:35 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:26.866 03:03:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:26.866 ************************************ 00:43:26.866 START TEST fio_dif_1_default 00:43:26.866 ************************************ 00:43:26.866 03:03:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:43:26.866 03:03:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:43:26.866 03:03:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:43:26.866 03:03:35 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:43:26.866 03:03:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:43:26.866 03:03:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:43:26.866 03:03:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:43:26.866 03:03:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:26.866 03:03:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:26.866 bdev_null0 00:43:26.866 03:03:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:26.866 03:03:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:43:26.866 03:03:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:26.866 03:03:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:26.866 03:03:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:26.866 03:03:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:43:26.866 03:03:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:26.866 03:03:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:26.866 03:03:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:26.866 03:03:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:26.866 03:03:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:26.866 03:03:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:26.866 [2024-11-17 03:03:35.131493] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:26.866 03:03:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:26.866 03:03:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:43:26.866 03:03:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:43:26.866 03:03:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:43:26.866 03:03:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:43:26.866 03:03:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:43:26.866 03:03:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:26.866 03:03:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:26.866 03:03:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:26.866 { 00:43:26.866 "params": { 00:43:26.866 "name": "Nvme$subsystem", 00:43:26.866 "trtype": "$TEST_TRANSPORT", 00:43:26.866 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:26.866 "adrfam": "ipv4", 00:43:26.866 "trsvcid": "$NVMF_PORT", 00:43:26.866 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:26.866 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:26.866 "hdgst": ${hdgst:-false}, 00:43:26.866 "ddgst": ${ddgst:-false} 00:43:26.866 }, 00:43:26.866 "method": "bdev_nvme_attach_controller" 00:43:26.866 } 00:43:26.866 EOF 00:43:26.866 )") 00:43:26.866 03:03:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:26.866 03:03:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:43:26.866 03:03:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:43:26.866 03:03:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:43:26.866 03:03:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:26.866 03:03:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:43:26.866 03:03:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:43:26.866 03:03:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:43:26.866 03:03:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:26.866 03:03:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:43:26.866 03:03:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:43:26.866 03:03:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:43:26.866 03:03:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:26.866 03:03:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:43:26.866 03:03:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:26.866 03:03:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:43:26.866 03:03:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:43:26.866 03:03:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:43:26.866 03:03:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:43:26.866 03:03:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:43:26.866 "params": { 00:43:26.866 "name": "Nvme0", 00:43:26.866 "trtype": "tcp", 00:43:26.866 "traddr": "10.0.0.2", 00:43:26.866 "adrfam": "ipv4", 00:43:26.866 "trsvcid": "4420", 00:43:26.866 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:26.866 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:26.866 "hdgst": false, 00:43:26.866 "ddgst": false 00:43:26.866 }, 00:43:26.866 "method": "bdev_nvme_attach_controller" 00:43:26.866 }' 00:43:26.866 03:03:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:43:26.866 03:03:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:43:26.866 03:03:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1351 -- # break 00:43:26.866 03:03:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:43:26.866 03:03:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:27.125 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:43:27.125 fio-3.35 00:43:27.125 Starting 1 thread 00:43:39.323 00:43:39.324 filename0: (groupid=0, jobs=1): err= 0: pid=3208683: Sun Nov 17 03:03:46 2024 00:43:39.324 read: IOPS=142, BW=571KiB/s (585kB/s)(5712KiB/10006msec) 00:43:39.324 slat (nsec): min=5791, max=47662, avg=15759.77, stdev=6041.73 00:43:39.324 clat (usec): min=703, max=44431, avg=27978.99, stdev=18861.83 00:43:39.324 lat (usec): min=720, max=44466, avg=27994.75, stdev=18861.12 00:43:39.324 clat percentiles (usec): 00:43:39.324 | 1.00th=[ 725], 5.00th=[ 766], 10.00th=[ 791], 20.00th=[ 816], 
00:43:39.324 | 30.00th=[ 848], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:43:39.324 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:43:39.324 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44303], 99.95th=[44303], 00:43:39.324 | 99.99th=[44303] 00:43:39.324 bw ( KiB/s): min= 384, max= 768, per=99.67%, avg=569.60, stdev=178.50, samples=20 00:43:39.324 iops : min= 96, max= 192, avg=142.40, stdev=44.63, samples=20 00:43:39.324 lat (usec) : 750=3.01%, 1000=29.48% 00:43:39.324 lat (msec) : 50=67.51% 00:43:39.324 cpu : usr=92.86%, sys=6.62%, ctx=17, majf=0, minf=1636 00:43:39.324 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:39.324 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:39.324 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:39.324 issued rwts: total=1428,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:39.324 latency : target=0, window=0, percentile=100.00%, depth=4 00:43:39.324 00:43:39.324 Run status group 0 (all jobs): 00:43:39.324 READ: bw=571KiB/s (585kB/s), 571KiB/s-571KiB/s (585kB/s-585kB/s), io=5712KiB (5849kB), run=10006-10006msec 00:43:39.324 ----------------------------------------------------- 00:43:39.324 Suppressions used: 00:43:39.324 count bytes template 00:43:39.324 1 8 /usr/src/fio/parse.c 00:43:39.324 1 8 libtcmalloc_minimal.so 00:43:39.324 1 904 libcrypto.so 00:43:39.324 ----------------------------------------------------- 00:43:39.324 00:43:39.324 03:03:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:43:39.324 03:03:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:43:39.324 03:03:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:43:39.324 03:03:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:43:39.324 03:03:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:43:39.324 03:03:47 
nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:43:39.324 03:03:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:39.324 03:03:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:39.324 03:03:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:39.324 03:03:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:43:39.324 03:03:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:39.324 03:03:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:39.324 03:03:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:39.324 00:43:39.324 real 0m12.437s 00:43:39.324 user 0m11.629s 00:43:39.324 sys 0m1.161s 00:43:39.324 03:03:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:39.324 03:03:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:39.324 ************************************ 00:43:39.324 END TEST fio_dif_1_default 00:43:39.324 ************************************ 00:43:39.324 03:03:47 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:43:39.324 03:03:47 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:39.324 03:03:47 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:39.324 03:03:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:39.324 ************************************ 00:43:39.324 START TEST fio_dif_1_multi_subsystems 00:43:39.324 ************************************ 00:43:39.324 03:03:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:43:39.324 03:03:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:43:39.324 03:03:47 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:43:39.324 03:03:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:43:39.324 03:03:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:43:39.324 03:03:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:43:39.324 03:03:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:43:39.324 03:03:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:43:39.324 03:03:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:39.324 03:03:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:39.324 bdev_null0 00:43:39.324 03:03:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:39.324 03:03:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:43:39.324 03:03:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:39.324 03:03:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:39.324 03:03:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:39.324 03:03:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:43:39.324 03:03:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:39.324 03:03:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:39.324 03:03:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:39.324 03:03:47 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:39.324 03:03:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:39.324 03:03:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:39.324 [2024-11-17 03:03:47.622243] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:39.324 03:03:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:39.324 03:03:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:43:39.324 03:03:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:43:39.324 03:03:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:43:39.324 03:03:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:43:39.324 03:03:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:39.324 03:03:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:39.324 bdev_null1 00:43:39.324 03:03:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:39.324 03:03:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:43:39.324 03:03:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:39.324 03:03:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:39.324 03:03:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:39.324 03:03:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # 
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:43:39.324 03:03:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:39.324 03:03:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:39.324 03:03:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:39.324 03:03:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:39.324 03:03:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:39.324 03:03:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:39.324 03:03:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:39.324 03:03:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:43:39.324 03:03:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:43:39.324 03:03:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:43:39.324 03:03:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:43:39.324 03:03:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:43:39.324 03:03:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:39.324 03:03:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:39.324 03:03:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:39.324 { 00:43:39.324 "params": { 00:43:39.324 "name": "Nvme$subsystem", 00:43:39.324 "trtype": "$TEST_TRANSPORT", 00:43:39.324 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:39.324 "adrfam": "ipv4", 00:43:39.324 "trsvcid": 
"$NVMF_PORT", 00:43:39.324 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:39.324 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:39.324 "hdgst": ${hdgst:-false}, 00:43:39.324 "ddgst": ${ddgst:-false} 00:43:39.324 }, 00:43:39.324 "method": "bdev_nvme_attach_controller" 00:43:39.324 } 00:43:39.324 EOF 00:43:39.324 )") 00:43:39.324 03:03:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:39.324 03:03:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:43:39.324 03:03:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:39.324 03:03:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:43:39.324 03:03:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:43:39.324 03:03:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:39.324 03:03:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:43:39.325 03:03:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:43:39.325 03:03:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:43:39.325 03:03:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:43:39.325 03:03:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:39.325 03:03:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:43:39.325 03:03:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:39.325 03:03:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:43:39.325 03:03:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:39.325 03:03:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:43:39.325 03:03:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:43:39.325 03:03:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:43:39.325 03:03:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:39.325 03:03:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:39.325 { 00:43:39.325 "params": { 00:43:39.325 "name": "Nvme$subsystem", 00:43:39.325 "trtype": "$TEST_TRANSPORT", 00:43:39.325 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:39.325 "adrfam": "ipv4", 00:43:39.325 "trsvcid": "$NVMF_PORT", 00:43:39.325 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:39.325 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:39.325 "hdgst": ${hdgst:-false}, 00:43:39.325 "ddgst": ${ddgst:-false} 00:43:39.325 }, 00:43:39.325 "method": "bdev_nvme_attach_controller" 00:43:39.325 } 00:43:39.325 EOF 00:43:39.325 )") 00:43:39.325 03:03:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:43:39.325 03:03:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:43:39.325 03:03:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:43:39.325 03:03:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
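Interleaved with the config assembly, the harness resolves which ASan runtime the fio plugin links against, using the `ldd ... | grep libasan | awk '{print $3}'` pipeline traced from autotest_common.sh, so that the sanitizer library can be listed first in `LD_PRELOAD`. A minimal reproduction of that extraction against canned `ldd` output (the library paths here are illustrative, not read from a real binary):

```shell
#!/usr/bin/env bash
# Canned output standing in for: ldd <spdk_bdev fio plugin>
ldd_output='	libasan.so.8 => /usr/lib64/libasan.so.8 (0x00007f0000000000)
	libc.so.6 => /lib64/libc.so.6 (0x00007f0000200000)'

# Same extraction the harness performs: the third field ("path")
# of the libasan line in the ldd listing.
asan_lib=$(printf '%s\n' "$ldd_output" | grep libasan | awk '{print $3}')

if [[ -n "$asan_lib" ]]; then
  # The sanitizer runtime must come before the plugin in LD_PRELOAD,
  # matching the traced LD_PRELOAD='<asan_lib> <plugin>' assignment.
  LD_PRELOAD="$asan_lib /path/to/spdk_bdev"
fi
printf '%s\n' "$LD_PRELOAD"
```

In the trace this resolves to `/usr/lib64/libasan.so.8`, which is why the subsequent fio invocation runs with that library preloaded ahead of the spdk_bdev plugin.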
00:43:39.325 03:03:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:43:39.325 03:03:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:43:39.325 "params": { 00:43:39.325 "name": "Nvme0", 00:43:39.325 "trtype": "tcp", 00:43:39.325 "traddr": "10.0.0.2", 00:43:39.325 "adrfam": "ipv4", 00:43:39.325 "trsvcid": "4420", 00:43:39.325 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:39.325 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:39.325 "hdgst": false, 00:43:39.325 "ddgst": false 00:43:39.325 }, 00:43:39.325 "method": "bdev_nvme_attach_controller" 00:43:39.325 },{ 00:43:39.325 "params": { 00:43:39.325 "name": "Nvme1", 00:43:39.325 "trtype": "tcp", 00:43:39.325 "traddr": "10.0.0.2", 00:43:39.325 "adrfam": "ipv4", 00:43:39.325 "trsvcid": "4420", 00:43:39.325 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:43:39.325 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:43:39.325 "hdgst": false, 00:43:39.325 "ddgst": false 00:43:39.325 }, 00:43:39.325 "method": "bdev_nvme_attach_controller" 00:43:39.325 }' 00:43:39.325 03:03:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:43:39.325 03:03:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:43:39.325 03:03:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1351 -- # break 00:43:39.325 03:03:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:43:39.325 03:03:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:39.583 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:43:39.583 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:43:39.583 fio-3.35 00:43:39.583 Starting 2 threads 00:43:51.782 00:43:51.782 filename0: (groupid=0, jobs=1): err= 0: pid=3210203: Sun Nov 17 03:03:59 2024 00:43:51.782 read: IOPS=143, BW=574KiB/s (588kB/s)(5744KiB/10008msec) 00:43:51.782 slat (usec): min=5, max=102, avg=14.45, stdev= 6.32 00:43:51.782 clat (usec): min=702, max=44914, avg=27831.23, stdev=18940.32 00:43:51.782 lat (usec): min=712, max=44961, avg=27845.69, stdev=18940.50 00:43:51.782 clat percentiles (usec): 00:43:51.782 | 1.00th=[ 717], 5.00th=[ 742], 10.00th=[ 758], 20.00th=[ 783], 00:43:51.782 | 30.00th=[ 824], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:43:51.782 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:43:51.782 | 99.00th=[41681], 99.50th=[41681], 99.90th=[44827], 99.95th=[44827], 00:43:51.782 | 99.99th=[44827] 00:43:51.782 bw ( KiB/s): min= 384, max= 832, per=59.43%, avg=572.80, stdev=188.29, samples=20 00:43:51.782 iops : min= 96, max= 208, avg=143.20, stdev=47.07, samples=20 00:43:51.782 lat (usec) : 750=8.29%, 1000=24.58% 00:43:51.782 lat (msec) : 50=67.13% 00:43:51.782 cpu : usr=94.43%, sys=5.05%, ctx=12, majf=0, minf=1634 00:43:51.782 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:51.782 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:51.782 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:51.782 issued rwts: total=1436,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:51.782 latency : target=0, window=0, percentile=100.00%, depth=4 00:43:51.782 filename1: (groupid=0, jobs=1): err= 0: pid=3210204: Sun Nov 17 03:03:59 2024 00:43:51.782 read: IOPS=97, BW=389KiB/s (398kB/s)(3888KiB/10004msec) 00:43:51.782 slat (nsec): min=4929, max=46921, avg=13662.90, stdev=5203.11 00:43:51.782 clat (usec): min=1181, max=45832, avg=41126.67, stdev=2626.81 00:43:51.782 lat (usec): min=1198, max=45847, avg=41140.33, 
stdev=2626.71 00:43:51.782 clat percentiles (usec): 00:43:51.782 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:43:51.782 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:43:51.782 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:43:51.782 | 99.00th=[42730], 99.50th=[42730], 99.90th=[45876], 99.95th=[45876], 00:43:51.782 | 99.99th=[45876] 00:43:51.782 bw ( KiB/s): min= 352, max= 416, per=40.21%, avg=387.20, stdev=14.31, samples=20 00:43:51.782 iops : min= 88, max= 104, avg=96.80, stdev= 3.58, samples=20 00:43:51.782 lat (msec) : 2=0.41%, 50=99.59% 00:43:51.782 cpu : usr=94.53%, sys=4.97%, ctx=16, majf=0, minf=1636 00:43:51.782 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:51.782 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:51.782 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:51.782 issued rwts: total=972,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:51.782 latency : target=0, window=0, percentile=100.00%, depth=4 00:43:51.782 00:43:51.782 Run status group 0 (all jobs): 00:43:51.782 READ: bw=962KiB/s (986kB/s), 389KiB/s-574KiB/s (398kB/s-588kB/s), io=9632KiB (9863kB), run=10004-10008msec 00:43:51.782 ----------------------------------------------------- 00:43:51.782 Suppressions used: 00:43:51.782 count bytes template 00:43:51.782 2 16 /usr/src/fio/parse.c 00:43:51.782 1 8 libtcmalloc_minimal.so 00:43:51.782 1 904 libcrypto.so 00:43:51.782 ----------------------------------------------------- 00:43:51.782 00:43:51.782 03:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:43:51.782 03:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:43:51.782 03:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:43:51.782 03:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 
00:43:51.782 03:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:43:51.782 03:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:43:51.782 03:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:51.782 03:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:51.782 03:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:51.782 03:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:43:51.782 03:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:51.782 03:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:51.782 03:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:51.782 03:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:43:51.782 03:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:43:51.782 03:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:43:51.782 03:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:51.782 03:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:51.782 03:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:51.782 03:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:51.782 03:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:43:51.782 03:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 
00:43:51.782 03:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:51.782 03:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:51.782 00:43:51.782 real 0m12.538s 00:43:51.782 user 0m21.277s 00:43:51.782 sys 0m1.498s 00:43:51.782 03:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:51.782 03:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:51.782 ************************************ 00:43:51.782 END TEST fio_dif_1_multi_subsystems 00:43:51.782 ************************************ 00:43:51.782 03:04:00 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:43:51.782 03:04:00 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:51.782 03:04:00 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:51.782 03:04:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:51.782 ************************************ 00:43:51.782 START TEST fio_dif_rand_params 00:43:51.782 ************************************ 00:43:51.782 03:04:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:43:51.782 03:04:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:43:51.782 03:04:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:43:51.782 03:04:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:43:51.782 03:04:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:43:51.782 03:04:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:43:51.782 03:04:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:43:51.782 03:04:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:43:51.782 03:04:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # 
create_subsystems 0 00:43:51.782 03:04:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:43:51.782 03:04:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:43:51.782 03:04:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:43:51.782 03:04:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:43:51.782 03:04:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:43:51.782 03:04:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:51.782 03:04:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:51.782 bdev_null0 00:43:51.782 03:04:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:51.782 03:04:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:43:51.783 03:04:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:51.783 03:04:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:51.783 03:04:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:51.783 03:04:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:43:51.783 03:04:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:51.783 03:04:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:51.783 03:04:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:51.783 03:04:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:51.783 03:04:00 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:43:51.783 03:04:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:51.783 [2024-11-17 03:04:00.213192] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:51.783 03:04:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:51.783 03:04:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:43:51.783 03:04:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:43:51.783 03:04:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:43:51.783 03:04:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:43:51.783 03:04:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:43:51.783 03:04:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:51.783 03:04:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:51.783 03:04:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:43:51.783 03:04:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:51.783 { 00:43:51.783 "params": { 00:43:51.783 "name": "Nvme$subsystem", 00:43:51.783 "trtype": "$TEST_TRANSPORT", 00:43:51.783 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:51.783 "adrfam": "ipv4", 00:43:51.783 "trsvcid": "$NVMF_PORT", 00:43:51.783 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:51.783 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:51.783 "hdgst": ${hdgst:-false}, 00:43:51.783 "ddgst": ${ddgst:-false} 00:43:51.783 }, 00:43:51.783 "method": "bdev_nvme_attach_controller" 00:43:51.783 } 00:43:51.783 EOF 00:43:51.783 )") 00:43:51.783 03:04:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:51.783 03:04:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:43:51.783 03:04:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:43:51.783 03:04:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:43:51.783 03:04:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:51.783 03:04:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:43:51.783 03:04:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:51.783 03:04:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:43:51.783 03:04:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:43:51.783 03:04:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:43:51.783 03:04:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:51.783 03:04:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:43:51.783 03:04:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:51.783 03:04:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:43:51.783 03:04:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:43:51.783 03:04:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:51.783 03:04:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
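The job file itself is streamed to fio on `/dev/fd/61` by `gen_fio_conf` (the `local file` / `cat` / `(( file <= files ))` loop traced above); its literal contents never appear in the log. The sketch below is a hypothetical reconstruction, not SPDK's actual output: the parameter values (`bs=128k`, `iodepth=3`, `numjobs=3`, `runtime=5`, randread) are taken from the rand_params settings and fio banner in this log, while the job-section layout and the `Nvme0n1` bdev name are assumptions for illustration.

```shell
#!/usr/bin/env bash
# Hypothetical reconstruction of the fio job file gen_fio_conf emits.
# Values come from the logged test parameters; layout is assumed.
files=1
fio_conf="[global]
thread=1
direct=1
rw=randread
bs=128k
numjobs=3
time_based=1
runtime=5
"
# One job section per file, named filename0, filename1, ... to match
# the job names fio reports in the log ("filename0: (g=0): ...").
for ((file = 1; file <= files; file++)); do
  fio_conf+="[filename$((file - 1))]
iodepth=3
filename=Nvme$((file - 1))n1
"
done
printf '%s' "$fio_conf"
```

A job file in this shape, fed alongside the JSON config on the two pipe fds, yields the `filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, ..., iodepth=3` banner that fio prints when the three threads start.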
00:43:51.783 03:04:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:43:51.783 03:04:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:43:51.783 "params": { 00:43:51.783 "name": "Nvme0", 00:43:51.783 "trtype": "tcp", 00:43:51.783 "traddr": "10.0.0.2", 00:43:51.783 "adrfam": "ipv4", 00:43:51.783 "trsvcid": "4420", 00:43:51.783 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:51.783 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:51.783 "hdgst": false, 00:43:51.783 "ddgst": false 00:43:51.783 }, 00:43:51.783 "method": "bdev_nvme_attach_controller" 00:43:51.783 }' 00:43:52.041 03:04:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:43:52.041 03:04:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:43:52.041 03:04:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # break 00:43:52.041 03:04:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:43:52.041 03:04:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:52.300 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:43:52.300 ... 
00:43:52.300 fio-3.35 00:43:52.300 Starting 3 threads 00:43:58.911 00:43:58.911 filename0: (groupid=0, jobs=1): err= 0: pid=3211728: Sun Nov 17 03:04:06 2024 00:43:58.911 read: IOPS=182, BW=22.8MiB/s (23.9MB/s)(115MiB/5047msec) 00:43:58.911 slat (nsec): min=6005, max=41083, avg=19673.90, stdev=2949.48 00:43:58.911 clat (usec): min=7800, max=55996, avg=16386.73, stdev=4326.75 00:43:58.911 lat (usec): min=7820, max=56015, avg=16406.41, stdev=4326.64 00:43:58.911 clat percentiles (usec): 00:43:58.911 | 1.00th=[11207], 5.00th=[12911], 10.00th=[13566], 20.00th=[14222], 00:43:58.911 | 30.00th=[14746], 40.00th=[15139], 50.00th=[15795], 60.00th=[16581], 00:43:58.911 | 70.00th=[17433], 80.00th=[17957], 90.00th=[18744], 95.00th=[19530], 00:43:58.911 | 99.00th=[48497], 99.50th=[49546], 99.90th=[55837], 99.95th=[55837], 00:43:58.911 | 99.99th=[55837] 00:43:58.911 bw ( KiB/s): min=20736, max=25600, per=31.71%, avg=23475.20, stdev=1559.76, samples=10 00:43:58.911 iops : min= 162, max= 200, avg=183.40, stdev=12.19, samples=10 00:43:58.911 lat (msec) : 10=0.54%, 20=96.85%, 50=2.17%, 100=0.43% 00:43:58.911 cpu : usr=93.22%, sys=6.20%, ctx=13, majf=0, minf=1636 00:43:58.911 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:58.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:58.911 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:58.911 issued rwts: total=920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:58.911 latency : target=0, window=0, percentile=100.00%, depth=3 00:43:58.911 filename0: (groupid=0, jobs=1): err= 0: pid=3211729: Sun Nov 17 03:04:06 2024 00:43:58.911 read: IOPS=199, BW=24.9MiB/s (26.2MB/s)(126MiB/5046msec) 00:43:58.911 slat (nsec): min=5401, max=42071, avg=20885.88, stdev=2414.12 00:43:58.911 clat (usec): min=6351, max=56407, avg=14965.38, stdev=4199.66 00:43:58.911 lat (usec): min=6370, max=56428, avg=14986.27, stdev=4199.34 00:43:58.911 clat percentiles (usec): 00:43:58.911 
| 1.00th=[10159], 5.00th=[12518], 10.00th=[13042], 20.00th=[13698], 00:43:58.911 | 30.00th=[13960], 40.00th=[14222], 50.00th=[14615], 60.00th=[15008], 00:43:58.911 | 70.00th=[15270], 80.00th=[15664], 90.00th=[16057], 95.00th=[16712], 00:43:58.911 | 99.00th=[45876], 99.50th=[53216], 99.90th=[56361], 99.95th=[56361], 00:43:58.911 | 99.99th=[56361] 00:43:58.911 bw ( KiB/s): min=23086, max=27136, per=34.72%, avg=25707.00, stdev=1329.37, samples=10 00:43:58.911 iops : min= 180, max= 212, avg=200.80, stdev=10.46, samples=10 00:43:58.911 lat (msec) : 10=0.89%, 20=98.01%, 50=0.40%, 100=0.70% 00:43:58.911 cpu : usr=92.05%, sys=6.78%, ctx=191, majf=0, minf=1637 00:43:58.911 IO depths : 1=0.9%, 2=99.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:58.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:58.911 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:58.911 issued rwts: total=1007,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:58.911 latency : target=0, window=0, percentile=100.00%, depth=3 00:43:58.911 filename0: (groupid=0, jobs=1): err= 0: pid=3211730: Sun Nov 17 03:04:06 2024 00:43:58.911 read: IOPS=196, BW=24.6MiB/s (25.8MB/s)(124MiB/5045msec) 00:43:58.911 slat (nsec): min=5463, max=69110, avg=22952.44, stdev=4668.52 00:43:58.911 clat (usec): min=5934, max=54709, avg=15184.06, stdev=3660.48 00:43:58.911 lat (usec): min=5955, max=54730, avg=15207.02, stdev=3660.76 00:43:58.911 clat percentiles (usec): 00:43:58.911 | 1.00th=[ 8848], 5.00th=[12780], 10.00th=[13435], 20.00th=[13960], 00:43:58.911 | 30.00th=[14353], 40.00th=[14746], 50.00th=[15008], 60.00th=[15270], 00:43:58.911 | 70.00th=[15664], 80.00th=[16057], 90.00th=[16450], 95.00th=[17171], 00:43:58.911 | 99.00th=[21365], 99.50th=[51643], 99.90th=[54789], 99.95th=[54789], 00:43:58.911 | 99.99th=[54789] 00:43:58.911 bw ( KiB/s): min=23040, max=26880, per=34.23%, avg=25344.00, stdev=1119.14, samples=10 00:43:58.911 iops : min= 180, max= 210, avg=198.00, 
stdev= 8.74, samples=10 00:43:58.911 lat (msec) : 10=1.92%, 20=96.98%, 50=0.30%, 100=0.81% 00:43:58.911 cpu : usr=86.52%, sys=9.28%, ctx=312, majf=0, minf=1632 00:43:58.911 IO depths : 1=0.9%, 2=99.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:58.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:58.911 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:58.911 issued rwts: total=992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:58.911 latency : target=0, window=0, percentile=100.00%, depth=3 00:43:58.911 00:43:58.911 Run status group 0 (all jobs): 00:43:58.911 READ: bw=72.3MiB/s (75.8MB/s), 22.8MiB/s-24.9MiB/s (23.9MB/s-26.2MB/s), io=365MiB (383MB), run=5045-5047msec 00:43:59.169 ----------------------------------------------------- 00:43:59.169 Suppressions used: 00:43:59.169 count bytes template 00:43:59.169 5 44 /usr/src/fio/parse.c 00:43:59.169 1 8 libtcmalloc_minimal.so 00:43:59.169 1 904 libcrypto.so 00:43:59.169 ----------------------------------------------------- 00:43:59.169 00:43:59.169 03:04:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:43:59.169 03:04:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:43:59.169 03:04:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:43:59.169 03:04:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:43:59.169 03:04:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:43:59.169 03:04:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:43:59.169 03:04:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:59.169 03:04:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:59.169 03:04:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:59.169 03:04:07 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:43:59.169 03:04:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:59.169 03:04:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:59.169 03:04:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:59.169 03:04:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:43:59.169 03:04:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:43:59.169 03:04:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:43:59.169 03:04:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:43:59.169 03:04:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:43:59.169 03:04:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:43:59.169 03:04:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:43:59.169 03:04:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:43:59.169 03:04:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:43:59.169 03:04:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:43:59.169 03:04:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:43:59.169 03:04:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:43:59.169 03:04:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:59.169 03:04:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:59.169 bdev_null0 00:43:59.169 03:04:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:59.169 03:04:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 
53313233-0 --allow-any-host 00:43:59.169 03:04:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:59.169 03:04:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:59.169 03:04:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:59.169 03:04:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:43:59.169 03:04:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:59.169 03:04:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:59.427 03:04:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:59.427 03:04:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:59.427 03:04:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:59.427 03:04:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:59.427 [2024-11-17 03:04:07.637250] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:59.427 03:04:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:59.427 03:04:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:43:59.427 03:04:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:43:59.427 03:04:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:43:59.427 03:04:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:43:59.427 03:04:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:59.427 03:04:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:43:59.427 bdev_null1 00:43:59.427 03:04:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:59.427 03:04:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:43:59.427 03:04:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:59.427 03:04:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:59.427 03:04:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:59.427 03:04:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:43:59.427 03:04:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:59.427 03:04:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:59.427 03:04:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:59.427 03:04:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:59.427 03:04:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:59.427 03:04:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:59.427 03:04:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:59.427 03:04:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:43:59.427 03:04:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:43:59.427 03:04:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:43:59.427 03:04:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:43:59.427 03:04:07 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:43:59.427 03:04:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:59.427 bdev_null2 00:43:59.427 03:04:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:59.427 03:04:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:43:59.427 03:04:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:59.427 03:04:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:59.427 03:04:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:59.427 03:04:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:43:59.427 03:04:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:59.427 03:04:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:59.427 03:04:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:59.427 03:04:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:43:59.427 03:04:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:59.427 03:04:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:59.427 03:04:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:59.427 03:04:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:43:59.427 03:04:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:43:59.427 03:04:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:43:59.427 03:04:07 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:43:59.427 03:04:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:59.428 03:04:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:43:59.428 03:04:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:59.428 03:04:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:43:59.428 03:04:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:59.428 03:04:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:43:59.428 03:04:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:59.428 { 00:43:59.428 "params": { 00:43:59.428 "name": "Nvme$subsystem", 00:43:59.428 "trtype": "$TEST_TRANSPORT", 00:43:59.428 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:59.428 "adrfam": "ipv4", 00:43:59.428 "trsvcid": "$NVMF_PORT", 00:43:59.428 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:59.428 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:59.428 "hdgst": ${hdgst:-false}, 00:43:59.428 "ddgst": ${ddgst:-false} 00:43:59.428 }, 00:43:59.428 "method": "bdev_nvme_attach_controller" 00:43:59.428 } 00:43:59.428 EOF 00:43:59.428 )") 00:43:59.428 03:04:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:43:59.428 03:04:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:59.428 03:04:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:43:59.428 03:04:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:43:59.428 03:04:07 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:59.428 03:04:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:43:59.428 03:04:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:43:59.428 03:04:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:59.428 03:04:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:43:59.428 03:04:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:59.428 03:04:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:43:59.428 03:04:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:43:59.428 03:04:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:43:59.428 03:04:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:43:59.428 03:04:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:59.428 03:04:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:59.428 03:04:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:59.428 { 00:43:59.428 "params": { 00:43:59.428 "name": "Nvme$subsystem", 00:43:59.428 "trtype": "$TEST_TRANSPORT", 00:43:59.428 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:59.428 "adrfam": "ipv4", 00:43:59.428 "trsvcid": "$NVMF_PORT", 00:43:59.428 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:59.428 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:59.428 "hdgst": ${hdgst:-false}, 00:43:59.428 "ddgst": ${ddgst:-false} 00:43:59.428 }, 00:43:59.428 "method": "bdev_nvme_attach_controller" 00:43:59.428 } 00:43:59.428 EOF 00:43:59.428 )") 00:43:59.428 03:04:07 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file++ )) 00:43:59.428 03:04:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:43:59.428 03:04:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:43:59.428 03:04:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:43:59.428 03:04:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:43:59.428 03:04:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:43:59.428 03:04:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:59.428 03:04:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:59.428 { 00:43:59.428 "params": { 00:43:59.428 "name": "Nvme$subsystem", 00:43:59.428 "trtype": "$TEST_TRANSPORT", 00:43:59.428 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:59.428 "adrfam": "ipv4", 00:43:59.428 "trsvcid": "$NVMF_PORT", 00:43:59.428 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:59.428 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:59.428 "hdgst": ${hdgst:-false}, 00:43:59.428 "ddgst": ${ddgst:-false} 00:43:59.428 }, 00:43:59.428 "method": "bdev_nvme_attach_controller" 00:43:59.428 } 00:43:59.428 EOF 00:43:59.428 )") 00:43:59.428 03:04:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:43:59.428 03:04:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:43:59.428 03:04:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:43:59.428 03:04:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:43:59.428 "params": { 00:43:59.428 "name": "Nvme0", 00:43:59.428 "trtype": "tcp", 00:43:59.428 "traddr": "10.0.0.2", 00:43:59.428 "adrfam": "ipv4", 00:43:59.428 "trsvcid": "4420", 00:43:59.428 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:59.428 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:59.428 "hdgst": false, 00:43:59.428 "ddgst": false 00:43:59.428 }, 00:43:59.428 "method": "bdev_nvme_attach_controller" 00:43:59.428 },{ 00:43:59.428 "params": { 00:43:59.428 "name": "Nvme1", 00:43:59.428 "trtype": "tcp", 00:43:59.428 "traddr": "10.0.0.2", 00:43:59.428 "adrfam": "ipv4", 00:43:59.428 "trsvcid": "4420", 00:43:59.428 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:43:59.428 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:43:59.428 "hdgst": false, 00:43:59.428 "ddgst": false 00:43:59.428 }, 00:43:59.428 "method": "bdev_nvme_attach_controller" 00:43:59.428 },{ 00:43:59.428 "params": { 00:43:59.428 "name": "Nvme2", 00:43:59.428 "trtype": "tcp", 00:43:59.428 "traddr": "10.0.0.2", 00:43:59.428 "adrfam": "ipv4", 00:43:59.428 "trsvcid": "4420", 00:43:59.428 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:43:59.428 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:43:59.428 "hdgst": false, 00:43:59.428 "ddgst": false 00:43:59.428 }, 00:43:59.428 "method": "bdev_nvme_attach_controller" 00:43:59.428 }' 00:43:59.428 03:04:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:43:59.428 03:04:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:43:59.428 03:04:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # break 00:43:59.428 03:04:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:43:59.428 03:04:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:59.686 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:43:59.686 ... 00:43:59.686 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:43:59.686 ... 00:43:59.686 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:43:59.686 ... 00:43:59.686 fio-3.35 00:43:59.686 Starting 24 threads 00:44:11.896 00:44:11.896 filename0: (groupid=0, jobs=1): err= 0: pid=3212707: Sun Nov 17 03:04:19 2024 00:44:11.896 read: IOPS=335, BW=1343KiB/s (1375kB/s)(13.1MiB/10011msec) 00:44:11.896 slat (nsec): min=12208, max=91696, avg=36155.21, stdev=8659.01 00:44:11.896 clat (msec): min=34, max=253, avg=47.34, stdev=18.74 00:44:11.896 lat (msec): min=35, max=253, avg=47.38, stdev=18.74 00:44:11.896 clat percentiles (msec): 00:44:11.896 | 1.00th=[ 45], 5.00th=[ 45], 10.00th=[ 45], 20.00th=[ 45], 00:44:11.896 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 45], 00:44:11.896 | 70.00th=[ 46], 80.00th=[ 46], 90.00th=[ 46], 95.00th=[ 46], 00:44:11.896 | 99.00th=[ 155], 99.50th=[ 194], 99.90th=[ 253], 99.95th=[ 253], 00:44:11.896 | 99.99th=[ 253] 00:44:11.896 bw ( KiB/s): min= 256, max= 1536, per=4.12%, avg=1333.89, stdev=277.55, samples=19 00:44:11.896 iops : min= 64, max= 384, avg=333.47, stdev=69.39, samples=19 00:44:11.896 lat (msec) : 50=97.86%, 100=0.24%, 250=1.43%, 500=0.48% 00:44:11.896 cpu : usr=98.18%, sys=1.32%, ctx=21, majf=0, minf=1633 00:44:11.896 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:44:11.896 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:11.896 complete : 0=0.0%, 
4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:11.896 issued rwts: total=3360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:11.896 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:11.896 filename0: (groupid=0, jobs=1): err= 0: pid=3212708: Sun Nov 17 03:04:19 2024 00:44:11.896 read: IOPS=342, BW=1372KiB/s (1405kB/s)(13.4MiB/10032msec) 00:44:11.896 slat (usec): min=6, max=101, avg=30.65, stdev=14.41 00:44:11.896 clat (msec): min=2, max=221, avg=46.40, stdev=14.96 00:44:11.896 lat (msec): min=2, max=221, avg=46.43, stdev=14.96 00:44:11.896 clat percentiles (msec): 00:44:11.896 | 1.00th=[ 8], 5.00th=[ 45], 10.00th=[ 45], 20.00th=[ 45], 00:44:11.896 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 45], 00:44:11.896 | 70.00th=[ 46], 80.00th=[ 46], 90.00th=[ 46], 95.00th=[ 47], 00:44:11.896 | 99.00th=[ 146], 99.50th=[ 163], 99.90th=[ 165], 99.95th=[ 222], 00:44:11.896 | 99.99th=[ 222] 00:44:11.896 bw ( KiB/s): min= 896, max= 1536, per=4.24%, avg=1369.60, stdev=144.46, samples=20 00:44:11.896 iops : min= 224, max= 384, avg=342.40, stdev=36.11, samples=20 00:44:11.896 lat (msec) : 4=0.47%, 10=0.87%, 20=0.47%, 50=95.52%, 100=1.28% 00:44:11.896 lat (msec) : 250=1.40% 00:44:11.896 cpu : usr=97.53%, sys=1.69%, ctx=78, majf=0, minf=1634 00:44:11.896 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:44:11.896 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:11.896 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:11.896 issued rwts: total=3440,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:11.896 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:11.896 filename0: (groupid=0, jobs=1): err= 0: pid=3212709: Sun Nov 17 03:04:19 2024 00:44:11.896 read: IOPS=335, BW=1344KiB/s (1376kB/s)(13.1MiB/10002msec) 00:44:11.896 slat (usec): min=15, max=108, avg=59.86, stdev=12.55 00:44:11.896 clat (msec): min=31, max=222, avg=47.08, stdev=17.62 00:44:11.896 lat 
(msec): min=31, max=222, avg=47.14, stdev=17.62 00:44:11.896 clat percentiles (msec): 00:44:11.896 | 1.00th=[ 44], 5.00th=[ 44], 10.00th=[ 45], 20.00th=[ 45], 00:44:11.896 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 45], 00:44:11.896 | 70.00th=[ 45], 80.00th=[ 46], 90.00th=[ 46], 95.00th=[ 46], 00:44:11.896 | 99.00th=[ 153], 99.50th=[ 192], 99.90th=[ 222], 99.95th=[ 222], 00:44:11.896 | 99.99th=[ 224] 00:44:11.896 bw ( KiB/s): min= 256, max= 1536, per=4.12%, avg=1333.89, stdev=277.55, samples=19 00:44:11.896 iops : min= 64, max= 384, avg=333.47, stdev=69.39, samples=19 00:44:11.896 lat (msec) : 50=97.98%, 100=0.12%, 250=1.90% 00:44:11.896 cpu : usr=97.67%, sys=1.45%, ctx=84, majf=0, minf=1631 00:44:11.896 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:44:11.896 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:11.896 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:11.896 issued rwts: total=3360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:11.896 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:11.896 filename0: (groupid=0, jobs=1): err= 0: pid=3212710: Sun Nov 17 03:04:19 2024 00:44:11.896 read: IOPS=335, BW=1344KiB/s (1376kB/s)(13.1MiB/10003msec) 00:44:11.896 slat (usec): min=4, max=139, avg=68.96, stdev=13.85 00:44:11.896 clat (msec): min=36, max=168, avg=46.99, stdev=15.21 00:44:11.896 lat (msec): min=36, max=168, avg=47.06, stdev=15.21 00:44:11.896 clat percentiles (msec): 00:44:11.896 | 1.00th=[ 44], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 45], 00:44:11.896 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 45], 00:44:11.896 | 70.00th=[ 45], 80.00th=[ 46], 90.00th=[ 46], 95.00th=[ 46], 00:44:11.896 | 99.00th=[ 155], 99.50th=[ 167], 99.90th=[ 169], 99.95th=[ 169], 00:44:11.896 | 99.99th=[ 169] 00:44:11.896 bw ( KiB/s): min= 384, max= 1536, per=4.15%, avg=1340.63, stdev=250.13, samples=19 00:44:11.896 iops : min= 96, max= 384, avg=335.16, 
stdev=62.53, samples=19 00:44:11.896 lat (msec) : 50=97.50%, 100=0.12%, 250=2.38% 00:44:11.896 cpu : usr=97.89%, sys=1.49%, ctx=23, majf=0, minf=1635 00:44:11.896 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:44:11.896 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:11.896 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:11.896 issued rwts: total=3360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:11.896 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:11.896 filename0: (groupid=0, jobs=1): err= 0: pid=3212711: Sun Nov 17 03:04:19 2024 00:44:11.896 read: IOPS=337, BW=1349KiB/s (1381kB/s)(13.2MiB/10010msec) 00:44:11.896 slat (usec): min=8, max=131, avg=33.34, stdev=17.77 00:44:11.896 clat (msec): min=36, max=168, avg=47.16, stdev=13.86 00:44:11.896 lat (msec): min=36, max=168, avg=47.19, stdev=13.86 00:44:11.896 clat percentiles (msec): 00:44:11.896 | 1.00th=[ 44], 5.00th=[ 45], 10.00th=[ 45], 20.00th=[ 45], 00:44:11.896 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 45], 00:44:11.896 | 70.00th=[ 46], 80.00th=[ 46], 90.00th=[ 46], 95.00th=[ 46], 00:44:11.896 | 99.00th=[ 136], 99.50th=[ 155], 99.90th=[ 169], 99.95th=[ 169], 00:44:11.896 | 99.99th=[ 169] 00:44:11.896 bw ( KiB/s): min= 512, max= 1536, per=4.15%, avg=1340.63, stdev=223.21, samples=19 00:44:11.896 iops : min= 128, max= 384, avg=335.16, stdev=55.80, samples=19 00:44:11.896 lat (msec) : 50=96.92%, 100=1.18%, 250=1.90% 00:44:11.896 cpu : usr=95.73%, sys=2.42%, ctx=288, majf=0, minf=1634 00:44:11.896 IO depths : 1=6.0%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:44:11.896 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:11.896 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:11.896 issued rwts: total=3376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:11.896 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:11.896 
filename0: (groupid=0, jobs=1): err= 0: pid=3212712: Sun Nov 17 03:04:19 2024 00:44:11.896 read: IOPS=336, BW=1347KiB/s (1379kB/s)(13.2MiB/10028msec) 00:44:11.896 slat (usec): min=8, max=115, avg=44.10, stdev=18.40 00:44:11.896 clat (msec): min=23, max=225, avg=47.12, stdev=15.70 00:44:11.896 lat (msec): min=23, max=225, avg=47.16, stdev=15.70 00:44:11.896 clat percentiles (msec): 00:44:11.896 | 1.00th=[ 44], 5.00th=[ 45], 10.00th=[ 45], 20.00th=[ 45], 00:44:11.897 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 45], 00:44:11.897 | 70.00th=[ 45], 80.00th=[ 46], 90.00th=[ 46], 95.00th=[ 46], 00:44:11.897 | 99.00th=[ 155], 99.50th=[ 163], 99.90th=[ 165], 99.95th=[ 226], 00:44:11.897 | 99.99th=[ 226] 00:44:11.897 bw ( KiB/s): min= 384, max= 1536, per=4.16%, avg=1343.00, stdev=243.69, samples=20 00:44:11.897 iops : min= 96, max= 384, avg=335.75, stdev=60.92, samples=20 00:44:11.897 lat (msec) : 50=97.45%, 100=0.71%, 250=1.84% 00:44:11.897 cpu : usr=96.30%, sys=2.17%, ctx=187, majf=0, minf=1633 00:44:11.897 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:44:11.897 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:11.897 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:11.897 issued rwts: total=3376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:11.897 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:11.897 filename0: (groupid=0, jobs=1): err= 0: pid=3212713: Sun Nov 17 03:04:19 2024 00:44:11.897 read: IOPS=337, BW=1350KiB/s (1382kB/s)(13.2MiB/10004msec) 00:44:11.897 slat (nsec): min=9234, max=98106, avg=43557.19, stdev=13028.41 00:44:11.897 clat (msec): min=37, max=168, avg=47.04, stdev=13.86 00:44:11.897 lat (msec): min=37, max=168, avg=47.08, stdev=13.86 00:44:11.897 clat percentiles (msec): 00:44:11.897 | 1.00th=[ 45], 5.00th=[ 45], 10.00th=[ 45], 20.00th=[ 45], 00:44:11.897 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 45], 00:44:11.897 | 70.00th=[ 
45], 80.00th=[ 46], 90.00th=[ 46], 95.00th=[ 46], 00:44:11.897 | 99.00th=[ 136], 99.50th=[ 155], 99.90th=[ 169], 99.95th=[ 169], 00:44:11.897 | 99.99th=[ 169] 00:44:11.897 bw ( KiB/s): min= 512, max= 1536, per=4.17%, avg=1347.37, stdev=223.21, samples=19 00:44:11.897 iops : min= 128, max= 384, avg=336.84, stdev=55.80, samples=19 00:44:11.897 lat (msec) : 50=97.10%, 100=1.01%, 250=1.90% 00:44:11.897 cpu : usr=97.62%, sys=1.64%, ctx=35, majf=0, minf=1634 00:44:11.897 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:44:11.897 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:11.897 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:11.897 issued rwts: total=3376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:11.897 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:11.897 filename0: (groupid=0, jobs=1): err= 0: pid=3212714: Sun Nov 17 03:04:19 2024 00:44:11.897 read: IOPS=335, BW=1343KiB/s (1376kB/s)(13.1MiB/10005msec) 00:44:11.897 slat (nsec): min=12296, max=83668, avg=40431.02, stdev=13248.59 00:44:11.897 clat (msec): min=34, max=248, avg=47.26, stdev=18.46 00:44:11.897 lat (msec): min=34, max=248, avg=47.30, stdev=18.46 00:44:11.897 clat percentiles (msec): 00:44:11.897 | 1.00th=[ 45], 5.00th=[ 45], 10.00th=[ 45], 20.00th=[ 45], 00:44:11.897 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 45], 00:44:11.897 | 70.00th=[ 46], 80.00th=[ 46], 90.00th=[ 46], 95.00th=[ 46], 00:44:11.897 | 99.00th=[ 155], 99.50th=[ 169], 99.90th=[ 249], 99.95th=[ 249], 00:44:11.897 | 99.99th=[ 249] 00:44:11.897 bw ( KiB/s): min= 256, max= 1536, per=4.12%, avg=1333.89, stdev=277.55, samples=19 00:44:11.897 iops : min= 64, max= 384, avg=333.47, stdev=69.39, samples=19 00:44:11.897 lat (msec) : 50=98.04%, 100=0.06%, 250=1.90% 00:44:11.897 cpu : usr=98.15%, sys=1.33%, ctx=19, majf=0, minf=1631 00:44:11.897 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:44:11.897 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:11.897 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:11.897 issued rwts: total=3360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:11.897 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:11.897 filename1: (groupid=0, jobs=1): err= 0: pid=3212715: Sun Nov 17 03:04:19 2024 00:44:11.897 read: IOPS=335, BW=1343KiB/s (1375kB/s)(13.1MiB/10010msec) 00:44:11.897 slat (usec): min=11, max=112, avg=41.59, stdev=12.03 00:44:11.897 clat (msec): min=34, max=285, avg=47.28, stdev=18.93 00:44:11.897 lat (msec): min=34, max=285, avg=47.32, stdev=18.93 00:44:11.897 clat percentiles (msec): 00:44:11.897 | 1.00th=[ 45], 5.00th=[ 45], 10.00th=[ 45], 20.00th=[ 45], 00:44:11.897 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 45], 00:44:11.897 | 70.00th=[ 45], 80.00th=[ 46], 90.00th=[ 46], 95.00th=[ 46], 00:44:11.897 | 99.00th=[ 155], 99.50th=[ 203], 99.90th=[ 253], 99.95th=[ 288], 00:44:11.897 | 99.99th=[ 288] 00:44:11.897 bw ( KiB/s): min= 256, max= 1536, per=4.12%, avg=1333.89, stdev=277.55, samples=19 00:44:11.897 iops : min= 64, max= 384, avg=333.47, stdev=69.39, samples=19 00:44:11.897 lat (msec) : 50=98.04%, 100=0.12%, 250=1.37%, 500=0.48% 00:44:11.897 cpu : usr=98.27%, sys=1.22%, ctx=22, majf=0, minf=1631 00:44:11.897 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:44:11.897 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:11.897 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:11.897 issued rwts: total=3360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:11.897 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:11.897 filename1: (groupid=0, jobs=1): err= 0: pid=3212716: Sun Nov 17 03:04:19 2024 00:44:11.897 read: IOPS=338, BW=1354KiB/s (1386kB/s)(13.2MiB/10021msec) 00:44:11.897 slat (nsec): min=6768, max=88649, avg=29090.56, stdev=9245.50 00:44:11.897 
clat (msec): min=24, max=167, avg=47.00, stdev=12.12 00:44:11.897 lat (msec): min=24, max=167, avg=47.03, stdev=12.12 00:44:11.897 clat percentiles (msec): 00:44:11.897 | 1.00th=[ 45], 5.00th=[ 45], 10.00th=[ 45], 20.00th=[ 45], 00:44:11.897 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 45], 00:44:11.897 | 70.00th=[ 46], 80.00th=[ 46], 90.00th=[ 46], 95.00th=[ 47], 00:44:11.897 | 99.00th=[ 122], 99.50th=[ 144], 99.90th=[ 155], 99.95th=[ 167], 00:44:11.897 | 99.99th=[ 167] 00:44:11.897 bw ( KiB/s): min= 512, max= 1536, per=4.18%, avg=1350.40, stdev=217.68, samples=20 00:44:11.897 iops : min= 128, max= 384, avg=337.60, stdev=54.42, samples=20 00:44:11.897 lat (msec) : 50=96.76%, 100=1.77%, 250=1.47% 00:44:11.897 cpu : usr=98.21%, sys=1.26%, ctx=21, majf=0, minf=1634 00:44:11.897 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:44:11.897 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:11.897 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:11.897 issued rwts: total=3392,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:11.897 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:11.897 filename1: (groupid=0, jobs=1): err= 0: pid=3212717: Sun Nov 17 03:04:19 2024 00:44:11.897 read: IOPS=358, BW=1433KiB/s (1468kB/s)(14.0MiB/10002msec) 00:44:11.897 slat (usec): min=11, max=148, avg=30.42, stdev=17.82 00:44:11.897 clat (msec): min=19, max=359, avg=44.42, stdev=24.46 00:44:11.897 lat (msec): min=19, max=359, avg=44.45, stdev=24.47 00:44:11.897 clat percentiles (msec): 00:44:11.897 | 1.00th=[ 27], 5.00th=[ 28], 10.00th=[ 31], 20.00th=[ 35], 00:44:11.897 | 30.00th=[ 42], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 45], 00:44:11.897 | 70.00th=[ 45], 80.00th=[ 45], 90.00th=[ 46], 95.00th=[ 53], 00:44:11.897 | 99.00th=[ 144], 99.50th=[ 159], 99.90th=[ 359], 99.95th=[ 359], 00:44:11.897 | 99.99th=[ 359] 00:44:11.897 bw ( KiB/s): min= 128, max= 1808, per=4.42%, avg=1428.21, 
stdev=344.85, samples=19 00:44:11.897 iops : min= 32, max= 452, avg=357.05, stdev=86.21, samples=19 00:44:11.897 lat (msec) : 20=0.11%, 50=93.69%, 100=4.85%, 250=0.89%, 500=0.45% 00:44:11.897 cpu : usr=97.54%, sys=1.77%, ctx=67, majf=0, minf=1633 00:44:11.897 IO depths : 1=2.8%, 2=5.7%, 4=13.9%, 8=66.7%, 16=10.8%, 32=0.0%, >=64=0.0% 00:44:11.897 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:11.897 complete : 0=0.0%, 4=91.2%, 8=4.3%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:11.897 issued rwts: total=3584,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:11.897 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:11.897 filename1: (groupid=0, jobs=1): err= 0: pid=3212718: Sun Nov 17 03:04:19 2024 00:44:11.897 read: IOPS=335, BW=1342KiB/s (1375kB/s)(13.1MiB/10012msec) 00:44:11.897 slat (nsec): min=11068, max=92699, avg=33101.68, stdev=15166.09 00:44:11.897 clat (msec): min=23, max=360, avg=47.34, stdev=20.43 00:44:11.897 lat (msec): min=23, max=360, avg=47.37, stdev=20.43 00:44:11.897 clat percentiles (msec): 00:44:11.897 | 1.00th=[ 44], 5.00th=[ 45], 10.00th=[ 45], 20.00th=[ 45], 00:44:11.897 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 45], 00:44:11.897 | 70.00th=[ 46], 80.00th=[ 46], 90.00th=[ 46], 95.00th=[ 46], 00:44:11.897 | 99.00th=[ 163], 99.50th=[ 165], 99.90th=[ 279], 99.95th=[ 359], 00:44:11.897 | 99.99th=[ 363] 00:44:11.897 bw ( KiB/s): min= 256, max= 1536, per=4.12%, avg=1333.89, stdev=277.55, samples=19 00:44:11.897 iops : min= 64, max= 384, avg=333.47, stdev=69.39, samples=19 00:44:11.897 lat (msec) : 50=98.04%, 100=0.60%, 250=0.89%, 500=0.48% 00:44:11.897 cpu : usr=98.32%, sys=1.03%, ctx=40, majf=0, minf=1633 00:44:11.897 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:44:11.897 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:11.897 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:11.897 issued rwts: 
total=3360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:11.897 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:11.897 filename1: (groupid=0, jobs=1): err= 0: pid=3212719: Sun Nov 17 03:04:19 2024 00:44:11.897 read: IOPS=335, BW=1343KiB/s (1376kB/s)(13.1MiB/10005msec) 00:44:11.897 slat (nsec): min=5035, max=99360, avg=38470.74, stdev=12518.91 00:44:11.897 clat (msec): min=43, max=168, avg=47.31, stdev=15.21 00:44:11.897 lat (msec): min=43, max=168, avg=47.35, stdev=15.21 00:44:11.897 clat percentiles (msec): 00:44:11.897 | 1.00th=[ 45], 5.00th=[ 45], 10.00th=[ 45], 20.00th=[ 45], 00:44:11.897 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 45], 00:44:11.897 | 70.00th=[ 46], 80.00th=[ 46], 90.00th=[ 46], 95.00th=[ 46], 00:44:11.897 | 99.00th=[ 148], 99.50th=[ 163], 99.90th=[ 169], 99.95th=[ 169], 00:44:11.897 | 99.99th=[ 169] 00:44:11.897 bw ( KiB/s): min= 384, max= 1536, per=4.15%, avg=1340.63, stdev=250.13, samples=19 00:44:11.897 iops : min= 96, max= 384, avg=335.16, stdev=62.53, samples=19 00:44:11.897 lat (msec) : 50=97.62%, 250=2.38% 00:44:11.897 cpu : usr=97.97%, sys=1.34%, ctx=84, majf=0, minf=1633 00:44:11.897 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:44:11.897 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:11.897 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:11.898 issued rwts: total=3360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:11.898 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:11.898 filename1: (groupid=0, jobs=1): err= 0: pid=3212720: Sun Nov 17 03:04:19 2024 00:44:11.898 read: IOPS=336, BW=1347KiB/s (1380kB/s)(13.2MiB/10022msec) 00:44:11.898 slat (usec): min=6, max=144, avg=70.21, stdev=18.97 00:44:11.898 clat (msec): min=43, max=194, avg=46.84, stdev=14.06 00:44:11.898 lat (msec): min=43, max=194, avg=46.91, stdev=14.06 00:44:11.898 clat percentiles (msec): 00:44:11.898 | 1.00th=[ 44], 5.00th=[ 44], 10.00th=[ 
44], 20.00th=[ 45], 00:44:11.898 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 45], 00:44:11.898 | 70.00th=[ 45], 80.00th=[ 46], 90.00th=[ 46], 95.00th=[ 46], 00:44:11.898 | 99.00th=[ 136], 99.50th=[ 155], 99.90th=[ 169], 99.95th=[ 194], 00:44:11.898 | 99.99th=[ 194] 00:44:11.898 bw ( KiB/s): min= 512, max= 1536, per=4.16%, avg=1344.00, stdev=229.35, samples=20 00:44:11.898 iops : min= 128, max= 384, avg=336.00, stdev=57.34, samples=20 00:44:11.898 lat (msec) : 50=97.16%, 100=0.95%, 250=1.90% 00:44:11.898 cpu : usr=96.94%, sys=1.93%, ctx=145, majf=0, minf=1634 00:44:11.898 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:44:11.898 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:11.898 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:11.898 issued rwts: total=3376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:11.898 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:11.898 filename1: (groupid=0, jobs=1): err= 0: pid=3212721: Sun Nov 17 03:04:19 2024 00:44:11.898 read: IOPS=335, BW=1344KiB/s (1376kB/s)(13.1MiB/10002msec) 00:44:11.898 slat (nsec): min=11421, max=86032, avg=35616.06, stdev=16805.60 00:44:11.898 clat (msec): min=31, max=221, avg=47.30, stdev=17.63 00:44:11.898 lat (msec): min=31, max=221, avg=47.33, stdev=17.63 00:44:11.898 clat percentiles (msec): 00:44:11.898 | 1.00th=[ 44], 5.00th=[ 45], 10.00th=[ 45], 20.00th=[ 45], 00:44:11.898 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 45], 00:44:11.898 | 70.00th=[ 45], 80.00th=[ 46], 90.00th=[ 46], 95.00th=[ 46], 00:44:11.898 | 99.00th=[ 153], 99.50th=[ 207], 99.90th=[ 222], 99.95th=[ 222], 00:44:11.898 | 99.99th=[ 222] 00:44:11.898 bw ( KiB/s): min= 256, max= 1536, per=4.12%, avg=1333.89, stdev=277.55, samples=19 00:44:11.898 iops : min= 64, max= 384, avg=333.47, stdev=69.39, samples=19 00:44:11.898 lat (msec) : 50=98.04%, 100=0.12%, 250=1.85% 00:44:11.898 cpu : usr=97.14%, sys=1.80%, 
ctx=154, majf=0, minf=1635 00:44:11.898 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:44:11.898 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:11.898 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:11.898 issued rwts: total=3360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:11.898 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:11.898 filename1: (groupid=0, jobs=1): err= 0: pid=3212722: Sun Nov 17 03:04:19 2024 00:44:11.898 read: IOPS=335, BW=1343KiB/s (1375kB/s)(13.1MiB/10007msec) 00:44:11.898 slat (usec): min=13, max=104, avg=60.54, stdev= 9.65 00:44:11.898 clat (msec): min=24, max=279, avg=47.10, stdev=20.16 00:44:11.898 lat (msec): min=24, max=279, avg=47.16, stdev=20.16 00:44:11.898 clat percentiles (msec): 00:44:11.898 | 1.00th=[ 44], 5.00th=[ 44], 10.00th=[ 45], 20.00th=[ 45], 00:44:11.898 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 45], 00:44:11.898 | 70.00th=[ 45], 80.00th=[ 46], 90.00th=[ 46], 95.00th=[ 46], 00:44:11.898 | 99.00th=[ 163], 99.50th=[ 239], 99.90th=[ 279], 99.95th=[ 279], 00:44:11.898 | 99.99th=[ 279] 00:44:11.898 bw ( KiB/s): min= 240, max= 1536, per=4.12%, avg=1333.89, stdev=280.05, samples=19 00:44:11.898 iops : min= 60, max= 384, avg=333.47, stdev=70.01, samples=19 00:44:11.898 lat (msec) : 50=98.04%, 100=0.54%, 250=0.95%, 500=0.48% 00:44:11.898 cpu : usr=96.16%, sys=2.22%, ctx=136, majf=0, minf=1631 00:44:11.898 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:44:11.898 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:11.898 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:11.898 issued rwts: total=3360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:11.898 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:11.898 filename2: (groupid=0, jobs=1): err= 0: pid=3212723: Sun Nov 17 03:04:19 2024 00:44:11.898 read: IOPS=335, 
BW=1343KiB/s (1375kB/s)(13.1MiB/10010msec) 00:44:11.898 slat (nsec): min=11142, max=73174, avg=24037.58, stdev=9739.30 00:44:11.898 clat (msec): min=22, max=359, avg=47.44, stdev=24.15 00:44:11.898 lat (msec): min=22, max=359, avg=47.46, stdev=24.15 00:44:11.898 clat percentiles (msec): 00:44:11.898 | 1.00th=[ 32], 5.00th=[ 45], 10.00th=[ 45], 20.00th=[ 45], 00:44:11.898 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 45], 00:44:11.898 | 70.00th=[ 46], 80.00th=[ 46], 90.00th=[ 46], 95.00th=[ 47], 00:44:11.898 | 99.00th=[ 144], 99.50th=[ 209], 99.90th=[ 359], 99.95th=[ 359], 00:44:11.898 | 99.99th=[ 359] 00:44:11.898 bw ( KiB/s): min= 128, max= 1536, per=4.12%, avg=1333.89, stdev=299.63, samples=19 00:44:11.898 iops : min= 32, max= 384, avg=333.47, stdev=74.91, samples=19 00:44:11.898 lat (msec) : 50=97.92%, 100=0.71%, 250=0.89%, 500=0.48% 00:44:11.898 cpu : usr=98.35%, sys=1.14%, ctx=14, majf=0, minf=1633 00:44:11.898 IO depths : 1=5.6%, 2=11.8%, 4=24.9%, 8=50.7%, 16=6.9%, 32=0.0%, >=64=0.0% 00:44:11.898 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:11.898 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:11.898 issued rwts: total=3360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:11.898 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:11.898 filename2: (groupid=0, jobs=1): err= 0: pid=3212724: Sun Nov 17 03:04:19 2024 00:44:11.898 read: IOPS=335, BW=1343KiB/s (1375kB/s)(13.1MiB/10007msec) 00:44:11.898 slat (usec): min=11, max=107, avg=40.20, stdev=12.06 00:44:11.898 clat (msec): min=35, max=282, avg=47.27, stdev=18.76 00:44:11.898 lat (msec): min=35, max=282, avg=47.31, stdev=18.76 00:44:11.898 clat percentiles (msec): 00:44:11.898 | 1.00th=[ 45], 5.00th=[ 45], 10.00th=[ 45], 20.00th=[ 45], 00:44:11.898 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 45], 00:44:11.898 | 70.00th=[ 45], 80.00th=[ 46], 90.00th=[ 46], 95.00th=[ 46], 00:44:11.898 | 99.00th=[ 155], 99.50th=[ 203], 
99.90th=[ 249], 99.95th=[ 284], 00:44:11.898 | 99.99th=[ 284] 00:44:11.898 bw ( KiB/s): min= 256, max= 1536, per=4.12%, avg=1333.89, stdev=277.55, samples=19 00:44:11.898 iops : min= 64, max= 384, avg=333.47, stdev=69.39, samples=19 00:44:11.898 lat (msec) : 50=98.04%, 100=0.12%, 250=1.79%, 500=0.06% 00:44:11.898 cpu : usr=96.75%, sys=2.07%, ctx=240, majf=0, minf=1633 00:44:11.898 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:44:11.898 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:11.898 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:11.898 issued rwts: total=3360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:11.898 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:11.898 filename2: (groupid=0, jobs=1): err= 0: pid=3212725: Sun Nov 17 03:04:19 2024 00:44:11.898 read: IOPS=335, BW=1342KiB/s (1374kB/s)(13.1MiB/10017msec) 00:44:11.898 slat (nsec): min=8255, max=83508, avg=36344.29, stdev=11632.51 00:44:11.898 clat (msec): min=35, max=259, avg=47.38, stdev=19.09 00:44:11.898 lat (msec): min=35, max=259, avg=47.41, stdev=19.09 00:44:11.898 clat percentiles (msec): 00:44:11.898 | 1.00th=[ 45], 5.00th=[ 45], 10.00th=[ 45], 20.00th=[ 45], 00:44:11.898 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 45], 00:44:11.898 | 70.00th=[ 46], 80.00th=[ 46], 90.00th=[ 46], 95.00th=[ 47], 00:44:11.898 | 99.00th=[ 155], 99.50th=[ 194], 99.90th=[ 259], 99.95th=[ 259], 00:44:11.898 | 99.99th=[ 259] 00:44:11.898 bw ( KiB/s): min= 256, max= 1536, per=4.14%, avg=1337.60, stdev=270.65, samples=20 00:44:11.898 iops : min= 64, max= 384, avg=334.40, stdev=67.66, samples=20 00:44:11.898 lat (msec) : 50=97.74%, 100=0.36%, 250=1.43%, 500=0.48% 00:44:11.898 cpu : usr=97.49%, sys=1.65%, ctx=78, majf=0, minf=1635 00:44:11.898 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:44:11.898 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:44:11.898 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:11.898 issued rwts: total=3360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:11.898 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:11.898 filename2: (groupid=0, jobs=1): err= 0: pid=3212726: Sun Nov 17 03:04:19 2024 00:44:11.898 read: IOPS=336, BW=1346KiB/s (1378kB/s)(13.2MiB/10036msec) 00:44:11.898 slat (usec): min=7, max=128, avg=47.41, stdev=16.84 00:44:11.898 clat (msec): min=35, max=203, avg=47.16, stdev=15.20 00:44:11.898 lat (msec): min=35, max=203, avg=47.21, stdev=15.20 00:44:11.898 clat percentiles (msec): 00:44:11.898 | 1.00th=[ 44], 5.00th=[ 45], 10.00th=[ 45], 20.00th=[ 45], 00:44:11.898 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 45], 00:44:11.898 | 70.00th=[ 45], 80.00th=[ 46], 90.00th=[ 46], 95.00th=[ 46], 00:44:11.898 | 99.00th=[ 144], 99.50th=[ 155], 99.90th=[ 169], 99.95th=[ 203], 00:44:11.898 | 99.99th=[ 205] 00:44:11.898 bw ( KiB/s): min= 383, max= 1536, per=4.15%, avg=1341.50, stdev=243.70, samples=20 00:44:11.898 iops : min= 95, max= 384, avg=335.30, stdev=61.08, samples=20 00:44:11.898 lat (msec) : 50=97.63%, 100=0.06%, 250=2.31% 00:44:11.898 cpu : usr=97.66%, sys=1.60%, ctx=33, majf=0, minf=1634 00:44:11.898 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:44:11.898 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:11.898 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:11.898 issued rwts: total=3376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:11.898 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:11.898 filename2: (groupid=0, jobs=1): err= 0: pid=3212727: Sun Nov 17 03:04:19 2024 00:44:11.898 read: IOPS=335, BW=1343KiB/s (1376kB/s)(13.1MiB/10004msec) 00:44:11.898 slat (nsec): min=6379, max=92419, avg=32124.08, stdev=9895.22 00:44:11.898 clat (msec): min=43, max=168, avg=47.36, stdev=15.19 00:44:11.898 lat (msec): min=43, 
max=168, avg=47.39, stdev=15.19 00:44:11.898 clat percentiles (msec): 00:44:11.898 | 1.00th=[ 45], 5.00th=[ 45], 10.00th=[ 45], 20.00th=[ 45], 00:44:11.898 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 45], 00:44:11.898 | 70.00th=[ 46], 80.00th=[ 46], 90.00th=[ 46], 95.00th=[ 46], 00:44:11.898 | 99.00th=[ 148], 99.50th=[ 155], 99.90th=[ 169], 99.95th=[ 169], 00:44:11.898 | 99.99th=[ 169] 00:44:11.899 bw ( KiB/s): min= 384, max= 1536, per=4.15%, avg=1340.63, stdev=250.13, samples=19 00:44:11.899 iops : min= 96, max= 384, avg=335.16, stdev=62.53, samples=19 00:44:11.899 lat (msec) : 50=97.62%, 250=2.38% 00:44:11.899 cpu : usr=96.96%, sys=1.85%, ctx=176, majf=0, minf=1632 00:44:11.899 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:44:11.899 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:11.899 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:11.899 issued rwts: total=3360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:11.899 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:11.899 filename2: (groupid=0, jobs=1): err= 0: pid=3212728: Sun Nov 17 03:04:19 2024 00:44:11.899 read: IOPS=335, BW=1343KiB/s (1375kB/s)(13.1MiB/10009msec) 00:44:11.899 slat (nsec): min=9378, max=85941, avg=29186.42, stdev=10301.59 00:44:11.899 clat (msec): min=23, max=281, avg=47.37, stdev=20.31 00:44:11.899 lat (msec): min=23, max=281, avg=47.40, stdev=20.31 00:44:11.899 clat percentiles (msec): 00:44:11.899 | 1.00th=[ 44], 5.00th=[ 45], 10.00th=[ 45], 20.00th=[ 45], 00:44:11.899 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 45], 00:44:11.899 | 70.00th=[ 46], 80.00th=[ 46], 90.00th=[ 46], 95.00th=[ 47], 00:44:11.899 | 99.00th=[ 163], 99.50th=[ 241], 99.90th=[ 284], 99.95th=[ 284], 00:44:11.899 | 99.99th=[ 284] 00:44:11.899 bw ( KiB/s): min= 240, max= 1536, per=4.12%, avg=1333.89, stdev=280.05, samples=19 00:44:11.899 iops : min= 60, max= 384, avg=333.47, stdev=70.01, 
samples=19 00:44:11.899 lat (msec) : 50=97.92%, 100=0.65%, 250=0.95%, 500=0.48% 00:44:11.899 cpu : usr=98.14%, sys=1.36%, ctx=21, majf=0, minf=1635 00:44:11.899 IO depths : 1=5.9%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.6%, 32=0.0%, >=64=0.0% 00:44:11.899 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:11.899 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:11.899 issued rwts: total=3360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:11.899 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:11.899 filename2: (groupid=0, jobs=1): err= 0: pid=3212729: Sun Nov 17 03:04:19 2024 00:44:11.899 read: IOPS=338, BW=1353KiB/s (1386kB/s)(13.2MiB/10025msec) 00:44:11.899 slat (nsec): min=6493, max=69454, avg=30625.87, stdev=9544.24 00:44:11.899 clat (msec): min=43, max=164, avg=47.01, stdev=12.66 00:44:11.899 lat (msec): min=43, max=164, avg=47.04, stdev=12.66 00:44:11.899 clat percentiles (msec): 00:44:11.899 | 1.00th=[ 45], 5.00th=[ 45], 10.00th=[ 45], 20.00th=[ 45], 00:44:11.899 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 45], 00:44:11.899 | 70.00th=[ 46], 80.00th=[ 46], 90.00th=[ 46], 95.00th=[ 47], 00:44:11.899 | 99.00th=[ 106], 99.50th=[ 163], 99.90th=[ 165], 99.95th=[ 165], 00:44:11.899 | 99.99th=[ 165] 00:44:11.899 bw ( KiB/s): min= 624, max= 1536, per=4.18%, avg=1350.40, stdev=206.56, samples=20 00:44:11.899 iops : min= 156, max= 384, avg=337.60, stdev=51.64, samples=20 00:44:11.899 lat (msec) : 50=96.70%, 100=2.24%, 250=1.06% 00:44:11.899 cpu : usr=98.37%, sys=1.14%, ctx=17, majf=0, minf=1632 00:44:11.899 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:44:11.899 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:11.899 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:11.899 issued rwts: total=3392,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:11.899 latency : target=0, window=0, percentile=100.00%, depth=16 
00:44:11.899 filename2: (groupid=0, jobs=1): err= 0: pid=3212730: Sun Nov 17 03:04:19 2024 00:44:11.899 read: IOPS=335, BW=1344KiB/s (1376kB/s)(13.1MiB/10002msec) 00:44:11.899 slat (usec): min=12, max=102, avg=42.20, stdev=18.33 00:44:11.899 clat (msec): min=31, max=222, avg=47.25, stdev=17.56 00:44:11.899 lat (msec): min=31, max=222, avg=47.29, stdev=17.56 00:44:11.899 clat percentiles (msec): 00:44:11.899 | 1.00th=[ 44], 5.00th=[ 45], 10.00th=[ 45], 20.00th=[ 45], 00:44:11.899 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 45], 00:44:11.899 | 70.00th=[ 45], 80.00th=[ 46], 90.00th=[ 46], 95.00th=[ 46], 00:44:11.899 | 99.00th=[ 153], 99.50th=[ 159], 99.90th=[ 222], 99.95th=[ 222], 00:44:11.899 | 99.99th=[ 222] 00:44:11.899 bw ( KiB/s): min= 256, max= 1536, per=4.12%, avg=1333.89, stdev=277.55, samples=19 00:44:11.899 iops : min= 64, max= 384, avg=333.47, stdev=69.39, samples=19 00:44:11.899 lat (msec) : 50=97.92%, 100=0.18%, 250=1.90% 00:44:11.899 cpu : usr=97.72%, sys=1.55%, ctx=61, majf=0, minf=1635 00:44:11.899 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:44:11.899 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:11.899 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:11.899 issued rwts: total=3360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:11.899 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:11.899 00:44:11.899 Run status group 0 (all jobs): 00:44:11.899 READ: bw=31.6MiB/s (33.1MB/s), 1342KiB/s-1433KiB/s (1374kB/s-1468kB/s), io=317MiB (332MB), run=10002-10036msec 00:44:12.157 ----------------------------------------------------- 00:44:12.157 Suppressions used: 00:44:12.157 count bytes template 00:44:12.157 45 402 /usr/src/fio/parse.c 00:44:12.157 1 8 libtcmalloc_minimal.so 00:44:12.157 1 904 libcrypto.so 00:44:12.157 ----------------------------------------------------- 00:44:12.157 00:44:12.416 03:04:20 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:44:12.416 03:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:44:12.416 03:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:44:12.416 03:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:44:12.416 03:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:44:12.416 03:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:44:12.416 03:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:12.416 03:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:12.416 03:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:12.416 03:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:44:12.416 03:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:12.416 03:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:12.416 03:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:12.416 03:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:44:12.416 03:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:44:12.416 03:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:44:12.416 03:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:44:12.416 03:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:12.416 03:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:12.416 03:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:12.416 03:04:20 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:44:12.416 03:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:12.416 03:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:12.416 03:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:12.416 03:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:44:12.416 03:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:44:12.416 03:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:44:12.416 03:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:44:12.416 03:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:12.416 03:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:12.416 03:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:12.416 03:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:44:12.416 03:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:12.416 03:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:12.416 03:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:12.416 03:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:44:12.416 03:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:44:12.416 03:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:44:12.416 03:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:44:12.416 03:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:44:12.416 03:04:20 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:44:12.416 03:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:44:12.416 03:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:44:12.416 03:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:44:12.416 03:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:44:12.416 03:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:44:12.416 03:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:44:12.416 03:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:12.416 03:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:12.416 bdev_null0 00:44:12.416 03:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:12.416 03:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:44:12.416 03:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:12.416 03:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:12.416 03:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:12.416 03:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:44:12.416 03:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:12.416 03:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:12.416 03:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:12.416 03:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # 
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:44:12.416 03:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:12.416 03:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:12.416 [2024-11-17 03:04:20.700614] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:12.416 03:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:12.416 03:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:44:12.416 03:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:44:12.416 03:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:44:12.416 03:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:44:12.416 03:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:12.416 03:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:12.416 bdev_null1 00:44:12.416 03:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:12.416 03:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:44:12.416 03:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:12.416 03:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:12.416 03:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:12.416 03:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:44:12.416 03:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:44:12.416 03:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:12.416 03:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:12.416 03:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:44:12.416 03:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:12.416 03:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:12.416 03:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:12.416 03:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:44:12.416 03:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:44:12.416 03:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:44:12.416 03:04:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:44:12.417 03:04:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:44:12.417 03:04:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:44:12.417 03:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:12.417 03:04:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:44:12.417 { 00:44:12.417 "params": { 00:44:12.417 "name": "Nvme$subsystem", 00:44:12.417 "trtype": "$TEST_TRANSPORT", 00:44:12.417 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:12.417 "adrfam": "ipv4", 00:44:12.417 "trsvcid": "$NVMF_PORT", 00:44:12.417 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:12.417 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:12.417 "hdgst": ${hdgst:-false}, 00:44:12.417 "ddgst": ${ddgst:-false} 00:44:12.417 }, 00:44:12.417 "method": 
"bdev_nvme_attach_controller" 00:44:12.417 } 00:44:12.417 EOF 00:44:12.417 )") 00:44:12.417 03:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:12.417 03:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:44:12.417 03:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:44:12.417 03:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:44:12.417 03:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:44:12.417 03:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:44:12.417 03:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:44:12.417 03:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:12.417 03:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:44:12.417 03:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:44:12.417 03:04:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:44:12.417 03:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:44:12.417 03:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:12.417 03:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:44:12.417 03:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:44:12.417 03:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:44:12.417 03:04:20 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:44:12.417 03:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:44:12.417 03:04:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:44:12.417 03:04:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:44:12.417 { 00:44:12.417 "params": { 00:44:12.417 "name": "Nvme$subsystem", 00:44:12.417 "trtype": "$TEST_TRANSPORT", 00:44:12.417 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:12.417 "adrfam": "ipv4", 00:44:12.417 "trsvcid": "$NVMF_PORT", 00:44:12.417 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:12.417 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:12.417 "hdgst": ${hdgst:-false}, 00:44:12.417 "ddgst": ${ddgst:-false} 00:44:12.417 }, 00:44:12.417 "method": "bdev_nvme_attach_controller" 00:44:12.417 } 00:44:12.417 EOF 00:44:12.417 )") 00:44:12.417 03:04:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:44:12.417 03:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:44:12.417 03:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:44:12.417 03:04:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:44:12.417 03:04:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:44:12.417 03:04:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:44:12.417 "params": { 00:44:12.417 "name": "Nvme0", 00:44:12.417 "trtype": "tcp", 00:44:12.417 "traddr": "10.0.0.2", 00:44:12.417 "adrfam": "ipv4", 00:44:12.417 "trsvcid": "4420", 00:44:12.417 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:12.417 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:12.417 "hdgst": false, 00:44:12.417 "ddgst": false 00:44:12.417 }, 00:44:12.417 "method": "bdev_nvme_attach_controller" 00:44:12.417 },{ 00:44:12.417 "params": { 00:44:12.417 "name": "Nvme1", 00:44:12.417 "trtype": "tcp", 00:44:12.417 "traddr": "10.0.0.2", 00:44:12.417 "adrfam": "ipv4", 00:44:12.417 "trsvcid": "4420", 00:44:12.417 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:44:12.417 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:44:12.417 "hdgst": false, 00:44:12.417 "ddgst": false 00:44:12.417 }, 00:44:12.417 "method": "bdev_nvme_attach_controller" 00:44:12.417 }' 00:44:12.417 03:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:44:12.417 03:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:44:12.417 03:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # break 00:44:12.417 03:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:44:12.417 03:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:12.674 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:44:12.674 ... 
00:44:12.675 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:44:12.675 ... 00:44:12.675 fio-3.35 00:44:12.675 Starting 4 threads 00:44:19.231 00:44:19.231 filename0: (groupid=0, jobs=1): err= 0: pid=3214235: Sun Nov 17 03:04:27 2024 00:44:19.231 read: IOPS=1448, BW=11.3MiB/s (11.9MB/s)(56.6MiB/5004msec) 00:44:19.231 slat (nsec): min=7125, max=50731, avg=14481.21, stdev=5266.29 00:44:19.231 clat (usec): min=1906, max=9075, avg=5462.21, stdev=644.40 00:44:19.231 lat (usec): min=1917, max=9086, avg=5476.69, stdev=644.29 00:44:19.231 clat percentiles (usec): 00:44:19.231 | 1.00th=[ 3425], 5.00th=[ 4359], 10.00th=[ 4686], 20.00th=[ 5014], 00:44:19.231 | 30.00th=[ 5211], 40.00th=[ 5407], 50.00th=[ 5538], 60.00th=[ 5669], 00:44:19.231 | 70.00th=[ 5800], 80.00th=[ 5932], 90.00th=[ 6063], 95.00th=[ 6390], 00:44:19.231 | 99.00th=[ 6980], 99.50th=[ 7242], 99.90th=[ 7963], 99.95th=[ 8356], 00:44:19.231 | 99.99th=[ 9110] 00:44:19.231 bw ( KiB/s): min=11072, max=12352, per=26.51%, avg=11596.80, stdev=449.32, samples=10 00:44:19.231 iops : min= 1384, max= 1544, avg=1449.60, stdev=56.16, samples=10 00:44:19.231 lat (msec) : 2=0.06%, 4=2.58%, 10=97.37% 00:44:19.231 cpu : usr=93.10%, sys=6.30%, ctx=6, majf=0, minf=1634 00:44:19.231 IO depths : 1=1.7%, 2=20.1%, 4=54.0%, 8=24.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:19.231 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:19.231 complete : 0=0.0%, 4=90.7%, 8=9.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:19.231 issued rwts: total=7249,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:19.231 latency : target=0, window=0, percentile=100.00%, depth=8 00:44:19.231 filename0: (groupid=0, jobs=1): err= 0: pid=3214236: Sun Nov 17 03:04:27 2024 00:44:19.231 read: IOPS=1352, BW=10.6MiB/s (11.1MB/s)(52.9MiB/5003msec) 00:44:19.231 slat (nsec): min=6613, max=49130, avg=16317.22, stdev=5442.99 00:44:19.231 clat (usec): min=1111, max=10853, avg=5854.15, 
stdev=943.00 00:44:19.231 lat (usec): min=1130, max=10864, avg=5870.47, stdev=942.80 00:44:19.231 clat percentiles (usec): 00:44:19.231 | 1.00th=[ 3523], 5.00th=[ 4686], 10.00th=[ 5014], 20.00th=[ 5342], 00:44:19.231 | 30.00th=[ 5473], 40.00th=[ 5669], 50.00th=[ 5800], 60.00th=[ 5866], 00:44:19.232 | 70.00th=[ 5997], 80.00th=[ 6128], 90.00th=[ 6915], 95.00th=[ 7701], 00:44:19.232 | 99.00th=[ 9372], 99.50th=[ 9765], 99.90th=[10421], 99.95th=[10421], 00:44:19.232 | 99.99th=[10814] 00:44:19.232 bw ( KiB/s): min=10048, max=11264, per=24.73%, avg=10816.00, stdev=313.90, samples=10 00:44:19.232 iops : min= 1256, max= 1408, avg=1352.00, stdev=39.24, samples=10 00:44:19.232 lat (msec) : 2=0.16%, 4=1.58%, 10=97.96%, 20=0.30% 00:44:19.232 cpu : usr=92.48%, sys=6.92%, ctx=7, majf=0, minf=1634 00:44:19.232 IO depths : 1=0.3%, 2=16.3%, 4=56.1%, 8=27.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:19.232 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:19.232 complete : 0=0.0%, 4=92.2%, 8=7.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:19.232 issued rwts: total=6768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:19.232 latency : target=0, window=0, percentile=100.00%, depth=8 00:44:19.232 filename1: (groupid=0, jobs=1): err= 0: pid=3214237: Sun Nov 17 03:04:27 2024 00:44:19.232 read: IOPS=1385, BW=10.8MiB/s (11.4MB/s)(54.2MiB/5004msec) 00:44:19.232 slat (nsec): min=6875, max=45529, avg=15890.24, stdev=5236.61 00:44:19.232 clat (usec): min=1542, max=14062, avg=5712.98, stdev=824.91 00:44:19.232 lat (usec): min=1560, max=14085, avg=5728.87, stdev=824.78 00:44:19.232 clat percentiles (usec): 00:44:19.232 | 1.00th=[ 3556], 5.00th=[ 4621], 10.00th=[ 4948], 20.00th=[ 5276], 00:44:19.232 | 30.00th=[ 5407], 40.00th=[ 5538], 50.00th=[ 5735], 60.00th=[ 5866], 00:44:19.232 | 70.00th=[ 5932], 80.00th=[ 5997], 90.00th=[ 6390], 95.00th=[ 7242], 00:44:19.232 | 99.00th=[ 8586], 99.50th=[ 9241], 99.90th=[10290], 99.95th=[10945], 00:44:19.232 | 99.99th=[14091] 00:44:19.232 bw ( 
KiB/s): min=10624, max=11648, per=25.34%, avg=11083.70, stdev=309.97, samples=10 00:44:19.232 iops : min= 1328, max= 1456, avg=1385.40, stdev=38.82, samples=10 00:44:19.232 lat (msec) : 2=0.06%, 4=1.82%, 10=97.92%, 20=0.20% 00:44:19.232 cpu : usr=93.18%, sys=6.24%, ctx=8, majf=0, minf=1637 00:44:19.232 IO depths : 1=0.7%, 2=19.5%, 4=54.0%, 8=25.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:19.232 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:19.232 complete : 0=0.0%, 4=91.2%, 8=8.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:19.232 issued rwts: total=6934,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:19.232 latency : target=0, window=0, percentile=100.00%, depth=8 00:44:19.232 filename1: (groupid=0, jobs=1): err= 0: pid=3214238: Sun Nov 17 03:04:27 2024 00:44:19.232 read: IOPS=1281, BW=10.0MiB/s (10.5MB/s)(50.1MiB/5002msec) 00:44:19.232 slat (nsec): min=6492, max=54502, avg=16225.07, stdev=5856.91 00:44:19.232 clat (usec): min=1274, max=13271, avg=6185.77, stdev=1280.95 00:44:19.232 lat (usec): min=1293, max=13294, avg=6201.99, stdev=1280.59 00:44:19.232 clat percentiles (usec): 00:44:19.232 | 1.00th=[ 3523], 5.00th=[ 4752], 10.00th=[ 5145], 20.00th=[ 5473], 00:44:19.232 | 30.00th=[ 5604], 40.00th=[ 5800], 50.00th=[ 5866], 60.00th=[ 5997], 00:44:19.232 | 70.00th=[ 6128], 80.00th=[ 6718], 90.00th=[ 7898], 95.00th=[ 9241], 00:44:19.232 | 99.00th=[10421], 99.50th=[10683], 99.90th=[11076], 99.95th=[12256], 00:44:19.232 | 99.99th=[13304] 00:44:19.232 bw ( KiB/s): min= 9712, max=11168, per=23.42%, avg=10246.80, stdev=465.34, samples=10 00:44:19.232 iops : min= 1214, max= 1396, avg=1280.80, stdev=58.20, samples=10 00:44:19.232 lat (msec) : 2=0.06%, 4=1.67%, 10=95.69%, 20=2.57% 00:44:19.232 cpu : usr=93.66%, sys=5.76%, ctx=5, majf=0, minf=1636 00:44:19.232 IO depths : 1=0.4%, 2=11.1%, 4=61.0%, 8=27.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:19.232 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:19.232 complete : 0=0.0%, 
4=92.7%, 8=7.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:19.232 issued rwts: total=6411,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:19.232 latency : target=0, window=0, percentile=100.00%, depth=8 00:44:19.232 00:44:19.232 Run status group 0 (all jobs): 00:44:19.232 READ: bw=42.7MiB/s (44.8MB/s), 10.0MiB/s-11.3MiB/s (10.5MB/s-11.9MB/s), io=214MiB (224MB), run=5002-5004msec 00:44:19.799 ----------------------------------------------------- 00:44:19.799 Suppressions used: 00:44:19.799 count bytes template 00:44:19.799 6 52 /usr/src/fio/parse.c 00:44:19.799 1 8 libtcmalloc_minimal.so 00:44:19.799 1 904 libcrypto.so 00:44:19.799 ----------------------------------------------------- 00:44:19.799 00:44:19.799 03:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:44:19.799 03:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:44:19.799 03:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:44:19.799 03:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:44:19.799 03:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:44:19.799 03:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:44:19.799 03:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:19.799 03:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:19.799 03:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:19.799 03:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:44:19.799 03:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:19.799 03:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:19.799 03:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:44:19.799 03:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:44:19.799 03:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:44:19.799 03:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:44:19.799 03:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:44:19.799 03:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:19.799 03:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:19.799 03:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:19.799 03:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:44:19.799 03:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:19.799 03:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:19.799 03:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:19.799 00:44:19.799 real 0m28.076s 00:44:19.799 user 4m34.962s 00:44:19.799 sys 0m7.826s 00:44:19.799 03:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:19.799 03:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:19.799 ************************************ 00:44:19.799 END TEST fio_dif_rand_params 00:44:19.799 ************************************ 00:44:20.058 03:04:28 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:44:20.058 03:04:28 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:44:20.058 03:04:28 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:20.058 03:04:28 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:20.058 ************************************ 00:44:20.058 START TEST 
fio_dif_digest 00:44:20.058 ************************************ 00:44:20.058 03:04:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:44:20.058 03:04:28 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:44:20.058 03:04:28 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:44:20.058 03:04:28 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:44:20.058 03:04:28 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:44:20.058 03:04:28 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:44:20.058 03:04:28 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:44:20.058 03:04:28 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:44:20.058 03:04:28 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:44:20.058 03:04:28 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:44:20.058 03:04:28 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:44:20.058 03:04:28 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:44:20.058 03:04:28 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:44:20.058 03:04:28 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:44:20.058 03:04:28 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:44:20.058 03:04:28 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:44:20.058 03:04:28 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:44:20.058 03:04:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:20.058 03:04:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:44:20.058 bdev_null0 00:44:20.058 03:04:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:20.058 03:04:28 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # 
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:44:20.058 03:04:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:20.058 03:04:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:44:20.058 03:04:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:20.058 03:04:28 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:44:20.058 03:04:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:20.058 03:04:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:44:20.058 03:04:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:20.058 03:04:28 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:44:20.058 03:04:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:20.058 03:04:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:44:20.058 [2024-11-17 03:04:28.345664] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:20.058 03:04:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:20.058 03:04:28 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:44:20.058 03:04:28 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:20.058 03:04:28 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:44:20.058 03:04:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:20.058 03:04:28 nvmf_dif.fio_dif_digest -- 
target/dif.sh@51 -- # gen_nvmf_target_json 0 00:44:20.058 03:04:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:44:20.058 03:04:28 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:44:20.058 03:04:28 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:44:20.058 03:04:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:44:20.058 03:04:28 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:44:20.058 03:04:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:44:20.058 03:04:28 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:44:20.058 03:04:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:20.058 03:04:28 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:44:20.058 03:04:28 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:44:20.058 03:04:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:44:20.058 03:04:28 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:44:20.058 { 00:44:20.058 "params": { 00:44:20.058 "name": "Nvme$subsystem", 00:44:20.058 "trtype": "$TEST_TRANSPORT", 00:44:20.058 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:20.058 "adrfam": "ipv4", 00:44:20.058 "trsvcid": "$NVMF_PORT", 00:44:20.058 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:20.058 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:20.058 "hdgst": ${hdgst:-false}, 00:44:20.058 "ddgst": ${ddgst:-false} 00:44:20.058 }, 00:44:20.058 "method": "bdev_nvme_attach_controller" 00:44:20.058 } 00:44:20.058 EOF 00:44:20.058 )") 00:44:20.058 03:04:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:44:20.058 03:04:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 
-- # for sanitizer in "${sanitizers[@]}" 00:44:20.058 03:04:28 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:44:20.058 03:04:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:20.058 03:04:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:44:20.058 03:04:28 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:44:20.058 03:04:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:44:20.058 03:04:28 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:44:20.058 03:04:28 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 00:44:20.059 03:04:28 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:44:20.059 03:04:28 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:44:20.059 "params": { 00:44:20.059 "name": "Nvme0", 00:44:20.059 "trtype": "tcp", 00:44:20.059 "traddr": "10.0.0.2", 00:44:20.059 "adrfam": "ipv4", 00:44:20.059 "trsvcid": "4420", 00:44:20.059 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:20.059 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:20.059 "hdgst": true, 00:44:20.059 "ddgst": true 00:44:20.059 }, 00:44:20.059 "method": "bdev_nvme_attach_controller" 00:44:20.059 }' 00:44:20.059 03:04:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:44:20.059 03:04:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:44:20.059 03:04:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1351 -- # break 00:44:20.059 03:04:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:44:20.059 03:04:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 
/dev/fd/61 00:44:20.317 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:44:20.317 ... 00:44:20.317 fio-3.35 00:44:20.317 Starting 3 threads 00:44:32.581 00:44:32.581 filename0: (groupid=0, jobs=1): err= 0: pid=3215223: Sun Nov 17 03:04:39 2024 00:44:32.581 read: IOPS=171, BW=21.4MiB/s (22.5MB/s)(215MiB/10043msec) 00:44:32.581 slat (nsec): min=5894, max=43575, avg=21321.84, stdev=2416.93 00:44:32.581 clat (usec): min=14084, max=55039, avg=17437.65, stdev=1610.50 00:44:32.581 lat (usec): min=14105, max=55060, avg=17458.97, stdev=1610.52 00:44:32.581 clat percentiles (usec): 00:44:32.581 | 1.00th=[14877], 5.00th=[15664], 10.00th=[16057], 20.00th=[16581], 00:44:32.581 | 30.00th=[16909], 40.00th=[17171], 50.00th=[17433], 60.00th=[17695], 00:44:32.581 | 70.00th=[17957], 80.00th=[18220], 90.00th=[18744], 95.00th=[19006], 00:44:32.581 | 99.00th=[20055], 99.50th=[20317], 99.90th=[51119], 99.95th=[54789], 00:44:32.581 | 99.99th=[54789] 00:44:32.581 bw ( KiB/s): min=21461, max=22784, per=32.88%, avg=22026.65, stdev=297.53, samples=20 00:44:32.581 iops : min= 167, max= 178, avg=172.05, stdev= 2.39, samples=20 00:44:32.581 lat (msec) : 20=98.84%, 50=1.04%, 100=0.12% 00:44:32.581 cpu : usr=93.87%, sys=5.53%, ctx=16, majf=0, minf=1636 00:44:32.581 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:32.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:32.581 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:32.581 issued rwts: total=1723,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:32.581 latency : target=0, window=0, percentile=100.00%, depth=3 00:44:32.581 filename0: (groupid=0, jobs=1): err= 0: pid=3215224: Sun Nov 17 03:04:39 2024 00:44:32.581 read: IOPS=184, BW=23.1MiB/s (24.2MB/s)(232MiB/10047msec) 00:44:32.581 slat (nsec): min=9529, max=46454, avg=21770.86, stdev=2556.82 00:44:32.581 clat (usec): min=12631, 
max=53660, avg=16190.39, stdev=1512.96 00:44:32.581 lat (usec): min=12652, max=53682, avg=16212.16, stdev=1513.02 00:44:32.581 clat percentiles (usec): 00:44:32.581 | 1.00th=[13698], 5.00th=[14353], 10.00th=[14877], 20.00th=[15270], 00:44:32.581 | 30.00th=[15664], 40.00th=[15926], 50.00th=[16188], 60.00th=[16450], 00:44:32.581 | 70.00th=[16712], 80.00th=[16909], 90.00th=[17433], 95.00th=[17695], 00:44:32.581 | 99.00th=[18482], 99.50th=[18744], 99.90th=[47449], 99.95th=[53740], 00:44:32.581 | 99.99th=[53740] 00:44:32.581 bw ( KiB/s): min=22784, max=24320, per=35.43%, avg=23731.20, stdev=424.32, samples=20 00:44:32.581 iops : min= 178, max= 190, avg=185.40, stdev= 3.32, samples=20 00:44:32.581 lat (msec) : 20=99.84%, 50=0.11%, 100=0.05% 00:44:32.581 cpu : usr=93.51%, sys=5.78%, ctx=63, majf=0, minf=1636 00:44:32.581 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:32.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:32.581 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:32.581 issued rwts: total=1856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:32.581 latency : target=0, window=0, percentile=100.00%, depth=3 00:44:32.581 filename0: (groupid=0, jobs=1): err= 0: pid=3215225: Sun Nov 17 03:04:39 2024 00:44:32.581 read: IOPS=167, BW=20.9MiB/s (21.9MB/s)(210MiB/10045msec) 00:44:32.581 slat (nsec): min=5835, max=39713, avg=21060.82, stdev=2196.66 00:44:32.581 clat (usec): min=14393, max=53789, avg=17896.49, stdev=1613.27 00:44:32.581 lat (usec): min=14413, max=53809, avg=17917.55, stdev=1613.20 00:44:32.581 clat percentiles (usec): 00:44:32.581 | 1.00th=[15401], 5.00th=[16188], 10.00th=[16450], 20.00th=[16909], 00:44:32.581 | 30.00th=[17171], 40.00th=[17433], 50.00th=[17695], 60.00th=[17957], 00:44:32.581 | 70.00th=[18482], 80.00th=[18744], 90.00th=[19268], 95.00th=[19792], 00:44:32.581 | 99.00th=[20841], 99.50th=[21365], 99.90th=[46400], 99.95th=[53740], 00:44:32.581 | 
99.99th=[53740] 00:44:32.581 bw ( KiB/s): min=20736, max=22016, per=32.05%, avg=21467.70, stdev=389.43, samples=20 00:44:32.581 iops : min= 162, max= 172, avg=167.70, stdev= 3.06, samples=20 00:44:32.581 lat (msec) : 20=96.78%, 50=3.16%, 100=0.06% 00:44:32.581 cpu : usr=93.94%, sys=5.47%, ctx=15, majf=0, minf=1637 00:44:32.581 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:32.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:32.581 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:32.581 issued rwts: total=1679,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:32.581 latency : target=0, window=0, percentile=100.00%, depth=3 00:44:32.581 00:44:32.581 Run status group 0 (all jobs): 00:44:32.581 READ: bw=65.4MiB/s (68.6MB/s), 20.9MiB/s-23.1MiB/s (21.9MB/s-24.2MB/s), io=657MiB (689MB), run=10043-10047msec 00:44:32.581 ----------------------------------------------------- 00:44:32.581 Suppressions used: 00:44:32.581 count bytes template 00:44:32.581 5 44 /usr/src/fio/parse.c 00:44:32.581 1 8 libtcmalloc_minimal.so 00:44:32.581 1 904 libcrypto.so 00:44:32.581 ----------------------------------------------------- 00:44:32.581 00:44:32.581 03:04:40 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:44:32.581 03:04:40 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:44:32.581 03:04:40 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:44:32.581 03:04:40 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:44:32.581 03:04:40 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:44:32.581 03:04:40 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:44:32.581 03:04:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:32.581 03:04:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:44:32.581 03:04:40 
nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:32.581 03:04:40 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:44:32.581 03:04:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:32.581 03:04:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:44:32.581 03:04:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:32.581 00:44:32.581 real 0m12.521s 00:44:32.581 user 0m30.571s 00:44:32.581 sys 0m2.240s 00:44:32.581 03:04:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:32.581 03:04:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:44:32.581 ************************************ 00:44:32.581 END TEST fio_dif_digest 00:44:32.581 ************************************ 00:44:32.581 03:04:40 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:44:32.581 03:04:40 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:44:32.581 03:04:40 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:44:32.581 03:04:40 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:44:32.581 03:04:40 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:44:32.581 03:04:40 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:44:32.581 03:04:40 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:44:32.581 03:04:40 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:44:32.581 rmmod nvme_tcp 00:44:32.581 rmmod nvme_fabrics 00:44:32.581 rmmod nvme_keyring 00:44:32.581 03:04:40 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:44:32.581 03:04:40 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:44:32.581 03:04:40 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:44:32.581 03:04:40 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 3208333 ']' 00:44:32.581 03:04:40 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 3208333 00:44:32.581 03:04:40 nvmf_dif -- 
common/autotest_common.sh@954 -- # '[' -z 3208333 ']' 00:44:32.581 03:04:40 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 3208333 00:44:32.581 03:04:40 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:44:32.581 03:04:40 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:32.581 03:04:40 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3208333 00:44:32.581 03:04:40 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:32.581 03:04:40 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:32.581 03:04:40 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3208333' 00:44:32.581 killing process with pid 3208333 00:44:32.581 03:04:40 nvmf_dif -- common/autotest_common.sh@973 -- # kill 3208333 00:44:32.581 03:04:40 nvmf_dif -- common/autotest_common.sh@978 -- # wait 3208333 00:44:33.955 03:04:42 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:44:33.955 03:04:42 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:44:34.890 Waiting for block devices as requested 00:44:34.890 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:44:34.890 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:44:35.148 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:44:35.148 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:44:35.148 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:44:35.148 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:44:35.407 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:44:35.407 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:44:35.407 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:44:35.407 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:44:35.665 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:44:35.665 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:44:35.665 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:44:35.665 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 
00:44:35.665 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:44:35.924 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:44:35.924 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:44:35.924 03:04:44 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:44:35.924 03:04:44 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:44:35.924 03:04:44 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:44:35.924 03:04:44 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:44:35.924 03:04:44 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:44:35.924 03:04:44 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:44:35.924 03:04:44 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:44:35.924 03:04:44 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:44:35.924 03:04:44 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:35.924 03:04:44 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:44:35.924 03:04:44 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:38.455 03:04:46 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:44:38.455 00:44:38.455 real 1m16.079s 00:44:38.455 user 6m44.528s 00:44:38.455 sys 0m19.580s 00:44:38.455 03:04:46 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:38.455 03:04:46 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:38.455 ************************************ 00:44:38.455 END TEST nvmf_dif 00:44:38.455 ************************************ 00:44:38.455 03:04:46 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:44:38.455 03:04:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:44:38.455 03:04:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:38.455 03:04:46 -- common/autotest_common.sh@10 -- # set +x 00:44:38.455 ************************************ 
00:44:38.455 START TEST nvmf_abort_qd_sizes 00:44:38.455 ************************************ 00:44:38.455 03:04:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:44:38.455 * Looking for test storage... 00:44:38.455 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:44:38.455 03:04:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:44:38.455 03:04:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:44:38.455 03:04:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:44:38.455 03:04:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:44:38.455 03:04:46 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:38.455 03:04:46 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:38.456 03:04:46 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:38.456 03:04:46 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:44:38.456 03:04:46 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:44:38.456 03:04:46 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:44:38.456 03:04:46 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:44:38.456 03:04:46 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:44:38.456 03:04:46 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:44:38.456 03:04:46 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:44:38.456 03:04:46 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:38.456 03:04:46 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:44:38.456 03:04:46 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:44:38.456 03:04:46 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:38.456 
03:04:46 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:44:38.456 03:04:46 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:44:38.456 03:04:46 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:44:38.456 03:04:46 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:38.456 03:04:46 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:44:38.456 03:04:46 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:44:38.456 03:04:46 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:44:38.456 03:04:46 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:44:38.456 03:04:46 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:38.456 03:04:46 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:44:38.456 03:04:46 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:44:38.456 03:04:46 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:38.456 03:04:46 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:38.456 03:04:46 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:44:38.456 03:04:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:38.456 03:04:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:44:38.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:38.456 --rc genhtml_branch_coverage=1 00:44:38.456 --rc genhtml_function_coverage=1 00:44:38.456 --rc genhtml_legend=1 00:44:38.456 --rc geninfo_all_blocks=1 00:44:38.456 --rc geninfo_unexecuted_blocks=1 00:44:38.456 00:44:38.456 ' 00:44:38.456 03:04:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:44:38.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:38.456 --rc genhtml_branch_coverage=1 00:44:38.456 --rc 
genhtml_function_coverage=1 00:44:38.456 --rc genhtml_legend=1 00:44:38.456 --rc geninfo_all_blocks=1 00:44:38.456 --rc geninfo_unexecuted_blocks=1 00:44:38.456 00:44:38.456 ' 00:44:38.456 03:04:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:44:38.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:38.456 --rc genhtml_branch_coverage=1 00:44:38.456 --rc genhtml_function_coverage=1 00:44:38.456 --rc genhtml_legend=1 00:44:38.456 --rc geninfo_all_blocks=1 00:44:38.456 --rc geninfo_unexecuted_blocks=1 00:44:38.456 00:44:38.456 ' 00:44:38.456 03:04:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:44:38.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:38.456 --rc genhtml_branch_coverage=1 00:44:38.456 --rc genhtml_function_coverage=1 00:44:38.456 --rc genhtml_legend=1 00:44:38.456 --rc geninfo_all_blocks=1 00:44:38.456 --rc geninfo_unexecuted_blocks=1 00:44:38.456 00:44:38.456 ' 00:44:38.456 03:04:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:38.456 03:04:46 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:44:38.456 03:04:46 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:38.456 03:04:46 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:38.456 03:04:46 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:38.456 03:04:46 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:38.456 03:04:46 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:38.456 03:04:46 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:38.456 03:04:46 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:38.456 03:04:46 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:38.456 03:04:46 nvmf_abort_qd_sizes -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:38.456 03:04:46 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:38.456 03:04:46 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:44:38.456 03:04:46 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:44:38.456 03:04:46 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:38.456 03:04:46 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:38.456 03:04:46 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:38.456 03:04:46 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:38.456 03:04:46 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:38.456 03:04:46 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:44:38.456 03:04:46 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:38.456 03:04:46 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:38.456 03:04:46 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:38.456 03:04:46 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:38.456 03:04:46 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:38.456 03:04:46 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:38.456 03:04:46 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:44:38.456 03:04:46 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:38.456 03:04:46 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:44:38.456 03:04:46 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:38.456 03:04:46 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:38.456 03:04:46 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:38.456 03:04:46 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:38.456 03:04:46 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:38.456 03:04:46 
nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:44:38.456 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:44:38.456 03:04:46 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:38.456 03:04:46 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:38.456 03:04:46 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:38.456 03:04:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:44:38.456 03:04:46 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:44:38.456 03:04:46 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:44:38.456 03:04:46 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:44:38.456 03:04:46 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:44:38.456 03:04:46 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:44:38.456 03:04:46 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:38.456 03:04:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:44:38.456 03:04:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:38.456 03:04:46 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:44:38.456 03:04:46 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:44:38.456 03:04:46 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:44:38.456 03:04:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:40.358 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:44:40.358 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:44:40.358 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:44:40.358 03:04:48 nvmf_abort_qd_sizes -- 
nvmf/common.sh@316 -- # pci_net_devs=() 00:44:40.358 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:44:40.358 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:44:40.358 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:44:40.358 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:44:40.358 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:44:40.358 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:44:40.358 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:44:40.358 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:44:40.358 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:44:40.358 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:44:40.358 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:44:40.358 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:44:40.358 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:44:40.358 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:44:40.358 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:44:40.358 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:44:40.358 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:44:40.358 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:44:40.358 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:44:40.358 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:44:40.358 03:04:48 
nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:44:40.359 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:44:40.359 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:44:40.359 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:44:40.359 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:44:40.359 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:44:40.359 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:44:40.359 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:44:40.359 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:44:40.359 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:40.359 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:44:40.359 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:44:40.359 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:40.359 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:40.359 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:40.359 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:40.359 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:44:40.359 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:40.359 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:44:40.359 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:44:40.359 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:40.359 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:44:40.359 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:40.359 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:40.359 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:44:40.359 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:44:40.359 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:44:40.359 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:44:40.359 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:44:40.359 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:40.359 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:44:40.359 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:40.359 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:44:40.359 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:44:40.359 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:40.359 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:44:40.359 Found net devices under 0000:0a:00.0: cvl_0_0 00:44:40.359 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:44:40.359 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:44:40.359 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:40.359 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:44:40.359 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:40.359 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:44:40.359 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:44:40.359 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:40.359 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:44:40.359 Found net devices under 0000:0a:00.1: cvl_0_1 00:44:40.359 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:44:40.359 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:44:40.359 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:44:40.359 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:44:40.359 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:44:40.359 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:44:40.359 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:44:40.359 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:44:40.359 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:44:40.359 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:44:40.359 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:44:40.359 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:44:40.359 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:44:40.359 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:44:40.359 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:44:40.359 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:44:40.359 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:44:40.359 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:44:40.359 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:44:40.359 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:44:40.359 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:44:40.359 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:44:40.359 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:44:40.359 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:44:40.359 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:44:40.359 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:44:40.359 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:44:40.359 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:44:40.359 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:44:40.359 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:44:40.359 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:44:40.359 00:44:40.359 --- 10.0.0.2 ping statistics --- 00:44:40.359 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:40.359 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:44:40.359 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:44:40.359 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:44:40.359 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.069 ms 00:44:40.359 00:44:40.359 --- 10.0.0.1 ping statistics --- 00:44:40.359 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:40.359 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:44:40.359 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:44:40.359 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:44:40.359 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:44:40.359 03:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:44:41.746 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:44:41.746 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:44:41.746 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:44:41.746 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:44:41.746 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:44:41.746 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:44:41.746 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:44:41.746 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:44:41.746 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:44:41.746 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:44:41.746 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:44:41.746 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:44:41.746 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:44:41.746 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:44:41.746 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:44:41.746 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:44:42.314 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:44:42.573 03:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:44:42.573 03:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:44:42.573 03:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:44:42.573 03:04:50 
nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:44:42.573 03:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:44:42.573 03:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:44:42.573 03:04:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:44:42.573 03:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:44:42.573 03:04:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:42.573 03:04:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:42.573 03:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=3220211 00:44:42.573 03:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:44:42.573 03:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 3220211 00:44:42.573 03:04:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 3220211 ']' 00:44:42.573 03:04:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:42.573 03:04:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:42.573 03:04:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:42.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:42.573 03:04:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:42.573 03:04:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:42.832 [2024-11-17 03:04:51.046732] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:44:42.832 [2024-11-17 03:04:51.046879] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:42.832 [2024-11-17 03:04:51.191699] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:44:43.090 [2024-11-17 03:04:51.312888] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:44:43.090 [2024-11-17 03:04:51.312972] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:44:43.090 [2024-11-17 03:04:51.312996] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:44:43.090 [2024-11-17 03:04:51.313016] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:44:43.090 [2024-11-17 03:04:51.313032] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:44:43.091 [2024-11-17 03:04:51.315492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:44:43.091 [2024-11-17 03:04:51.315555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:44:43.091 [2024-11-17 03:04:51.315600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:43.091 [2024-11-17 03:04:51.315620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:44:43.657 03:04:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:43.657 03:04:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:44:43.657 03:04:51 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:44:43.657 03:04:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:43.657 03:04:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:43.657 03:04:52 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:44:43.657 03:04:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:44:43.657 03:04:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:44:43.657 03:04:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:44:43.657 03:04:52 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:44:43.657 03:04:52 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:44:43.657 03:04:52 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:88:00.0 ]] 00:44:43.657 03:04:52 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:44:43.657 03:04:52 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:44:43.657 03:04:52 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 
00:44:43.657 03:04:52 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:44:43.657 03:04:52 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:44:43.657 03:04:52 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:44:43.657 03:04:52 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:44:43.657 03:04:52 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:88:00.0 00:44:43.657 03:04:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:44:43.657 03:04:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:44:43.657 03:04:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:44:43.657 03:04:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:44:43.657 03:04:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:43.657 03:04:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:43.657 ************************************ 00:44:43.657 START TEST spdk_target_abort 00:44:43.657 ************************************ 00:44:43.657 03:04:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:44:43.657 03:04:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:44:43.657 03:04:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:44:43.657 03:04:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:43.657 03:04:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:46.937 spdk_targetn1 00:44:46.937 03:04:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:46.937 03:04:54 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:44:46.937 03:04:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:46.937 03:04:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:46.937 [2024-11-17 03:04:54.934262] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:46.937 03:04:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:46.937 03:04:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:44:46.937 03:04:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:46.937 03:04:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:46.937 03:04:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:46.937 03:04:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:44:46.937 03:04:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:46.937 03:04:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:46.937 03:04:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:46.937 03:04:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:44:46.937 03:04:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:46.937 03:04:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:46.937 [2024-11-17 03:04:54.981609] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:46.937 03:04:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:46.937 03:04:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:44:46.937 03:04:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:44:46.937 03:04:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:44:46.937 03:04:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:44:46.937 03:04:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:44:46.937 03:04:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:44:46.937 03:04:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:44:46.937 03:04:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:44:46.937 03:04:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:44:46.937 03:04:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:46.937 03:04:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:44:46.937 03:04:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:46.937 03:04:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:44:46.937 03:04:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:46.937 03:04:54 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:44:46.937 03:04:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:46.937 03:04:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:44:46.937 03:04:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:46.937 03:04:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:46.937 03:04:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:44:46.937 03:04:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:50.220 Initializing NVMe Controllers 00:44:50.220 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:44:50.220 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:44:50.220 Initialization complete. Launching workers. 
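[Annotation] The xtrace above shows `rabort()` in target/abort_qd_sizes.sh assembling the `-r` transport ID string one field at a time (`qds=(4 24 64)`, then the `for r in trtype adrfam traddr trsvcid subnqn` loop) before invoking build/examples/abort once per queue depth. A dry-run sketch of that loop, with values taken from this trace and an `echo` standing in for the real abort binary:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the rabort() loop from target/abort_qd_sizes.sh.
# The echo stands in for build/examples/abort; values mirror this trace.
trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420
subnqn=nqn.2016-06.io.spdk:testnqn
qds=(4 24 64)

# Build the -r transport ID string field by field, as the trace shows
# (trtype:tcp, then "trtype:tcp adrfam:IPv4", and so on).
target=
for r in trtype adrfam traddr trsvcid subnqn; do
    target="${target:+$target }$r:${!r}"
done

# One abort run per queue depth; the real script calls the abort example here.
for qd in "${qds[@]}"; do
    echo "abort -q $qd -w rw -M 50 -o 4096 -r '$target'"
done
```

Each emitted line matches the command visible in the trace, e.g. `abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'`.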
00:44:50.220 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9464, failed: 0 00:44:50.220 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1271, failed to submit 8193 00:44:50.220 success 787, unsuccessful 484, failed 0 00:44:50.220 03:04:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:44:50.220 03:04:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:53.540 Initializing NVMe Controllers 00:44:53.540 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:44:53.540 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:44:53.540 Initialization complete. Launching workers. 00:44:53.540 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8402, failed: 0 00:44:53.540 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1247, failed to submit 7155 00:44:53.540 success 287, unsuccessful 960, failed 0 00:44:53.540 03:05:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:44:53.540 03:05:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:56.820 Initializing NVMe Controllers 00:44:56.820 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:44:56.820 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:44:56.820 Initialization complete. Launching workers. 
00:44:56.820 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 27492, failed: 0 00:44:56.820 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2668, failed to submit 24824 00:44:56.820 success 252, unsuccessful 2416, failed 0 00:44:56.820 03:05:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:44:56.820 03:05:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:56.820 03:05:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:56.820 03:05:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:56.820 03:05:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:44:56.820 03:05:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:56.820 03:05:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:58.192 03:05:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:58.192 03:05:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3220211 00:44:58.192 03:05:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 3220211 ']' 00:44:58.192 03:05:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 3220211 00:44:58.192 03:05:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:44:58.192 03:05:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:58.192 03:05:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3220211 00:44:58.192 03:05:06 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:58.192 03:05:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:58.192 03:05:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3220211' 00:44:58.192 killing process with pid 3220211 00:44:58.192 03:05:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 3220211 00:44:58.192 03:05:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 3220211 00:44:59.128 00:44:59.128 real 0m15.340s 00:44:59.128 user 0m59.700s 00:44:59.128 sys 0m2.896s 00:44:59.128 03:05:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:59.128 03:05:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:59.128 ************************************ 00:44:59.128 END TEST spdk_target_abort 00:44:59.128 ************************************ 00:44:59.128 03:05:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:44:59.128 03:05:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:44:59.128 03:05:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:59.128 03:05:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:59.128 ************************************ 00:44:59.128 START TEST kernel_target_abort 00:44:59.128 ************************************ 00:44:59.128 03:05:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:44:59.128 03:05:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:44:59.128 03:05:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:44:59.128 03:05:07 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:44:59.128 03:05:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:44:59.128 03:05:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:44:59.128 03:05:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:44:59.128 03:05:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:44:59.128 03:05:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:44:59.128 03:05:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:44:59.128 03:05:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:44:59.128 03:05:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:44:59.128 03:05:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:44:59.128 03:05:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:44:59.128 03:05:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:44:59.128 03:05:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:44:59.128 03:05:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:44:59.128 03:05:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:44:59.128 03:05:07 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@667 -- # local block nvme 00:44:59.128 03:05:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:44:59.128 03:05:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:44:59.128 03:05:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:44:59.128 03:05:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:45:00.062 Waiting for block devices as requested 00:45:00.321 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:45:00.321 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:45:00.579 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:45:00.580 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:45:00.580 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:45:00.580 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:45:00.839 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:45:00.839 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:45:00.839 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:45:00.839 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:45:01.098 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:45:01.098 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:45:01.098 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:45:01.098 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:45:01.357 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:45:01.357 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:45:01.357 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:45:01.924 03:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:45:01.924 03:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:45:01.924 03:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:45:01.924 03:05:10 
nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:45:01.924 03:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:45:01.924 03:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:45:01.924 03:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:45:01.924 03:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:45:01.924 03:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:45:01.924 No valid GPT data, bailing 00:45:01.924 03:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:45:01.924 03:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:45:01.924 03:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:45:01.924 03:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:45:01.924 03:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:45:01.924 03:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:45:01.924 03:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:45:01.924 03:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:45:01.924 03:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:45:01.924 03:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@695 -- # echo 1 00:45:01.924 03:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:45:01.924 03:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:45:01.924 03:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:45:01.924 03:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:45:01.924 03:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:45:01.924 03:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:45:01.924 03:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:45:01.924 03:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:45:01.924 00:45:01.924 Discovery Log Number of Records 2, Generation counter 2 00:45:01.924 =====Discovery Log Entry 0====== 00:45:01.924 trtype: tcp 00:45:01.924 adrfam: ipv4 00:45:01.924 subtype: current discovery subsystem 00:45:01.924 treq: not specified, sq flow control disable supported 00:45:01.924 portid: 1 00:45:01.924 trsvcid: 4420 00:45:01.924 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:45:01.924 traddr: 10.0.0.1 00:45:01.924 eflags: none 00:45:01.924 sectype: none 00:45:01.924 =====Discovery Log Entry 1====== 00:45:01.924 trtype: tcp 00:45:01.924 adrfam: ipv4 00:45:01.924 subtype: nvme subsystem 00:45:01.924 treq: not specified, sq flow control disable supported 00:45:01.924 portid: 1 00:45:01.924 trsvcid: 4420 00:45:01.924 subnqn: nqn.2016-06.io.spdk:testnqn 00:45:01.924 traddr: 10.0.0.1 00:45:01.924 eflags: none 00:45:01.924 sectype: none 00:45:01.924 03:05:10 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:45:01.924 03:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:45:01.924 03:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:45:01.924 03:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:45:01.924 03:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:45:01.924 03:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:45:01.924 03:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:45:01.924 03:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:45:01.924 03:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:45:01.924 03:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:45:01.924 03:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:45:01.924 03:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:45:01.924 03:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:45:01.924 03:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:45:01.924 03:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:45:01.924 03:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for 
r in trtype adrfam traddr trsvcid subnqn 00:45:01.924 03:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:45:01.924 03:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:45:01.924 03:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:45:01.924 03:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:45:01.925 03:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:45:05.205 Initializing NVMe Controllers 00:45:05.205 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:45:05.205 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:45:05.205 Initialization complete. Launching workers. 
00:45:05.205 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 36820, failed: 0 00:45:05.205 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36820, failed to submit 0 00:45:05.205 success 0, unsuccessful 36820, failed 0 00:45:05.205 03:05:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:45:05.205 03:05:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:45:08.485 Initializing NVMe Controllers 00:45:08.485 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:45:08.485 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:45:08.485 Initialization complete. Launching workers. 00:45:08.485 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 65527, failed: 0 00:45:08.485 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 16526, failed to submit 49001 00:45:08.485 success 0, unsuccessful 16526, failed 0 00:45:08.485 03:05:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:45:08.485 03:05:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:45:11.762 Initializing NVMe Controllers 00:45:11.762 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:45:11.762 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:45:11.762 Initialization complete. Launching workers. 
00:45:11.762 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 63439, failed: 0 00:45:11.762 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 15850, failed to submit 47589 00:45:11.762 success 0, unsuccessful 15850, failed 0 00:45:11.762 03:05:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:45:11.762 03:05:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:45:11.762 03:05:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:45:11.762 03:05:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:45:11.762 03:05:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:45:11.762 03:05:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:45:11.762 03:05:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:45:11.762 03:05:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:45:11.762 03:05:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:45:11.762 03:05:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:45:12.698 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:45:12.698 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:45:12.698 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:45:12.698 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:45:12.698 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:45:12.698 
0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:45:12.698 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:45:12.698 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:45:12.698 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:45:12.698 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:45:12.698 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:45:12.956 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:45:12.956 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:45:12.956 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:45:12.956 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:45:12.956 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:45:13.891 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:45:13.891 00:45:13.891 real 0m14.751s 00:45:13.891 user 0m7.265s 00:45:13.891 sys 0m3.364s 00:45:13.891 03:05:22 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:13.891 03:05:22 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:45:13.891 ************************************ 00:45:13.891 END TEST kernel_target_abort 00:45:13.891 ************************************ 00:45:13.891 03:05:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:45:13.891 03:05:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:45:13.891 03:05:22 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:45:13.891 03:05:22 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:45:13.891 03:05:22 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:45:13.891 03:05:22 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:45:13.891 03:05:22 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:45:13.891 03:05:22 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:45:13.891 rmmod nvme_tcp 00:45:13.891 rmmod nvme_fabrics 00:45:13.891 rmmod nvme_keyring 00:45:13.891 03:05:22 nvmf_abort_qd_sizes -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:45:13.891 03:05:22 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:45:13.891 03:05:22 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:45:13.891 03:05:22 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 3220211 ']' 00:45:13.891 03:05:22 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 3220211 00:45:13.891 03:05:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 3220211 ']' 00:45:13.891 03:05:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 3220211 00:45:13.891 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3220211) - No such process 00:45:13.891 03:05:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 3220211 is not found' 00:45:13.891 Process with pid 3220211 is not found 00:45:13.891 03:05:22 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:45:13.891 03:05:22 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:45:15.265 Waiting for block devices as requested 00:45:15.265 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:45:15.265 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:45:15.265 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:45:15.524 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:45:15.524 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:45:15.524 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:45:15.524 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:45:15.524 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:45:15.782 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:45:15.782 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:45:15.782 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:45:15.782 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:45:16.041 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:45:16.041 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:45:16.041 
0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:45:16.300 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:45:16.300 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:45:16.300 03:05:24 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:45:16.300 03:05:24 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:45:16.300 03:05:24 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:45:16.300 03:05:24 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:45:16.300 03:05:24 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:45:16.300 03:05:24 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:45:16.300 03:05:24 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:45:16.300 03:05:24 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:45:16.300 03:05:24 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:16.300 03:05:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:45:16.300 03:05:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:18.830 03:05:26 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:45:18.830 00:45:18.830 real 0m40.290s 00:45:18.830 user 1m9.351s 00:45:18.830 sys 0m9.742s 00:45:18.830 03:05:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:18.830 03:05:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:45:18.830 ************************************ 00:45:18.830 END TEST nvmf_abort_qd_sizes 00:45:18.830 ************************************ 00:45:18.830 03:05:26 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:45:18.830 03:05:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:45:18.830 03:05:26 -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:45:18.830 03:05:26 -- common/autotest_common.sh@10 -- # set +x 00:45:18.830 ************************************ 00:45:18.830 START TEST keyring_file 00:45:18.830 ************************************ 00:45:18.830 03:05:26 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:45:18.830 * Looking for test storage... 00:45:18.830 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:45:18.830 03:05:26 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:45:18.830 03:05:26 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:45:18.830 03:05:26 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:45:18.830 03:05:26 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:45:18.830 03:05:26 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:45:18.830 03:05:26 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:45:18.830 03:05:26 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:45:18.830 03:05:26 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:45:18.830 03:05:26 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:45:18.830 03:05:26 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:45:18.830 03:05:26 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:45:18.830 03:05:26 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:45:18.830 03:05:26 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:45:18.830 03:05:26 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:45:18.830 03:05:26 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:45:18.830 03:05:26 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:45:18.830 03:05:26 keyring_file -- scripts/common.sh@345 -- # : 1 00:45:18.830 03:05:26 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:45:18.830 03:05:26 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:45:18.830 03:05:26 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:45:18.830 03:05:26 keyring_file -- scripts/common.sh@353 -- # local d=1 00:45:18.830 03:05:26 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:45:18.830 03:05:26 keyring_file -- scripts/common.sh@355 -- # echo 1 00:45:18.830 03:05:26 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:45:18.830 03:05:26 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:45:18.830 03:05:26 keyring_file -- scripts/common.sh@353 -- # local d=2 00:45:18.830 03:05:26 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:45:18.830 03:05:26 keyring_file -- scripts/common.sh@355 -- # echo 2 00:45:18.830 03:05:26 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:45:18.830 03:05:26 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:45:18.830 03:05:26 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:45:18.830 03:05:26 keyring_file -- scripts/common.sh@368 -- # return 0 00:45:18.830 03:05:26 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:45:18.830 03:05:26 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:45:18.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:18.830 --rc genhtml_branch_coverage=1 00:45:18.830 --rc genhtml_function_coverage=1 00:45:18.830 --rc genhtml_legend=1 00:45:18.830 --rc geninfo_all_blocks=1 00:45:18.830 --rc geninfo_unexecuted_blocks=1 00:45:18.830 00:45:18.830 ' 00:45:18.830 03:05:26 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:45:18.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:18.830 --rc genhtml_branch_coverage=1 00:45:18.830 --rc genhtml_function_coverage=1 00:45:18.830 --rc genhtml_legend=1 00:45:18.830 --rc geninfo_all_blocks=1 00:45:18.830 --rc 
geninfo_unexecuted_blocks=1 00:45:18.830 00:45:18.830 ' 00:45:18.830 03:05:26 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:45:18.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:18.830 --rc genhtml_branch_coverage=1 00:45:18.830 --rc genhtml_function_coverage=1 00:45:18.830 --rc genhtml_legend=1 00:45:18.830 --rc geninfo_all_blocks=1 00:45:18.830 --rc geninfo_unexecuted_blocks=1 00:45:18.830 00:45:18.830 ' 00:45:18.830 03:05:26 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:45:18.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:18.830 --rc genhtml_branch_coverage=1 00:45:18.830 --rc genhtml_function_coverage=1 00:45:18.830 --rc genhtml_legend=1 00:45:18.830 --rc geninfo_all_blocks=1 00:45:18.830 --rc geninfo_unexecuted_blocks=1 00:45:18.830 00:45:18.830 ' 00:45:18.830 03:05:26 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:45:18.830 03:05:26 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:45:18.830 03:05:26 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:45:18.830 03:05:26 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:18.830 03:05:26 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:18.830 03:05:26 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:18.830 03:05:26 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:45:18.830 03:05:26 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:45:18.830 03:05:26 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:18.830 03:05:26 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:18.830 03:05:26 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:18.831 03:05:26 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:18.831 03:05:26 keyring_file -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:18.831 03:05:26 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:45:18.831 03:05:26 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:45:18.831 03:05:26 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:18.831 03:05:26 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:18.831 03:05:26 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:45:18.831 03:05:26 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:45:18.831 03:05:26 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:45:18.831 03:05:26 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:45:18.831 03:05:26 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:18.831 03:05:26 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:18.831 03:05:26 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:18.831 03:05:26 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:18.831 03:05:26 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:18.831 03:05:26 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:18.831 03:05:26 keyring_file -- paths/export.sh@5 -- # export PATH 00:45:18.831 03:05:26 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:18.831 03:05:26 keyring_file -- nvmf/common.sh@51 -- # : 0 00:45:18.831 03:05:26 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:45:18.831 03:05:26 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:45:18.831 03:05:26 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:45:18.831 03:05:26 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:18.831 03:05:26 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:45:18.831 03:05:26 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:45:18.831 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:45:18.831 03:05:26 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:45:18.831 03:05:26 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:45:18.831 03:05:26 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:45:18.831 03:05:26 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:45:18.831 03:05:26 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:45:18.831 03:05:26 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:45:18.831 03:05:26 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:45:18.831 03:05:26 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:45:18.831 03:05:26 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:45:18.831 03:05:26 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:45:18.831 03:05:26 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:45:18.831 03:05:26 keyring_file -- keyring/common.sh@17 -- # name=key0 00:45:18.831 03:05:26 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:45:18.831 03:05:26 keyring_file -- keyring/common.sh@17 -- # digest=0 00:45:18.831 03:05:26 keyring_file -- keyring/common.sh@18 -- # mktemp 00:45:18.831 03:05:26 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.qKrMYfbWcN 00:45:18.831 03:05:26 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:45:18.831 03:05:26 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:45:18.831 03:05:26 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:45:18.831 03:05:26 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:45:18.831 03:05:26 keyring_file -- nvmf/common.sh@732 
-- # key=00112233445566778899aabbccddeeff 00:45:18.831 03:05:26 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:45:18.831 03:05:26 keyring_file -- nvmf/common.sh@733 -- # python - 00:45:18.831 03:05:26 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.qKrMYfbWcN 00:45:18.831 03:05:26 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.qKrMYfbWcN 00:45:18.831 03:05:26 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.qKrMYfbWcN 00:45:18.831 03:05:26 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:45:18.831 03:05:26 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:45:18.831 03:05:26 keyring_file -- keyring/common.sh@17 -- # name=key1 00:45:18.831 03:05:26 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:45:18.831 03:05:26 keyring_file -- keyring/common.sh@17 -- # digest=0 00:45:18.831 03:05:26 keyring_file -- keyring/common.sh@18 -- # mktemp 00:45:18.831 03:05:27 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Y16BVkTqwp 00:45:18.831 03:05:27 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:45:18.831 03:05:27 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:45:18.831 03:05:27 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:45:18.831 03:05:27 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:45:18.831 03:05:27 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:45:18.831 03:05:27 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:45:18.831 03:05:27 keyring_file -- nvmf/common.sh@733 -- # python - 00:45:18.831 03:05:27 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Y16BVkTqwp 00:45:18.831 03:05:27 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Y16BVkTqwp 00:45:18.831 03:05:27 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.Y16BVkTqwp 
00:45:18.831 03:05:27 keyring_file -- keyring/file.sh@30 -- # tgtpid=3226512 00:45:18.831 03:05:27 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:45:18.831 03:05:27 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3226512 00:45:18.831 03:05:27 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3226512 ']' 00:45:18.831 03:05:27 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:18.831 03:05:27 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:45:18.831 03:05:27 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:18.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:18.831 03:05:27 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:45:18.831 03:05:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:45:18.831 [2024-11-17 03:05:27.142484] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:45:18.831 [2024-11-17 03:05:27.142625] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3226512 ] 00:45:19.089 [2024-11-17 03:05:27.290645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:19.089 [2024-11-17 03:05:27.431000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:45:20.022 03:05:28 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:45:20.022 03:05:28 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:45:20.022 03:05:28 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:45:20.022 03:05:28 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:20.022 03:05:28 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:45:20.022 [2024-11-17 03:05:28.352221] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:45:20.022 null0 00:45:20.022 [2024-11-17 03:05:28.384237] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:45:20.022 [2024-11-17 03:05:28.384852] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:45:20.022 03:05:28 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:20.022 03:05:28 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:45:20.022 03:05:28 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:45:20.022 03:05:28 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:45:20.022 03:05:28 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:45:20.022 03:05:28 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:45:20.022 03:05:28 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:45:20.022 03:05:28 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:20.022 03:05:28 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:45:20.022 03:05:28 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:20.022 03:05:28 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:45:20.022 [2024-11-17 03:05:28.408270] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:45:20.022 request: 00:45:20.022 { 00:45:20.022 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:45:20.022 "secure_channel": false, 00:45:20.022 "listen_address": { 00:45:20.022 "trtype": "tcp", 00:45:20.022 "traddr": "127.0.0.1", 00:45:20.022 "trsvcid": "4420" 00:45:20.022 }, 00:45:20.022 "method": "nvmf_subsystem_add_listener", 00:45:20.022 "req_id": 1 00:45:20.022 } 00:45:20.022 Got JSON-RPC error response 00:45:20.022 response: 00:45:20.022 { 00:45:20.022 "code": -32602, 00:45:20.022 "message": "Invalid parameters" 00:45:20.022 } 00:45:20.023 03:05:28 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:45:20.023 03:05:28 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:45:20.023 03:05:28 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:45:20.023 03:05:28 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:45:20.023 03:05:28 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:45:20.023 03:05:28 keyring_file -- keyring/file.sh@47 -- # bperfpid=3226653 00:45:20.023 03:05:28 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:45:20.023 03:05:28 keyring_file -- keyring/file.sh@49 -- # waitforlisten 3226653 /var/tmp/bperf.sock 00:45:20.023 03:05:28 
keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3226653 ']' 00:45:20.023 03:05:28 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:45:20.023 03:05:28 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:45:20.023 03:05:28 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:45:20.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:45:20.023 03:05:28 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:45:20.023 03:05:28 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:45:20.281 [2024-11-17 03:05:28.498872] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:45:20.281 [2024-11-17 03:05:28.499010] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3226653 ] 00:45:20.281 [2024-11-17 03:05:28.638198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:20.539 [2024-11-17 03:05:28.765922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:45:21.104 03:05:29 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:45:21.104 03:05:29 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:45:21.104 03:05:29 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.qKrMYfbWcN 00:45:21.104 03:05:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.qKrMYfbWcN 00:45:21.361 03:05:29 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.Y16BVkTqwp 00:45:21.361 03:05:29 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.Y16BVkTqwp 00:45:21.619 03:05:29 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:45:21.619 03:05:29 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:45:21.619 03:05:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:21.619 03:05:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:21.619 03:05:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:21.878 03:05:30 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.qKrMYfbWcN == \/\t\m\p\/\t\m\p\.\q\K\r\M\Y\f\b\W\c\N ]] 00:45:21.878 03:05:30 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:45:21.878 03:05:30 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:45:21.878 03:05:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:21.878 03:05:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:45:21.878 03:05:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:22.136 03:05:30 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.Y16BVkTqwp == \/\t\m\p\/\t\m\p\.\Y\1\6\B\V\k\T\q\w\p ]] 00:45:22.136 03:05:30 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:45:22.136 03:05:30 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:22.136 03:05:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:22.136 03:05:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:22.136 03:05:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:22.136 03:05:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
00:45:22.395 03:05:30 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:45:22.395 03:05:30 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:45:22.395 03:05:30 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:45:22.395 03:05:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:22.395 03:05:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:22.395 03:05:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:22.395 03:05:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:45:22.653 03:05:31 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:45:22.653 03:05:31 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:22.653 03:05:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:22.912 [2024-11-17 03:05:31.361597] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:45:23.169 nvme0n1 00:45:23.170 03:05:31 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:45:23.170 03:05:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:23.170 03:05:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:23.170 03:05:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:23.170 03:05:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:23.170 03:05:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == 
"key0")' 00:45:23.428 03:05:31 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:45:23.428 03:05:31 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:45:23.428 03:05:31 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:45:23.428 03:05:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:23.428 03:05:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:23.428 03:05:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:23.428 03:05:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:45:23.686 03:05:32 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:45:23.686 03:05:32 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:45:23.686 Running I/O for 1 seconds... 00:45:25.059 6467.00 IOPS, 25.26 MiB/s 00:45:25.059 Latency(us) 00:45:25.059 [2024-11-17T02:05:33.519Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:25.059 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:45:25.059 nvme0n1 : 1.01 6516.92 25.46 0.00 0.00 19547.04 9126.49 33204.91 00:45:25.059 [2024-11-17T02:05:33.519Z] =================================================================================================================== 00:45:25.059 [2024-11-17T02:05:33.519Z] Total : 6516.92 25.46 0.00 0.00 19547.04 9126.49 33204.91 00:45:25.059 { 00:45:25.059 "results": [ 00:45:25.059 { 00:45:25.059 "job": "nvme0n1", 00:45:25.059 "core_mask": "0x2", 00:45:25.060 "workload": "randrw", 00:45:25.060 "percentage": 50, 00:45:25.060 "status": "finished", 00:45:25.060 "queue_depth": 128, 00:45:25.060 "io_size": 4096, 00:45:25.060 "runtime": 1.012135, 00:45:25.060 "iops": 6516.917209660766, 00:45:25.060 "mibps": 25.45670785023737, 00:45:25.060 
"io_failed": 0, 00:45:25.060 "io_timeout": 0, 00:45:25.060 "avg_latency_us": 19547.036021831416, 00:45:25.060 "min_latency_us": 9126.494814814814, 00:45:25.060 "max_latency_us": 33204.90666666667 00:45:25.060 } 00:45:25.060 ], 00:45:25.060 "core_count": 1 00:45:25.060 } 00:45:25.060 03:05:33 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:45:25.060 03:05:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:45:25.060 03:05:33 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:45:25.060 03:05:33 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:25.060 03:05:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:25.060 03:05:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:25.060 03:05:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:25.060 03:05:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:25.318 03:05:33 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:45:25.318 03:05:33 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:45:25.318 03:05:33 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:45:25.318 03:05:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:25.318 03:05:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:25.318 03:05:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:25.318 03:05:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:45:25.576 03:05:33 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:45:25.576 03:05:33 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:45:25.576 03:05:33 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:45:25.576 03:05:33 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:45:25.576 03:05:33 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:45:25.576 03:05:33 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:25.576 03:05:33 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:45:25.576 03:05:33 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:25.576 03:05:33 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:45:25.576 03:05:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:45:25.835 [2024-11-17 03:05:34.231402] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:45:25.835 [2024-11-17 03:05:34.231522] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7780 (107): Transport endpoint is not connected 00:45:25.835 [2024-11-17 03:05:34.232492] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7780 (9): Bad file descriptor 00:45:25.835 [2024-11-17 03:05:34.233489] 
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:45:25.835 [2024-11-17 03:05:34.233525] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:45:25.835 [2024-11-17 03:05:34.233550] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:45:25.835 [2024-11-17 03:05:34.233577] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 00:45:25.835 request: 00:45:25.835 { 00:45:25.835 "name": "nvme0", 00:45:25.835 "trtype": "tcp", 00:45:25.835 "traddr": "127.0.0.1", 00:45:25.835 "adrfam": "ipv4", 00:45:25.835 "trsvcid": "4420", 00:45:25.835 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:25.835 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:25.835 "prchk_reftag": false, 00:45:25.835 "prchk_guard": false, 00:45:25.835 "hdgst": false, 00:45:25.835 "ddgst": false, 00:45:25.835 "psk": "key1", 00:45:25.835 "allow_unrecognized_csi": false, 00:45:25.835 "method": "bdev_nvme_attach_controller", 00:45:25.835 "req_id": 1 00:45:25.835 } 00:45:25.835 Got JSON-RPC error response 00:45:25.835 response: 00:45:25.835 { 00:45:25.835 "code": -5, 00:45:25.835 "message": "Input/output error" 00:45:25.835 } 00:45:25.835 03:05:34 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:45:25.835 03:05:34 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:45:25.835 03:05:34 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:45:25.835 03:05:34 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:45:25.835 03:05:34 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:45:25.835 03:05:34 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:25.835 03:05:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:25.835 03:05:34 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:45:25.835 03:05:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:25.835 03:05:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:26.093 03:05:34 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:45:26.093 03:05:34 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:45:26.093 03:05:34 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:45:26.093 03:05:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:26.093 03:05:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:26.093 03:05:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:45:26.093 03:05:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:26.405 03:05:34 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:45:26.405 03:05:34 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:45:26.405 03:05:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:45:26.711 03:05:35 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:45:26.711 03:05:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:45:26.993 03:05:35 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:45:26.993 03:05:35 keyring_file -- keyring/file.sh@78 -- # jq length 00:45:26.994 03:05:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:27.252 03:05:35 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 
)) 00:45:27.252 03:05:35 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.qKrMYfbWcN 00:45:27.252 03:05:35 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.qKrMYfbWcN 00:45:27.252 03:05:35 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:45:27.252 03:05:35 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.qKrMYfbWcN 00:45:27.252 03:05:35 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:45:27.252 03:05:35 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:27.252 03:05:35 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:45:27.252 03:05:35 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:27.252 03:05:35 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.qKrMYfbWcN 00:45:27.252 03:05:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.qKrMYfbWcN 00:45:27.511 [2024-11-17 03:05:35.887674] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.qKrMYfbWcN': 0100660 00:45:27.511 [2024-11-17 03:05:35.887732] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:45:27.511 request: 00:45:27.511 { 00:45:27.511 "name": "key0", 00:45:27.511 "path": "/tmp/tmp.qKrMYfbWcN", 00:45:27.511 "method": "keyring_file_add_key", 00:45:27.511 "req_id": 1 00:45:27.511 } 00:45:27.511 Got JSON-RPC error response 00:45:27.511 response: 00:45:27.511 { 00:45:27.511 "code": -1, 00:45:27.511 "message": "Operation not permitted" 00:45:27.511 } 00:45:27.511 03:05:35 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:45:27.511 03:05:35 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:45:27.511 03:05:35 
keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:45:27.511 03:05:35 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:45:27.511 03:05:35 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.qKrMYfbWcN 00:45:27.511 03:05:35 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.qKrMYfbWcN 00:45:27.511 03:05:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.qKrMYfbWcN 00:45:27.769 03:05:36 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.qKrMYfbWcN 00:45:27.769 03:05:36 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:45:27.769 03:05:36 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:27.769 03:05:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:27.769 03:05:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:27.769 03:05:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:27.769 03:05:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:28.027 03:05:36 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:45:28.027 03:05:36 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:28.027 03:05:36 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:45:28.027 03:05:36 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:28.027 03:05:36 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:45:28.027 03:05:36 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:28.027 03:05:36 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:45:28.027 03:05:36 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:28.027 03:05:36 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:28.027 03:05:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:28.285 [2024-11-17 03:05:36.697990] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.qKrMYfbWcN': No such file or directory 00:45:28.285 [2024-11-17 03:05:36.698046] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:45:28.285 [2024-11-17 03:05:36.698088] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:45:28.285 [2024-11-17 03:05:36.698141] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:45:28.285 [2024-11-17 03:05:36.698162] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:45:28.285 [2024-11-17 03:05:36.698181] bdev_nvme.c:6669:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:45:28.285 request: 00:45:28.285 { 00:45:28.285 "name": "nvme0", 00:45:28.285 "trtype": "tcp", 00:45:28.285 "traddr": "127.0.0.1", 00:45:28.285 "adrfam": "ipv4", 00:45:28.285 "trsvcid": "4420", 00:45:28.285 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:28.285 "hostnqn": 
"nqn.2016-06.io.spdk:host0", 00:45:28.285 "prchk_reftag": false, 00:45:28.285 "prchk_guard": false, 00:45:28.285 "hdgst": false, 00:45:28.285 "ddgst": false, 00:45:28.285 "psk": "key0", 00:45:28.285 "allow_unrecognized_csi": false, 00:45:28.285 "method": "bdev_nvme_attach_controller", 00:45:28.285 "req_id": 1 00:45:28.285 } 00:45:28.285 Got JSON-RPC error response 00:45:28.285 response: 00:45:28.285 { 00:45:28.285 "code": -19, 00:45:28.285 "message": "No such device" 00:45:28.285 } 00:45:28.285 03:05:36 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:45:28.285 03:05:36 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:45:28.285 03:05:36 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:45:28.285 03:05:36 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:45:28.285 03:05:36 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:45:28.285 03:05:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:45:28.543 03:05:36 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:45:28.543 03:05:36 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:45:28.543 03:05:36 keyring_file -- keyring/common.sh@17 -- # name=key0 00:45:28.543 03:05:36 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:45:28.543 03:05:36 keyring_file -- keyring/common.sh@17 -- # digest=0 00:45:28.543 03:05:36 keyring_file -- keyring/common.sh@18 -- # mktemp 00:45:28.543 03:05:36 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.QsLIP4BsFY 00:45:28.543 03:05:36 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:45:28.543 03:05:36 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:45:28.543 03:05:36 keyring_file -- 
nvmf/common.sh@730 -- # local prefix key digest 00:45:28.543 03:05:36 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:45:28.543 03:05:36 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:45:28.543 03:05:36 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:45:28.543 03:05:36 keyring_file -- nvmf/common.sh@733 -- # python - 00:45:28.801 03:05:37 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.QsLIP4BsFY 00:45:28.801 03:05:37 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.QsLIP4BsFY 00:45:28.801 03:05:37 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.QsLIP4BsFY 00:45:28.801 03:05:37 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.QsLIP4BsFY 00:45:28.801 03:05:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.QsLIP4BsFY 00:45:29.059 03:05:37 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:29.059 03:05:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:29.317 nvme0n1 00:45:29.317 03:05:37 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:45:29.317 03:05:37 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:29.317 03:05:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:29.317 03:05:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:29.317 03:05:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:29.317 
03:05:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:29.574 03:05:37 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:45:29.574 03:05:37 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:45:29.574 03:05:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:45:29.832 03:05:38 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:45:29.832 03:05:38 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:45:29.832 03:05:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:29.832 03:05:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:29.832 03:05:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:30.090 03:05:38 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:45:30.090 03:05:38 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:45:30.090 03:05:38 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:30.090 03:05:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:30.090 03:05:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:30.090 03:05:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:30.090 03:05:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:30.348 03:05:38 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:45:30.348 03:05:38 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:45:30.348 03:05:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller 
nvme0 00:45:30.606 03:05:39 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:45:30.606 03:05:39 keyring_file -- keyring/file.sh@105 -- # jq length 00:45:30.606 03:05:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:30.864 03:05:39 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:45:30.864 03:05:39 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.QsLIP4BsFY 00:45:30.864 03:05:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.QsLIP4BsFY 00:45:31.121 03:05:39 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.Y16BVkTqwp 00:45:31.121 03:05:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.Y16BVkTqwp 00:45:31.687 03:05:39 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:31.687 03:05:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:31.944 nvme0n1 00:45:31.944 03:05:40 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:45:31.944 03:05:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:45:32.202 03:05:40 keyring_file -- keyring/file.sh@113 -- # config='{ 00:45:32.202 "subsystems": [ 00:45:32.202 { 00:45:32.202 "subsystem": "keyring", 00:45:32.202 
"config": [ 00:45:32.202 { 00:45:32.202 "method": "keyring_file_add_key", 00:45:32.202 "params": { 00:45:32.202 "name": "key0", 00:45:32.202 "path": "/tmp/tmp.QsLIP4BsFY" 00:45:32.202 } 00:45:32.202 }, 00:45:32.202 { 00:45:32.202 "method": "keyring_file_add_key", 00:45:32.202 "params": { 00:45:32.202 "name": "key1", 00:45:32.202 "path": "/tmp/tmp.Y16BVkTqwp" 00:45:32.202 } 00:45:32.202 } 00:45:32.202 ] 00:45:32.202 }, 00:45:32.202 { 00:45:32.202 "subsystem": "iobuf", 00:45:32.202 "config": [ 00:45:32.202 { 00:45:32.202 "method": "iobuf_set_options", 00:45:32.202 "params": { 00:45:32.202 "small_pool_count": 8192, 00:45:32.202 "large_pool_count": 1024, 00:45:32.202 "small_bufsize": 8192, 00:45:32.202 "large_bufsize": 135168, 00:45:32.202 "enable_numa": false 00:45:32.202 } 00:45:32.202 } 00:45:32.202 ] 00:45:32.202 }, 00:45:32.202 { 00:45:32.202 "subsystem": "sock", 00:45:32.202 "config": [ 00:45:32.202 { 00:45:32.202 "method": "sock_set_default_impl", 00:45:32.202 "params": { 00:45:32.202 "impl_name": "posix" 00:45:32.202 } 00:45:32.202 }, 00:45:32.202 { 00:45:32.202 "method": "sock_impl_set_options", 00:45:32.202 "params": { 00:45:32.202 "impl_name": "ssl", 00:45:32.202 "recv_buf_size": 4096, 00:45:32.202 "send_buf_size": 4096, 00:45:32.202 "enable_recv_pipe": true, 00:45:32.202 "enable_quickack": false, 00:45:32.202 "enable_placement_id": 0, 00:45:32.202 "enable_zerocopy_send_server": true, 00:45:32.202 "enable_zerocopy_send_client": false, 00:45:32.202 "zerocopy_threshold": 0, 00:45:32.203 "tls_version": 0, 00:45:32.203 "enable_ktls": false 00:45:32.203 } 00:45:32.203 }, 00:45:32.203 { 00:45:32.203 "method": "sock_impl_set_options", 00:45:32.203 "params": { 00:45:32.203 "impl_name": "posix", 00:45:32.203 "recv_buf_size": 2097152, 00:45:32.203 "send_buf_size": 2097152, 00:45:32.203 "enable_recv_pipe": true, 00:45:32.203 "enable_quickack": false, 00:45:32.203 "enable_placement_id": 0, 00:45:32.203 "enable_zerocopy_send_server": true, 00:45:32.203 
"enable_zerocopy_send_client": false, 00:45:32.203 "zerocopy_threshold": 0, 00:45:32.203 "tls_version": 0, 00:45:32.203 "enable_ktls": false 00:45:32.203 } 00:45:32.203 } 00:45:32.203 ] 00:45:32.203 }, 00:45:32.203 { 00:45:32.203 "subsystem": "vmd", 00:45:32.203 "config": [] 00:45:32.203 }, 00:45:32.203 { 00:45:32.203 "subsystem": "accel", 00:45:32.203 "config": [ 00:45:32.203 { 00:45:32.203 "method": "accel_set_options", 00:45:32.203 "params": { 00:45:32.203 "small_cache_size": 128, 00:45:32.203 "large_cache_size": 16, 00:45:32.203 "task_count": 2048, 00:45:32.203 "sequence_count": 2048, 00:45:32.203 "buf_count": 2048 00:45:32.203 } 00:45:32.203 } 00:45:32.203 ] 00:45:32.203 }, 00:45:32.203 { 00:45:32.203 "subsystem": "bdev", 00:45:32.203 "config": [ 00:45:32.203 { 00:45:32.203 "method": "bdev_set_options", 00:45:32.203 "params": { 00:45:32.203 "bdev_io_pool_size": 65535, 00:45:32.203 "bdev_io_cache_size": 256, 00:45:32.203 "bdev_auto_examine": true, 00:45:32.203 "iobuf_small_cache_size": 128, 00:45:32.203 "iobuf_large_cache_size": 16 00:45:32.203 } 00:45:32.203 }, 00:45:32.203 { 00:45:32.203 "method": "bdev_raid_set_options", 00:45:32.203 "params": { 00:45:32.203 "process_window_size_kb": 1024, 00:45:32.203 "process_max_bandwidth_mb_sec": 0 00:45:32.203 } 00:45:32.203 }, 00:45:32.203 { 00:45:32.203 "method": "bdev_iscsi_set_options", 00:45:32.203 "params": { 00:45:32.203 "timeout_sec": 30 00:45:32.203 } 00:45:32.203 }, 00:45:32.203 { 00:45:32.203 "method": "bdev_nvme_set_options", 00:45:32.203 "params": { 00:45:32.203 "action_on_timeout": "none", 00:45:32.203 "timeout_us": 0, 00:45:32.203 "timeout_admin_us": 0, 00:45:32.203 "keep_alive_timeout_ms": 10000, 00:45:32.203 "arbitration_burst": 0, 00:45:32.203 "low_priority_weight": 0, 00:45:32.203 "medium_priority_weight": 0, 00:45:32.203 "high_priority_weight": 0, 00:45:32.203 "nvme_adminq_poll_period_us": 10000, 00:45:32.203 "nvme_ioq_poll_period_us": 0, 00:45:32.203 "io_queue_requests": 512, 00:45:32.203 
"delay_cmd_submit": true, 00:45:32.203 "transport_retry_count": 4, 00:45:32.203 "bdev_retry_count": 3, 00:45:32.203 "transport_ack_timeout": 0, 00:45:32.203 "ctrlr_loss_timeout_sec": 0, 00:45:32.203 "reconnect_delay_sec": 0, 00:45:32.203 "fast_io_fail_timeout_sec": 0, 00:45:32.203 "disable_auto_failback": false, 00:45:32.203 "generate_uuids": false, 00:45:32.203 "transport_tos": 0, 00:45:32.203 "nvme_error_stat": false, 00:45:32.203 "rdma_srq_size": 0, 00:45:32.203 "io_path_stat": false, 00:45:32.203 "allow_accel_sequence": false, 00:45:32.203 "rdma_max_cq_size": 0, 00:45:32.203 "rdma_cm_event_timeout_ms": 0, 00:45:32.203 "dhchap_digests": [ 00:45:32.203 "sha256", 00:45:32.203 "sha384", 00:45:32.203 "sha512" 00:45:32.203 ], 00:45:32.203 "dhchap_dhgroups": [ 00:45:32.203 "null", 00:45:32.203 "ffdhe2048", 00:45:32.203 "ffdhe3072", 00:45:32.203 "ffdhe4096", 00:45:32.203 "ffdhe6144", 00:45:32.203 "ffdhe8192" 00:45:32.203 ] 00:45:32.203 } 00:45:32.203 }, 00:45:32.203 { 00:45:32.203 "method": "bdev_nvme_attach_controller", 00:45:32.203 "params": { 00:45:32.203 "name": "nvme0", 00:45:32.203 "trtype": "TCP", 00:45:32.203 "adrfam": "IPv4", 00:45:32.203 "traddr": "127.0.0.1", 00:45:32.203 "trsvcid": "4420", 00:45:32.203 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:32.203 "prchk_reftag": false, 00:45:32.203 "prchk_guard": false, 00:45:32.203 "ctrlr_loss_timeout_sec": 0, 00:45:32.203 "reconnect_delay_sec": 0, 00:45:32.203 "fast_io_fail_timeout_sec": 0, 00:45:32.203 "psk": "key0", 00:45:32.203 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:32.203 "hdgst": false, 00:45:32.203 "ddgst": false, 00:45:32.203 "multipath": "multipath" 00:45:32.203 } 00:45:32.203 }, 00:45:32.203 { 00:45:32.203 "method": "bdev_nvme_set_hotplug", 00:45:32.203 "params": { 00:45:32.203 "period_us": 100000, 00:45:32.203 "enable": false 00:45:32.203 } 00:45:32.203 }, 00:45:32.203 { 00:45:32.203 "method": "bdev_wait_for_examine" 00:45:32.203 } 00:45:32.203 ] 00:45:32.203 }, 00:45:32.203 { 00:45:32.203 
"subsystem": "nbd", 00:45:32.203 "config": [] 00:45:32.203 } 00:45:32.203 ] 00:45:32.203 }' 00:45:32.203 03:05:40 keyring_file -- keyring/file.sh@115 -- # killprocess 3226653 00:45:32.203 03:05:40 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3226653 ']' 00:45:32.203 03:05:40 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3226653 00:45:32.203 03:05:40 keyring_file -- common/autotest_common.sh@959 -- # uname 00:45:32.203 03:05:40 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:45:32.203 03:05:40 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3226653 00:45:32.203 03:05:40 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:45:32.203 03:05:40 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:45:32.203 03:05:40 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3226653' 00:45:32.203 killing process with pid 3226653 00:45:32.203 03:05:40 keyring_file -- common/autotest_common.sh@973 -- # kill 3226653 00:45:32.203 Received shutdown signal, test time was about 1.000000 seconds 00:45:32.203 00:45:32.203 Latency(us) 00:45:32.203 [2024-11-17T02:05:40.663Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:32.203 [2024-11-17T02:05:40.663Z] =================================================================================================================== 00:45:32.203 [2024-11-17T02:05:40.663Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:45:32.203 03:05:40 keyring_file -- common/autotest_common.sh@978 -- # wait 3226653 00:45:33.139 03:05:41 keyring_file -- keyring/file.sh@118 -- # bperfpid=3228262 00:45:33.139 03:05:41 keyring_file -- keyring/file.sh@120 -- # waitforlisten 3228262 /var/tmp/bperf.sock 00:45:33.139 03:05:41 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3228262 ']' 00:45:33.139 03:05:41 keyring_file -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:45:33.139 03:05:41 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:45:33.139 03:05:41 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:45:33.139 03:05:41 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:45:33.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:45:33.139 03:05:41 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:45:33.139 03:05:41 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:45:33.139 03:05:41 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:45:33.139 "subsystems": [ 00:45:33.139 { 00:45:33.139 "subsystem": "keyring", 00:45:33.139 "config": [ 00:45:33.139 { 00:45:33.139 "method": "keyring_file_add_key", 00:45:33.139 "params": { 00:45:33.139 "name": "key0", 00:45:33.139 "path": "/tmp/tmp.QsLIP4BsFY" 00:45:33.139 } 00:45:33.139 }, 00:45:33.139 { 00:45:33.139 "method": "keyring_file_add_key", 00:45:33.139 "params": { 00:45:33.139 "name": "key1", 00:45:33.139 "path": "/tmp/tmp.Y16BVkTqwp" 00:45:33.139 } 00:45:33.139 } 00:45:33.139 ] 00:45:33.139 }, 00:45:33.139 { 00:45:33.139 "subsystem": "iobuf", 00:45:33.139 "config": [ 00:45:33.139 { 00:45:33.139 "method": "iobuf_set_options", 00:45:33.139 "params": { 00:45:33.139 "small_pool_count": 8192, 00:45:33.139 "large_pool_count": 1024, 00:45:33.139 "small_bufsize": 8192, 00:45:33.139 "large_bufsize": 135168, 00:45:33.139 "enable_numa": false 00:45:33.139 } 00:45:33.139 } 00:45:33.139 ] 00:45:33.139 }, 00:45:33.139 { 00:45:33.139 "subsystem": "sock", 00:45:33.139 "config": [ 00:45:33.139 { 00:45:33.139 "method": "sock_set_default_impl", 00:45:33.139 "params": { 00:45:33.139 "impl_name": "posix" 00:45:33.139 } 00:45:33.139 }, 
00:45:33.139 { 00:45:33.139 "method": "sock_impl_set_options", 00:45:33.139 "params": { 00:45:33.139 "impl_name": "ssl", 00:45:33.139 "recv_buf_size": 4096, 00:45:33.139 "send_buf_size": 4096, 00:45:33.139 "enable_recv_pipe": true, 00:45:33.139 "enable_quickack": false, 00:45:33.139 "enable_placement_id": 0, 00:45:33.139 "enable_zerocopy_send_server": true, 00:45:33.139 "enable_zerocopy_send_client": false, 00:45:33.139 "zerocopy_threshold": 0, 00:45:33.139 "tls_version": 0, 00:45:33.139 "enable_ktls": false 00:45:33.139 } 00:45:33.139 }, 00:45:33.139 { 00:45:33.139 "method": "sock_impl_set_options", 00:45:33.139 "params": { 00:45:33.139 "impl_name": "posix", 00:45:33.139 "recv_buf_size": 2097152, 00:45:33.139 "send_buf_size": 2097152, 00:45:33.139 "enable_recv_pipe": true, 00:45:33.139 "enable_quickack": false, 00:45:33.139 "enable_placement_id": 0, 00:45:33.139 "enable_zerocopy_send_server": true, 00:45:33.139 "enable_zerocopy_send_client": false, 00:45:33.139 "zerocopy_threshold": 0, 00:45:33.139 "tls_version": 0, 00:45:33.139 "enable_ktls": false 00:45:33.139 } 00:45:33.139 } 00:45:33.139 ] 00:45:33.139 }, 00:45:33.139 { 00:45:33.139 "subsystem": "vmd", 00:45:33.139 "config": [] 00:45:33.139 }, 00:45:33.139 { 00:45:33.139 "subsystem": "accel", 00:45:33.139 "config": [ 00:45:33.139 { 00:45:33.139 "method": "accel_set_options", 00:45:33.139 "params": { 00:45:33.139 "small_cache_size": 128, 00:45:33.139 "large_cache_size": 16, 00:45:33.139 "task_count": 2048, 00:45:33.139 "sequence_count": 2048, 00:45:33.139 "buf_count": 2048 00:45:33.139 } 00:45:33.139 } 00:45:33.139 ] 00:45:33.139 }, 00:45:33.139 { 00:45:33.139 "subsystem": "bdev", 00:45:33.139 "config": [ 00:45:33.139 { 00:45:33.139 "method": "bdev_set_options", 00:45:33.139 "params": { 00:45:33.139 "bdev_io_pool_size": 65535, 00:45:33.139 "bdev_io_cache_size": 256, 00:45:33.139 "bdev_auto_examine": true, 00:45:33.139 "iobuf_small_cache_size": 128, 00:45:33.139 "iobuf_large_cache_size": 16 00:45:33.139 } 
00:45:33.139 }, 00:45:33.139 { 00:45:33.139 "method": "bdev_raid_set_options", 00:45:33.139 "params": { 00:45:33.139 "process_window_size_kb": 1024, 00:45:33.139 "process_max_bandwidth_mb_sec": 0 00:45:33.139 } 00:45:33.139 }, 00:45:33.139 { 00:45:33.140 "method": "bdev_iscsi_set_options", 00:45:33.140 "params": { 00:45:33.140 "timeout_sec": 30 00:45:33.140 } 00:45:33.140 }, 00:45:33.140 { 00:45:33.140 "method": "bdev_nvme_set_options", 00:45:33.140 "params": { 00:45:33.140 "action_on_timeout": "none", 00:45:33.140 "timeout_us": 0, 00:45:33.140 "timeout_admin_us": 0, 00:45:33.140 "keep_alive_timeout_ms": 10000, 00:45:33.140 "arbitration_burst": 0, 00:45:33.140 "low_priority_weight": 0, 00:45:33.140 "medium_priority_weight": 0, 00:45:33.140 "high_priority_weight": 0, 00:45:33.140 "nvme_adminq_poll_period_us": 10000, 00:45:33.140 "nvme_ioq_poll_period_us": 0, 00:45:33.140 "io_queue_requests": 512, 00:45:33.140 "delay_cmd_submit": true, 00:45:33.140 "transport_retry_count": 4, 00:45:33.140 "bdev_retry_count": 3, 00:45:33.140 "transport_ack_timeout": 0, 00:45:33.140 "ctrlr_loss_timeout_sec": 0, 00:45:33.140 "reconnect_delay_sec": 0, 00:45:33.140 "fast_io_fail_timeout_sec": 0, 00:45:33.140 "disable_auto_failback": false, 00:45:33.140 "generate_uuids": false, 00:45:33.140 "transport_tos": 0, 00:45:33.140 "nvme_error_stat": false, 00:45:33.140 "rdma_srq_size": 0, 00:45:33.140 "io_path_stat": false, 00:45:33.140 "allow_accel_sequence": false, 00:45:33.140 "rdma_max_cq_size": 0, 00:45:33.140 "rdma_cm_event_timeout_ms": 0, 00:45:33.140 "dhchap_digests": [ 00:45:33.140 "sha256", 00:45:33.140 "sha384", 00:45:33.140 "sha512" 00:45:33.140 ], 00:45:33.140 "dhchap_dhgroups": [ 00:45:33.140 "null", 00:45:33.140 "ffdhe2048", 00:45:33.140 "ffdhe3072", 00:45:33.140 "ffdhe4096", 00:45:33.140 "ffdhe6144", 00:45:33.140 "ffdhe8192" 00:45:33.140 ] 00:45:33.140 } 00:45:33.140 }, 00:45:33.140 { 00:45:33.140 "method": "bdev_nvme_attach_controller", 00:45:33.140 "params": { 00:45:33.140 
"name": "nvme0", 00:45:33.140 "trtype": "TCP", 00:45:33.140 "adrfam": "IPv4", 00:45:33.140 "traddr": "127.0.0.1", 00:45:33.140 "trsvcid": "4420", 00:45:33.140 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:33.140 "prchk_reftag": false, 00:45:33.140 "prchk_guard": false, 00:45:33.140 "ctrlr_loss_timeout_sec": 0, 00:45:33.140 "reconnect_delay_sec": 0, 00:45:33.140 "fast_io_fail_timeout_sec": 0, 00:45:33.140 "psk": "key0", 00:45:33.140 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:33.140 "hdgst": false, 00:45:33.140 "ddgst": false, 00:45:33.140 "multipath": "multipath" 00:45:33.140 } 00:45:33.140 }, 00:45:33.140 { 00:45:33.140 "method": "bdev_nvme_set_hotplug", 00:45:33.140 "params": { 00:45:33.140 "period_us": 100000, 00:45:33.140 "enable": false 00:45:33.140 } 00:45:33.140 }, 00:45:33.140 { 00:45:33.140 "method": "bdev_wait_for_examine" 00:45:33.140 } 00:45:33.140 ] 00:45:33.140 }, 00:45:33.140 { 00:45:33.140 "subsystem": "nbd", 00:45:33.140 "config": [] 00:45:33.140 } 00:45:33.140 ] 00:45:33.140 }' 00:45:33.140 [2024-11-17 03:05:41.553371] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:45:33.140 [2024-11-17 03:05:41.553540] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3228262 ] 00:45:33.399 [2024-11-17 03:05:41.698520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:33.399 [2024-11-17 03:05:41.830311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:45:33.965 [2024-11-17 03:05:42.285261] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:45:34.223 03:05:42 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:45:34.223 03:05:42 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:45:34.223 03:05:42 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:45:34.223 03:05:42 keyring_file -- keyring/file.sh@121 -- # jq length 00:45:34.223 03:05:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:34.481 03:05:42 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:45:34.481 03:05:42 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:45:34.481 03:05:42 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:34.481 03:05:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:34.481 03:05:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:34.481 03:05:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:34.481 03:05:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:34.739 03:05:43 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:45:34.739 03:05:43 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:45:34.739 03:05:43 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:45:34.739 03:05:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:34.739 03:05:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:34.739 03:05:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:34.739 03:05:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:45:34.997 03:05:43 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:45:34.997 03:05:43 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:45:34.997 03:05:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:45:34.997 03:05:43 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:45:35.256 03:05:43 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:45:35.256 03:05:43 keyring_file -- keyring/file.sh@1 -- # cleanup 00:45:35.256 03:05:43 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.QsLIP4BsFY /tmp/tmp.Y16BVkTqwp 00:45:35.256 03:05:43 keyring_file -- keyring/file.sh@20 -- # killprocess 3228262 00:45:35.256 03:05:43 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3228262 ']' 00:45:35.256 03:05:43 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3228262 00:45:35.256 03:05:43 keyring_file -- common/autotest_common.sh@959 -- # uname 00:45:35.256 03:05:43 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:45:35.256 03:05:43 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3228262 00:45:35.256 03:05:43 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:45:35.256 03:05:43 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:45:35.256 03:05:43 keyring_file -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 3228262' 00:45:35.256 killing process with pid 3228262 00:45:35.256 03:05:43 keyring_file -- common/autotest_common.sh@973 -- # kill 3228262 00:45:35.256 Received shutdown signal, test time was about 1.000000 seconds 00:45:35.256 00:45:35.256 Latency(us) 00:45:35.256 [2024-11-17T02:05:43.716Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:35.256 [2024-11-17T02:05:43.716Z] =================================================================================================================== 00:45:35.256 [2024-11-17T02:05:43.716Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:45:35.256 03:05:43 keyring_file -- common/autotest_common.sh@978 -- # wait 3228262 00:45:36.190 03:05:44 keyring_file -- keyring/file.sh@21 -- # killprocess 3226512 00:45:36.190 03:05:44 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3226512 ']' 00:45:36.190 03:05:44 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3226512 00:45:36.190 03:05:44 keyring_file -- common/autotest_common.sh@959 -- # uname 00:45:36.190 03:05:44 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:45:36.190 03:05:44 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3226512 00:45:36.190 03:05:44 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:45:36.190 03:05:44 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:45:36.190 03:05:44 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3226512' 00:45:36.190 killing process with pid 3226512 00:45:36.190 03:05:44 keyring_file -- common/autotest_common.sh@973 -- # kill 3226512 00:45:36.190 03:05:44 keyring_file -- common/autotest_common.sh@978 -- # wait 3226512 00:45:38.721 00:45:38.721 real 0m20.235s 00:45:38.721 user 0m45.859s 00:45:38.721 sys 0m3.685s 00:45:38.721 03:05:47 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:45:38.721 03:05:47 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:45:38.721 ************************************ 00:45:38.721 END TEST keyring_file 00:45:38.721 ************************************ 00:45:38.721 03:05:47 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:45:38.721 03:05:47 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:45:38.721 03:05:47 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:45:38.721 03:05:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:45:38.721 03:05:47 -- common/autotest_common.sh@10 -- # set +x 00:45:38.721 ************************************ 00:45:38.721 START TEST keyring_linux 00:45:38.721 ************************************ 00:45:38.721 03:05:47 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:45:38.721 Joined session keyring: 330277976 00:45:38.721 * Looking for test storage... 
00:45:38.721 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:45:38.721 03:05:47 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:45:38.721 03:05:47 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:45:38.721 03:05:47 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:45:38.981 03:05:47 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:45:38.981 03:05:47 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:45:38.981 03:05:47 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:45:38.981 03:05:47 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:45:38.981 03:05:47 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:45:38.981 03:05:47 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:45:38.981 03:05:47 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:45:38.981 03:05:47 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:45:38.981 03:05:47 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:45:38.981 03:05:47 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:45:38.981 03:05:47 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:45:38.981 03:05:47 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:45:38.981 03:05:47 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:45:38.981 03:05:47 keyring_linux -- scripts/common.sh@345 -- # : 1 00:45:38.981 03:05:47 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:45:38.981 03:05:47 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:45:38.981 03:05:47 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:45:38.981 03:05:47 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:45:38.981 03:05:47 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:45:38.981 03:05:47 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:45:38.981 03:05:47 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:45:38.981 03:05:47 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:45:38.981 03:05:47 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:45:38.981 03:05:47 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:45:38.982 03:05:47 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:45:38.982 03:05:47 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:45:38.982 03:05:47 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:45:38.982 03:05:47 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:45:38.982 03:05:47 keyring_linux -- scripts/common.sh@368 -- # return 0 00:45:38.982 03:05:47 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:45:38.982 03:05:47 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:45:38.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:38.982 --rc genhtml_branch_coverage=1 00:45:38.982 --rc genhtml_function_coverage=1 00:45:38.982 --rc genhtml_legend=1 00:45:38.982 --rc geninfo_all_blocks=1 00:45:38.982 --rc geninfo_unexecuted_blocks=1 00:45:38.982 00:45:38.982 ' 00:45:38.982 03:05:47 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:45:38.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:38.982 --rc genhtml_branch_coverage=1 00:45:38.982 --rc genhtml_function_coverage=1 00:45:38.982 --rc genhtml_legend=1 00:45:38.982 --rc geninfo_all_blocks=1 00:45:38.982 --rc geninfo_unexecuted_blocks=1 00:45:38.982 00:45:38.982 ' 
00:45:38.982 03:05:47 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:45:38.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:38.982 --rc genhtml_branch_coverage=1 00:45:38.982 --rc genhtml_function_coverage=1 00:45:38.982 --rc genhtml_legend=1 00:45:38.982 --rc geninfo_all_blocks=1 00:45:38.982 --rc geninfo_unexecuted_blocks=1 00:45:38.982 00:45:38.982 ' 00:45:38.982 03:05:47 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:45:38.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:38.982 --rc genhtml_branch_coverage=1 00:45:38.982 --rc genhtml_function_coverage=1 00:45:38.982 --rc genhtml_legend=1 00:45:38.982 --rc geninfo_all_blocks=1 00:45:38.982 --rc geninfo_unexecuted_blocks=1 00:45:38.982 00:45:38.982 ' 00:45:38.982 03:05:47 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:45:38.982 03:05:47 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:45:38.982 03:05:47 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:45:38.982 03:05:47 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:38.982 03:05:47 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:38.982 03:05:47 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:38.982 03:05:47 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:45:38.982 03:05:47 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:45:38.982 03:05:47 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:38.982 03:05:47 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:38.982 03:05:47 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:38.982 03:05:47 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:38.982 03:05:47 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:45:38.982 03:05:47 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:45:38.982 03:05:47 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:45:38.982 03:05:47 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:38.982 03:05:47 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:38.982 03:05:47 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:45:38.982 03:05:47 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:45:38.982 03:05:47 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:45:38.982 03:05:47 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:45:38.982 03:05:47 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:38.982 03:05:47 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:38.982 03:05:47 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:38.982 03:05:47 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:38.982 03:05:47 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:38.982 03:05:47 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:38.982 03:05:47 keyring_linux -- paths/export.sh@5 -- # export PATH 00:45:38.982 03:05:47 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:38.982 03:05:47 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:45:38.982 03:05:47 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:45:38.982 03:05:47 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:45:38.982 03:05:47 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:45:38.982 03:05:47 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:38.982 03:05:47 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:45:38.982 03:05:47 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:45:38.982 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:45:38.982 03:05:47 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:45:38.982 03:05:47 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:45:38.982 03:05:47 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:45:38.982 03:05:47 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:45:38.982 03:05:47 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:45:38.982 03:05:47 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:45:38.982 03:05:47 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:45:38.982 03:05:47 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:45:38.982 03:05:47 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:45:38.982 03:05:47 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:45:38.982 03:05:47 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:45:38.982 03:05:47 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:45:38.982 03:05:47 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:45:38.982 03:05:47 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:45:38.982 03:05:47 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:45:38.982 03:05:47 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:45:38.982 03:05:47 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:45:38.982 03:05:47 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:45:38.982 03:05:47 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:45:38.982 03:05:47 keyring_linux -- nvmf/common.sh@732 -- # 
key=00112233445566778899aabbccddeeff 00:45:38.982 03:05:47 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:45:38.982 03:05:47 keyring_linux -- nvmf/common.sh@733 -- # python - 00:45:38.982 03:05:47 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:45:38.982 03:05:47 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:45:38.982 /tmp/:spdk-test:key0 00:45:38.982 03:05:47 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:45:38.982 03:05:47 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:45:38.982 03:05:47 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:45:38.982 03:05:47 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:45:38.982 03:05:47 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:45:38.982 03:05:47 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:45:38.982 03:05:47 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:45:38.982 03:05:47 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:45:38.982 03:05:47 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:45:38.982 03:05:47 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:45:38.982 03:05:47 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:45:38.982 03:05:47 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:45:38.982 03:05:47 keyring_linux -- nvmf/common.sh@733 -- # python - 00:45:38.982 03:05:47 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:45:38.982 03:05:47 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:45:38.982 /tmp/:spdk-test:key1 00:45:38.982 03:05:47 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3229104 00:45:38.982 03:05:47 keyring_linux -- keyring/linux.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:45:38.982 03:05:47 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 3229104 00:45:38.982 03:05:47 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 3229104 ']' 00:45:38.982 03:05:47 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:38.982 03:05:47 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:45:38.982 03:05:47 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:38.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:38.982 03:05:47 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:45:38.982 03:05:47 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:45:38.982 [2024-11-17 03:05:47.429691] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:45:38.983 [2024-11-17 03:05:47.429826] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3229104 ] 00:45:39.241 [2024-11-17 03:05:47.576593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:39.500 [2024-11-17 03:05:47.714476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:45:40.436 03:05:48 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:45:40.436 03:05:48 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:45:40.436 03:05:48 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:45:40.436 03:05:48 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:40.436 03:05:48 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:45:40.436 [2024-11-17 03:05:48.669114] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:45:40.436 null0 00:45:40.436 [2024-11-17 03:05:48.701153] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:45:40.436 [2024-11-17 03:05:48.701761] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:45:40.436 03:05:48 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:40.436 03:05:48 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:45:40.436 734155356 00:45:40.436 03:05:48 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:45:40.436 192066295 00:45:40.436 03:05:48 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3229287 00:45:40.436 03:05:48 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w 
randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:45:40.436 03:05:48 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3229287 /var/tmp/bperf.sock 00:45:40.436 03:05:48 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 3229287 ']' 00:45:40.436 03:05:48 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:45:40.436 03:05:48 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:45:40.436 03:05:48 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:45:40.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:45:40.436 03:05:48 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:45:40.436 03:05:48 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:45:40.436 [2024-11-17 03:05:48.808360] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:45:40.436 [2024-11-17 03:05:48.808528] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3229287 ] 00:45:40.695 [2024-11-17 03:05:48.955246] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:40.695 [2024-11-17 03:05:49.090541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:45:41.626 03:05:49 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:45:41.626 03:05:49 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:45:41.626 03:05:49 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:45:41.626 03:05:49 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:45:41.626 03:05:50 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:45:41.626 03:05:50 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:45:42.560 03:05:50 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:45:42.560 03:05:50 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:45:42.560 [2024-11-17 03:05:50.928370] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:45:42.560 nvme0n1 00:45:42.818 03:05:51 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:45:42.818 03:05:51 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:45:42.818 03:05:51 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:45:42.818 03:05:51 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:45:42.818 03:05:51 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:42.818 03:05:51 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:45:43.076 03:05:51 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:45:43.076 03:05:51 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:45:43.076 03:05:51 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:45:43.076 03:05:51 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:45:43.076 03:05:51 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:43.076 03:05:51 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:43.076 03:05:51 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:45:43.335 03:05:51 keyring_linux -- keyring/linux.sh@25 -- # sn=734155356 00:45:43.335 03:05:51 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:45:43.335 03:05:51 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:45:43.335 03:05:51 keyring_linux -- keyring/linux.sh@26 -- # [[ 734155356 == \7\3\4\1\5\5\3\5\6 ]] 00:45:43.335 03:05:51 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 734155356 00:45:43.335 03:05:51 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:45:43.335 03:05:51 keyring_linux 
-- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:45:43.335 Running I/O for 1 seconds... 00:45:44.270 7203.00 IOPS, 28.14 MiB/s 00:45:44.270 Latency(us) 00:45:44.270 [2024-11-17T02:05:52.730Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:44.270 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:45:44.270 nvme0n1 : 1.02 7222.89 28.21 0.00 0.00 17573.28 11699.39 30680.56 00:45:44.270 [2024-11-17T02:05:52.730Z] =================================================================================================================== 00:45:44.270 [2024-11-17T02:05:52.730Z] Total : 7222.89 28.21 0.00 0.00 17573.28 11699.39 30680.56 00:45:44.270 { 00:45:44.270 "results": [ 00:45:44.270 { 00:45:44.270 "job": "nvme0n1", 00:45:44.270 "core_mask": "0x2", 00:45:44.270 "workload": "randread", 00:45:44.270 "status": "finished", 00:45:44.270 "queue_depth": 128, 00:45:44.270 "io_size": 4096, 00:45:44.270 "runtime": 1.015106, 00:45:44.270 "iops": 7222.891008426706, 00:45:44.270 "mibps": 28.21441800166682, 00:45:44.270 "io_failed": 0, 00:45:44.270 "io_timeout": 0, 00:45:44.270 "avg_latency_us": 17573.280895516356, 00:45:44.270 "min_latency_us": 11699.38962962963, 00:45:44.270 "max_latency_us": 30680.557037037037 00:45:44.270 } 00:45:44.270 ], 00:45:44.270 "core_count": 1 00:45:44.270 } 00:45:44.270 03:05:52 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:45:44.270 03:05:52 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:45:44.837 03:05:52 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:45:44.837 03:05:52 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:45:44.837 03:05:52 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:45:44.837 03:05:52 
keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:45:44.837 03:05:52 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:45:44.837 03:05:52 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:44.837 03:05:53 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:45:44.837 03:05:53 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:45:44.837 03:05:53 keyring_linux -- keyring/linux.sh@23 -- # return 00:45:44.837 03:05:53 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:45:44.837 03:05:53 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:45:44.837 03:05:53 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:45:44.837 03:05:53 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:45:44.837 03:05:53 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:44.837 03:05:53 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:45:44.837 03:05:53 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:44.838 03:05:53 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:45:44.838 03:05:53 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 
-q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:45:45.096 [2024-11-17 03:05:53.523983] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:45:45.096 [2024-11-17 03:05:53.524169] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7780 (107): Transport endpoint is not connected 00:45:45.096 [2024-11-17 03:05:53.525138] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7780 (9): Bad file descriptor 00:45:45.096 [2024-11-17 03:05:53.526117] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:45:45.096 [2024-11-17 03:05:53.526170] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:45:45.096 [2024-11-17 03:05:53.526193] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:45:45.096 [2024-11-17 03:05:53.526216] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:45:45.096 request: 00:45:45.096 { 00:45:45.096 "name": "nvme0", 00:45:45.096 "trtype": "tcp", 00:45:45.096 "traddr": "127.0.0.1", 00:45:45.096 "adrfam": "ipv4", 00:45:45.096 "trsvcid": "4420", 00:45:45.096 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:45.096 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:45.096 "prchk_reftag": false, 00:45:45.096 "prchk_guard": false, 00:45:45.096 "hdgst": false, 00:45:45.096 "ddgst": false, 00:45:45.096 "psk": ":spdk-test:key1", 00:45:45.096 "allow_unrecognized_csi": false, 00:45:45.096 "method": "bdev_nvme_attach_controller", 00:45:45.096 "req_id": 1 00:45:45.096 } 00:45:45.096 Got JSON-RPC error response 00:45:45.096 response: 00:45:45.096 { 00:45:45.096 "code": -5, 00:45:45.096 "message": "Input/output error" 00:45:45.096 } 00:45:45.096 03:05:53 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:45:45.096 03:05:53 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:45:45.096 03:05:53 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:45:45.096 03:05:53 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:45:45.096 03:05:53 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:45:45.096 03:05:53 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:45:45.096 03:05:53 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:45:45.096 03:05:53 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:45:45.096 03:05:53 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:45:45.096 03:05:53 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:45:45.096 03:05:53 keyring_linux -- keyring/linux.sh@33 -- # sn=734155356 00:45:45.096 03:05:53 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 734155356 00:45:45.096 1 links removed 00:45:45.096 03:05:53 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:45:45.096 03:05:53 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:45:45.096 
03:05:53 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:45:45.096 03:05:53 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:45:45.096 03:05:53 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:45:45.355 03:05:53 keyring_linux -- keyring/linux.sh@33 -- # sn=192066295 00:45:45.356 03:05:53 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 192066295 00:45:45.356 1 links removed 00:45:45.356 03:05:53 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3229287 00:45:45.356 03:05:53 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 3229287 ']' 00:45:45.356 03:05:53 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 3229287 00:45:45.356 03:05:53 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:45:45.356 03:05:53 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:45:45.356 03:05:53 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3229287 00:45:45.356 03:05:53 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:45:45.356 03:05:53 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:45:45.356 03:05:53 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3229287' 00:45:45.356 killing process with pid 3229287 00:45:45.356 03:05:53 keyring_linux -- common/autotest_common.sh@973 -- # kill 3229287 00:45:45.356 Received shutdown signal, test time was about 1.000000 seconds 00:45:45.356 00:45:45.356 Latency(us) 00:45:45.356 [2024-11-17T02:05:53.816Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:45.356 [2024-11-17T02:05:53.816Z] =================================================================================================================== 00:45:45.356 [2024-11-17T02:05:53.816Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:45:45.356 03:05:53 keyring_linux -- common/autotest_common.sh@978 -- # wait 3229287 
00:45:46.291 03:05:54 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3229104 00:45:46.291 03:05:54 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 3229104 ']' 00:45:46.291 03:05:54 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 3229104 00:45:46.291 03:05:54 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:45:46.291 03:05:54 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:45:46.291 03:05:54 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3229104 00:45:46.291 03:05:54 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:45:46.291 03:05:54 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:45:46.291 03:05:54 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3229104' 00:45:46.291 killing process with pid 3229104 00:45:46.291 03:05:54 keyring_linux -- common/autotest_common.sh@973 -- # kill 3229104 00:45:46.291 03:05:54 keyring_linux -- common/autotest_common.sh@978 -- # wait 3229104 00:45:48.823 00:45:48.823 real 0m9.869s 00:45:48.823 user 0m16.958s 00:45:48.823 sys 0m1.960s 00:45:48.823 03:05:56 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:48.823 03:05:56 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:45:48.823 ************************************ 00:45:48.823 END TEST keyring_linux 00:45:48.823 ************************************ 00:45:48.823 03:05:56 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:45:48.823 03:05:56 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:45:48.823 03:05:56 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:45:48.823 03:05:56 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:45:48.823 03:05:56 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:45:48.823 03:05:56 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:45:48.823 03:05:56 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:45:48.823 03:05:56 -- spdk/autotest.sh@346 -- # 
'[' 0 -eq 1 ']' 00:45:48.823 03:05:56 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:45:48.823 03:05:56 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:45:48.823 03:05:56 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:45:48.823 03:05:56 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:45:48.823 03:05:56 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:45:48.823 03:05:56 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:45:48.823 03:05:56 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:45:48.823 03:05:56 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:45:48.823 03:05:56 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:45:48.823 03:05:56 -- common/autotest_common.sh@726 -- # xtrace_disable 00:45:48.823 03:05:56 -- common/autotest_common.sh@10 -- # set +x 00:45:48.823 03:05:56 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:45:48.823 03:05:56 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:45:48.823 03:05:56 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:45:48.823 03:05:56 -- common/autotest_common.sh@10 -- # set +x 00:45:50.725 INFO: APP EXITING 00:45:50.725 INFO: killing all VMs 00:45:50.725 INFO: killing vhost app 00:45:50.725 INFO: EXIT DONE 00:45:51.667 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:45:51.667 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:45:51.667 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:45:51.667 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:45:51.667 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:45:51.667 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:45:51.667 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:45:51.667 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:45:51.667 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:45:51.667 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:45:51.667 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:45:51.667 
0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:45:51.667 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:45:51.667 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:45:51.667 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:45:51.667 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:45:51.667 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:45:53.113 Cleaning 00:45:53.113 Removing: /var/run/dpdk/spdk0/config 00:45:53.113 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:45:53.113 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:45:53.113 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:45:53.113 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:45:53.113 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:45:53.113 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:45:53.113 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:45:53.113 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:45:53.113 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:45:53.113 Removing: /var/run/dpdk/spdk0/hugepage_info 00:45:53.113 Removing: /var/run/dpdk/spdk1/config 00:45:53.113 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:45:53.113 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:45:53.113 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:45:53.113 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:45:53.113 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:45:53.113 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:45:53.113 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:45:53.113 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:45:53.113 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:45:53.113 Removing: /var/run/dpdk/spdk1/hugepage_info 00:45:53.113 Removing: /var/run/dpdk/spdk2/config 00:45:53.113 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:45:53.113 
Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:45:53.113 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:45:53.113 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:45:53.113 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:45:53.113 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:45:53.113 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:45:53.113 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:45:53.113 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:45:53.113 Removing: /var/run/dpdk/spdk2/hugepage_info 00:45:53.113 Removing: /var/run/dpdk/spdk3/config 00:45:53.113 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:45:53.113 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:45:53.113 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:45:53.113 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:45:53.113 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:45:53.113 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:45:53.113 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:45:53.113 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:45:53.113 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:45:53.113 Removing: /var/run/dpdk/spdk3/hugepage_info 00:45:53.113 Removing: /var/run/dpdk/spdk4/config 00:45:53.113 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:45:53.113 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:45:53.113 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:45:53.113 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:45:53.113 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:45:53.113 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:45:53.113 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:45:53.113 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:45:53.113 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:45:53.113 Removing: /var/run/dpdk/spdk4/hugepage_info 
00:45:53.113 Removing: /dev/shm/bdev_svc_trace.1 00:45:53.113 Removing: /dev/shm/nvmf_trace.0 00:45:53.113 Removing: /dev/shm/spdk_tgt_trace.pid2816123 00:45:53.113 Removing: /var/run/dpdk/spdk0 00:45:53.113 Removing: /var/run/dpdk/spdk1 00:45:53.113 Removing: /var/run/dpdk/spdk2 00:45:53.113 Removing: /var/run/dpdk/spdk3 00:45:53.113 Removing: /var/run/dpdk/spdk4 00:45:53.113 Removing: /var/run/dpdk/spdk_pid2813228 00:45:53.113 Removing: /var/run/dpdk/spdk_pid2814359 00:45:53.113 Removing: /var/run/dpdk/spdk_pid2816123 00:45:53.113 Removing: /var/run/dpdk/spdk_pid2816866 00:45:53.113 Removing: /var/run/dpdk/spdk_pid2817810 00:45:53.113 Removing: /var/run/dpdk/spdk_pid2818233 00:45:53.113 Removing: /var/run/dpdk/spdk_pid2819215 00:45:53.113 Removing: /var/run/dpdk/spdk_pid2819356 00:45:53.113 Removing: /var/run/dpdk/spdk_pid2819915 00:45:53.113 Removing: /var/run/dpdk/spdk_pid2821470 00:45:53.113 Removing: /var/run/dpdk/spdk_pid2822537 00:45:53.113 Removing: /var/run/dpdk/spdk_pid2823252 00:45:53.113 Removing: /var/run/dpdk/spdk_pid2823742 00:45:53.113 Removing: /var/run/dpdk/spdk_pid2824334 00:45:53.113 Removing: /var/run/dpdk/spdk_pid2824928 00:45:53.113 Removing: /var/run/dpdk/spdk_pid2825086 00:45:53.113 Removing: /var/run/dpdk/spdk_pid2825372 00:45:53.113 Removing: /var/run/dpdk/spdk_pid2825569 00:45:53.113 Removing: /var/run/dpdk/spdk_pid2826019 00:45:53.113 Removing: /var/run/dpdk/spdk_pid2828775 00:45:53.113 Removing: /var/run/dpdk/spdk_pid2829332 00:45:53.113 Removing: /var/run/dpdk/spdk_pid2829772 00:45:53.113 Removing: /var/run/dpdk/spdk_pid2829924 00:45:53.113 Removing: /var/run/dpdk/spdk_pid2831263 00:45:53.113 Removing: /var/run/dpdk/spdk_pid2831406 00:45:53.113 Removing: /var/run/dpdk/spdk_pid2832644 00:45:53.113 Removing: /var/run/dpdk/spdk_pid2832900 00:45:53.113 Removing: /var/run/dpdk/spdk_pid2833337 00:45:53.113 Removing: /var/run/dpdk/spdk_pid2833475 00:45:53.113 Removing: /var/run/dpdk/spdk_pid2833898 00:45:53.113 Removing: 
/var/run/dpdk/spdk_pid2834047 00:45:53.113 Removing: /var/run/dpdk/spdk_pid2835200 00:45:53.113 Removing: /var/run/dpdk/spdk_pid2835413 00:45:53.113 Removing: /var/run/dpdk/spdk_pid2835691 00:45:53.113 Removing: /var/run/dpdk/spdk_pid2838833 00:45:53.113 Removing: /var/run/dpdk/spdk_pid2841616 00:45:53.114 Removing: /var/run/dpdk/spdk_pid2848883 00:45:53.114 Removing: /var/run/dpdk/spdk_pid2849409 00:45:53.114 Removing: /var/run/dpdk/spdk_pid2852073 00:45:53.114 Removing: /var/run/dpdk/spdk_pid2852352 00:45:53.114 Removing: /var/run/dpdk/spdk_pid2855278 00:45:53.114 Removing: /var/run/dpdk/spdk_pid2859264 00:45:53.114 Removing: /var/run/dpdk/spdk_pid2861714 00:45:53.114 Removing: /var/run/dpdk/spdk_pid2869060 00:45:53.114 Removing: /var/run/dpdk/spdk_pid2875212 00:45:53.114 Removing: /var/run/dpdk/spdk_pid2876662 00:45:53.114 Removing: /var/run/dpdk/spdk_pid2877461 00:45:53.114 Removing: /var/run/dpdk/spdk_pid2888516 00:45:53.114 Removing: /var/run/dpdk/spdk_pid2891081 00:45:53.114 Removing: /var/run/dpdk/spdk_pid2949032 00:45:53.114 Removing: /var/run/dpdk/spdk_pid2952465 00:45:53.114 Removing: /var/run/dpdk/spdk_pid2956558 00:45:53.114 Removing: /var/run/dpdk/spdk_pid2962902 00:45:53.114 Removing: /var/run/dpdk/spdk_pid2992873 00:45:53.114 Removing: /var/run/dpdk/spdk_pid2996066 00:45:53.114 Removing: /var/run/dpdk/spdk_pid2997247 00:45:53.114 Removing: /var/run/dpdk/spdk_pid2998696 00:45:53.114 Removing: /var/run/dpdk/spdk_pid2998980 00:45:53.114 Removing: /var/run/dpdk/spdk_pid2999253 00:45:53.114 Removing: /var/run/dpdk/spdk_pid2999532 00:45:53.114 Removing: /var/run/dpdk/spdk_pid3000370 00:45:53.114 Removing: /var/run/dpdk/spdk_pid3001848 00:45:53.114 Removing: /var/run/dpdk/spdk_pid3003213 00:45:53.114 Removing: /var/run/dpdk/spdk_pid3003908 00:45:53.114 Removing: /var/run/dpdk/spdk_pid3005796 00:45:53.114 Removing: /var/run/dpdk/spdk_pid3006608 00:45:53.114 Removing: /var/run/dpdk/spdk_pid3007432 00:45:53.114 Removing: /var/run/dpdk/spdk_pid3010120 
00:45:53.114 Removing: /var/run/dpdk/spdk_pid3014538 00:45:53.114 Removing: /var/run/dpdk/spdk_pid3014539 00:45:53.114 Removing: /var/run/dpdk/spdk_pid3014540 00:45:53.114 Removing: /var/run/dpdk/spdk_pid3016905 00:45:53.114 Removing: /var/run/dpdk/spdk_pid3019362 00:45:53.114 Removing: /var/run/dpdk/spdk_pid3022894 00:45:53.114 Removing: /var/run/dpdk/spdk_pid3047023 00:45:53.114 Removing: /var/run/dpdk/spdk_pid3050049 00:45:53.114 Removing: /var/run/dpdk/spdk_pid3054088 00:45:53.114 Removing: /var/run/dpdk/spdk_pid3055558 00:45:53.114 Removing: /var/run/dpdk/spdk_pid3057201 00:45:53.114 Removing: /var/run/dpdk/spdk_pid3058776 00:45:53.114 Removing: /var/run/dpdk/spdk_pid3061840 00:45:53.114 Removing: /var/run/dpdk/spdk_pid3064949 00:45:53.114 Removing: /var/run/dpdk/spdk_pid3067587 00:45:53.114 Removing: /var/run/dpdk/spdk_pid3072312 00:45:53.114 Removing: /var/run/dpdk/spdk_pid3072353 00:45:53.114 Removing: /var/run/dpdk/spdk_pid3075509 00:45:53.114 Removing: /var/run/dpdk/spdk_pid3075710 00:45:53.114 Removing: /var/run/dpdk/spdk_pid3076105 00:45:53.114 Removing: /var/run/dpdk/spdk_pid3076675 00:45:53.114 Removing: /var/run/dpdk/spdk_pid3076726 00:45:53.114 Removing: /var/run/dpdk/spdk_pid3077876 00:45:53.114 Removing: /var/run/dpdk/spdk_pid3079179 00:45:53.114 Removing: /var/run/dpdk/spdk_pid3080358 00:45:53.114 Removing: /var/run/dpdk/spdk_pid3081535 00:45:53.114 Removing: /var/run/dpdk/spdk_pid3082716 00:45:53.114 Removing: /var/run/dpdk/spdk_pid3084011 00:45:53.114 Removing: /var/run/dpdk/spdk_pid3087953 00:45:53.114 Removing: /var/run/dpdk/spdk_pid3088410 00:45:53.114 Removing: /var/run/dpdk/spdk_pid3089811 00:45:53.114 Removing: /var/run/dpdk/spdk_pid3090667 00:45:53.114 Removing: /var/run/dpdk/spdk_pid3094664 00:45:53.114 Removing: /var/run/dpdk/spdk_pid3096763 00:45:53.114 Removing: /var/run/dpdk/spdk_pid3100573 00:45:53.114 Removing: /var/run/dpdk/spdk_pid3104784 00:45:53.114 Removing: /var/run/dpdk/spdk_pid3111661 00:45:53.114 Removing: 
/var/run/dpdk/spdk_pid3116405 00:45:53.114 Removing: /var/run/dpdk/spdk_pid3116413 00:45:53.114 Removing: /var/run/dpdk/spdk_pid3129431 00:45:53.114 Removing: /var/run/dpdk/spdk_pid3130102 00:45:53.114 Removing: /var/run/dpdk/spdk_pid3130763 00:45:53.114 Removing: /var/run/dpdk/spdk_pid3131370 00:45:53.114 Removing: /var/run/dpdk/spdk_pid3132405 00:45:53.114 Removing: /var/run/dpdk/spdk_pid3132949 00:45:53.114 Removing: /var/run/dpdk/spdk_pid3133670 00:45:53.114 Removing: /var/run/dpdk/spdk_pid3134273 00:45:53.114 Removing: /var/run/dpdk/spdk_pid3137674 00:45:53.114 Removing: /var/run/dpdk/spdk_pid3137945 00:45:53.114 Removing: /var/run/dpdk/spdk_pid3141994 00:45:53.114 Removing: /var/run/dpdk/spdk_pid3142188 00:45:53.114 Removing: /var/run/dpdk/spdk_pid3145802 00:45:53.114 Removing: /var/run/dpdk/spdk_pid3148560 00:45:53.114 Removing: /var/run/dpdk/spdk_pid3155703 00:45:53.114 Removing: /var/run/dpdk/spdk_pid3156116 00:45:53.114 Removing: /var/run/dpdk/spdk_pid3158763 00:45:53.114 Removing: /var/run/dpdk/spdk_pid3159040 00:45:53.114 Removing: /var/run/dpdk/spdk_pid3161930 00:45:53.114 Removing: /var/run/dpdk/spdk_pid3165898 00:45:53.114 Removing: /var/run/dpdk/spdk_pid3168805 00:45:53.114 Removing: /var/run/dpdk/spdk_pid3175850 00:45:53.114 Removing: /var/run/dpdk/spdk_pid3181442 00:45:53.114 Removing: /var/run/dpdk/spdk_pid3182860 00:45:53.114 Removing: /var/run/dpdk/spdk_pid3183591 00:45:53.114 Removing: /var/run/dpdk/spdk_pid3194624 00:45:53.114 Removing: /var/run/dpdk/spdk_pid3197149 00:45:53.114 Removing: /var/run/dpdk/spdk_pid3199286 00:45:53.114 Removing: /var/run/dpdk/spdk_pid3205343 00:45:53.114 Removing: /var/run/dpdk/spdk_pid3205466 00:45:53.114 Removing: /var/run/dpdk/spdk_pid3208501 00:45:53.114 Removing: /var/run/dpdk/spdk_pid3210017 00:45:53.114 Removing: /var/run/dpdk/spdk_pid3211537 00:45:53.114 Removing: /var/run/dpdk/spdk_pid3212525 00:45:53.114 Removing: /var/run/dpdk/spdk_pid3214054 00:45:53.114 Removing: /var/run/dpdk/spdk_pid3215043 
00:45:53.114 Removing: /var/run/dpdk/spdk_pid3220718 00:45:53.114 Removing: /var/run/dpdk/spdk_pid3221116 00:45:53.373 Removing: /var/run/dpdk/spdk_pid3221505 00:45:53.373 Removing: /var/run/dpdk/spdk_pid3223288 00:45:53.373 Removing: /var/run/dpdk/spdk_pid3223670 00:45:53.373 Removing: /var/run/dpdk/spdk_pid3224068 00:45:53.373 Removing: /var/run/dpdk/spdk_pid3226512 00:45:53.373 Removing: /var/run/dpdk/spdk_pid3226653 00:45:53.373 Removing: /var/run/dpdk/spdk_pid3228262 00:45:53.373 Removing: /var/run/dpdk/spdk_pid3229104 00:45:53.373 Removing: /var/run/dpdk/spdk_pid3229287 00:45:53.373 Clean 00:45:53.373 03:06:01 -- common/autotest_common.sh@1453 -- # return 0 00:45:53.373 03:06:01 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:45:53.373 03:06:01 -- common/autotest_common.sh@732 -- # xtrace_disable 00:45:53.373 03:06:01 -- common/autotest_common.sh@10 -- # set +x 00:45:53.373 03:06:01 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:45:53.373 03:06:01 -- common/autotest_common.sh@732 -- # xtrace_disable 00:45:53.373 03:06:01 -- common/autotest_common.sh@10 -- # set +x 00:45:53.373 03:06:01 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:45:53.373 03:06:01 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:45:53.373 03:06:01 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:45:53.373 03:06:01 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:45:53.373 03:06:01 -- spdk/autotest.sh@398 -- # hostname 00:45:53.373 03:06:01 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:45:53.631 geninfo: WARNING: invalid characters removed from testname! 00:46:25.706 03:06:31 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:46:27.613 03:06:35 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:46:30.150 03:06:38 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:46:33.439 03:06:41 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:46:35.978 03:06:44 -- 
spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:46:39.268 03:06:47 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:46:41.805 03:06:49 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:46:41.805 03:06:49 -- spdk/autorun.sh@1 -- $ timing_finish 00:46:41.805 03:06:49 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:46:41.805 03:06:49 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:46:41.805 03:06:49 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:46:41.805 03:06:49 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:46:41.805 + [[ -n 2742129 ]] 00:46:41.805 + sudo kill 2742129 00:46:41.815 [Pipeline] } 00:46:41.832 [Pipeline] // stage 00:46:41.837 [Pipeline] } 00:46:41.851 [Pipeline] // timeout 00:46:41.856 [Pipeline] } 00:46:41.870 [Pipeline] // catchError 00:46:41.875 [Pipeline] } 00:46:41.890 [Pipeline] // wrap 00:46:41.896 [Pipeline] } 00:46:41.908 [Pipeline] // catchError 00:46:41.916 [Pipeline] stage 
00:46:41.919 [Pipeline] { (Epilogue) 00:46:41.932 [Pipeline] catchError 00:46:41.933 [Pipeline] { 00:46:41.946 [Pipeline] echo 00:46:41.948 Cleanup processes 00:46:41.954 [Pipeline] sh 00:46:42.239 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:46:42.239 3243446 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:46:42.253 [Pipeline] sh 00:46:42.536 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:46:42.536 ++ grep -v 'sudo pgrep' 00:46:42.536 ++ awk '{print $1}' 00:46:42.536 + sudo kill -9 00:46:42.536 + true 00:46:42.548 [Pipeline] sh 00:46:42.830 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:46:55.054 [Pipeline] sh 00:46:55.339 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:46:55.339 Artifacts sizes are good 00:46:55.354 [Pipeline] archiveArtifacts 00:46:55.361 Archiving artifacts 00:46:55.539 [Pipeline] sh 00:46:55.864 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:46:55.878 [Pipeline] cleanWs 00:46:55.888 [WS-CLEANUP] Deleting project workspace... 00:46:55.888 [WS-CLEANUP] Deferred wipeout is used... 00:46:55.894 [WS-CLEANUP] done 00:46:55.896 [Pipeline] } 00:46:55.913 [Pipeline] // catchError 00:46:55.925 [Pipeline] sh 00:46:56.206 + logger -p user.info -t JENKINS-CI 00:46:56.214 [Pipeline] } 00:46:56.227 [Pipeline] // stage 00:46:56.232 [Pipeline] } 00:46:56.246 [Pipeline] // node 00:46:56.252 [Pipeline] End of Pipeline 00:46:56.298 Finished: SUCCESS
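The "Cleanup processes" step in the epilogue above is a pgrep-based sweep: list every process whose command line mentions the workspace, drop the `sudo pgrep` invocation itself with `grep -v`, keep only the PID column with `awk`, and `kill -9` the rest, with `+ true` swallowing the error when nothing is left. A hedged, self-contained sketch of the same pattern (the marker string and the leftover process are stand-ins for the real workspace path):

```shell
# Sweep pattern from the Jenkins epilogue, using a hypothetical marker
# instead of /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk:
#   1. pgrep -af prints "PID full-command-line" for each match
#   2. grep -v drops the pgrep invocation itself from the listing
#   3. awk keeps only the PID column
#   4. kill -9 each PID; "|| true" tolerates already-dead processes
marker="spdk-sweep-demo-$$"
sh -c 'sleep 30; :' "$marker" &     # leftover process; $0 carries the marker
leftover=$!

pids=$(pgrep -af "$marker" | grep -v pgrep | awk '{print $1}')
for p in $pids; do
    [ "$p" = "$$" ] && continue     # never sweep up this script itself
    kill -9 "$p" 2>/dev/null || true
done
wait "$leftover" 2>/dev/null || true   # reap the SIGKILLed child
```

The two-command body `'sleep 30; :'` keeps the shell from exec-replacing itself, so the marker stays visible in the process's command line for `pgrep -f` to match.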